
How-To Tutorials - Front-End Web Development

341 Articles

CSS3 Animation

Packt
18 Nov 2013
7 min read
(For more resources related to this topic, see here.)

The websites we see today are complex and complicated; by this we mean the development of these websites, not the webpage itself. We see animations and rich features. Prior to HTML5 and CSS3, JavaScript was used extensively for this purpose, and HTML was incorrectly used for styling when it was meant to define the structural markup of the page. With the advent of CSS, good practice is to use HTML for markup and CSS for styling. CSS3 brings transforms, transitions, and animation features that make it easier to develop impressive effects. A transition can express the change from one state to another, but when multiple states are involved, animation is the solution. Let's discuss the various properties of CSS3 animations and then incorporate them in code to understand them better.

@keyframes

The points at which changes should take place can be defined using the @keyframes rule. For now, we need to add a vendor prefix to @keyframes because it is still in development; once it is accepted as a standard, the prefix will no longer be needed. We can use percentages, or the from and to keywords, to express the change from one CSS state to another.

animation-name

This property applies an animation to an element by referencing the name defined in the @keyframes rule. It cannot stand alone and has to be used in conjunction with other animation properties.

animation-duration

This property defines the duration of the animation. If we set animation-duration to 5 seconds, the changes between the defined CSS states must complete within 5 seconds.

animation-delay

Similar to the delay property of transitions, this property delays the start of the animation by the specified time.

animation-timing-function

This property decides the speed curve of the animation. It behaves the same way as the transition timing function we have seen earlier.

animation-iteration-count

This property sets the number of iterations carried out by the animation. Setting it to infinite means the animation never stops.

animation-direction

This property decides the direction of the animation. Values such as reverse and alternate define the direction in which the element is animated.

animation-play-state

This property determines whether the animation is running or paused.

Now that we have looked at these properties, let's incorporate some of them in code to understand the functionality better. A short syntax sketch follows, and then the chapter's full example.
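First, a minimal sketch of the @keyframes syntax using the from and to keywords mentioned above. The selector, animation name, and property values here are illustrative only (vendor prefixes omitted for brevity), not part of the chapter's demo:

```css
/* A minimal two-state animation: fade an element in. */
@keyframes fade {
  from { opacity: 0; }
  to   { opacity: 1; }
}

.box {
  /* name | duration | timing-function | delay | iteration-count | direction */
  animation: fade 2s ease-in 0s 1 normal;
}
```

The full example below uses percentage keyframes instead, which allow more than two states.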
```html
<!DOCTYPE html>
<html>
<head>
<style>
body {
  background: #000;
  color: #fff;
}
#trigger {
  width: 100px;
  height: 100px;
  position: absolute;
  top: 50%;
  left: 50%;
  margin: -50px 0 0 -50px;
  background: black;
  border-radius: 50px;
  /* set the animation:
     [name] [duration] [timing-function] [delay] [iteration-count] [direction] */
  animation: glowness 5s linear 0s 5 alternate;
  -moz-animation: glowness 5s linear 0s 5 alternate;    /* Firefox */
  -webkit-animation: glowness 5s linear 0s 5 alternate; /* Safari and Chrome */
  -o-animation: glowness 5s linear 0s 5 alternate;      /* Opera */
  -ms-animation: glowness 5s linear 0s 5 alternate;     /* IE10 */
}
#trigger:hover {
  animation-play-state: paused;
  -moz-animation-play-state: paused;
  -webkit-animation-play-state: paused;
  -o-animation-play-state: paused;
  -ms-animation-play-state: paused;
}
/* animation keyframes */
@keyframes glowness {
  0%   { box-shadow: 0 0 80px orange; }
  25%  { box-shadow: 0 0 150px red; }
  50%  { box-shadow: 0 0 70px pink; }
  75%  { box-shadow: 0 0 50px violet; }
  100% { box-shadow: 0 0 100px yellow; }
}
@-moz-keyframes glowness { /* Firefox */
  0%   { box-shadow: 0 0 80px orange; }
  25%  { box-shadow: 0 0 150px red; }
  50%  { box-shadow: 0 0 70px pink; }
  75%  { box-shadow: 0 0 50px violet; }
  100% { box-shadow: 0 0 100px yellow; }
}
@-webkit-keyframes glowness { /* Safari and Chrome */
  0%   { box-shadow: 0 0 80px orange; }
  25%  { box-shadow: 0 0 150px red; }
  50%  { box-shadow: 0 0 70px pink; }
  75%  { box-shadow: 0 0 50px violet; }
  100% { box-shadow: 0 0 100px yellow; }
}
@-o-keyframes glowness { /* Opera */
  0%   { box-shadow: 0 0 80px orange; }
  25%  { box-shadow: 0 0 150px red; }
  50%  { box-shadow: 0 0 70px pink; }
  75%  { box-shadow: 0 0 50px violet; }
  100% { box-shadow: 0 0 100px yellow; }
}
@-ms-keyframes glowness { /* IE10 */
  0%   { box-shadow: 0 0 20px green; }
  25%  { box-shadow: 0 0 150px red; }
  50%  { box-shadow: 0 0 70px pink; }
  75%  { box-shadow: 0 0 50px violet; }
  100% { box-shadow: 0 0 100px yellow; }
}
</style>
<!-- jQuery is required by the event-handling code below; any 1.x build works -->
<script src="https://code.jquery.com/jquery-1.11.3.min.js"></script>
<script>
$(function() { // wait for the DOM so that #trigger and the status <p> exist
  // animation started (buggy on Firefox)
  $('#trigger').on('animationstart mozanimationstart webkitAnimationStart oAnimationStart msanimationstart', function() {
    $('p').html('animation started');
  });
  // animation paused
  $('#trigger').on('mouseover', function() {
    $('p').html('animation paused');
  });
  // animation re-started
  $('#trigger').on('mouseout', function() {
    $('p').html('animation re-started');
  });
  // animation ended
  $('#trigger').on('animationend mozanimationend webkitAnimationEnd oAnimationEnd msanimationend', function() {
    $('p').html('animation ended');
  });
  // iteration count
  var i = 0;
  $('#trigger').on('animationiteration mozanimationiteration webkitAnimationIteration oAnimationIteration msanimationiteration', function() {
    i++;
    $('p').html('animation iteration=' + i);
  });
});
</script>
</head>
<body>
<p></p> <!-- status messages are written here -->
<div id="trigger"></div>
</body>
</html>
```

On execution, the circle's glow pulses through the keyframe colors while the status text reports the animation events. We have used the -webkit- prefix in this example because we are executing the code in Google Chrome; please use the -moz- prefix for Firefox and the -o- prefix for Opera. Comments are added to the code so that it is easy to follow. Apart from HTML5 and CSS3, we have used a bit of jQuery. Let's go through the animation part of the code to understand it better. In the CSS3 styles, we set the animation direction to alternate, as a result of which the animation runs in the opposite direction after the first iteration. We have also used the hover state: whenever we hover over the object, the animation is paused.
We have also defined the glow of the object in the keyframes, specifying how the colors change through a box-shadow attribute at each step. Within the <script> tag we have included the JavaScript and jQuery code. The handlers are bound to the element with the id trigger. The mouseover and mouseout events fire when the user moves the mouse pointer over an element and out of it, respectively; we use those events, together with the animation start, end, and iteration events, to report the pausing and resuming of the animation. As you can see, we can create complex animations using CSS3.

Coding is an art that gets better with practice. We need to implement these features ourselves in order to learn the subtle nuances of HTML5 and CSS3, and that only comes with a considerable amount of practice. Even then, we are just on the shore; the sea of knowledge lies far beyond.

This article has covered a number of HTML5 and CSS3 features. Instead of wading through loads of theory, the concepts are explained in a practical manner, using complete code samples (rather than snippets) that you can copy and execute for better understanding. Transition, transformation, and animation are explained in a lucid manner, with a gradual increase in difficulty throughout. By the end of the book, you will be thoroughly acquainted with HTML5 and CSS3, enabling you to design a web page with ease using the included code samples. Click on the following link to have a look at the book: http://www.packtpub.com/html5-and-css3-for-transition-transformation-animation/book

Summary

This article discussed how HTML5 and CSS3 features can be used in websites, with a detailed look at the animation capabilities offered by CSS3.

Resources for Article:

Further resources on this subject:

Mobiles First – How and Why [Article]
Creating an Animated Gauge with CSS3 [Article]
HTML5 Canvas [Article]


User Interface Design in ICEfaces 1.8: Part 1

Packt
30 Nov 2009
9 min read
Before we take a more detailed look at the ICEfaces components, we will discuss the desktop character of modern web applications in this article. Desktop technology is about 40 years old now, and there are best practices that can help us in our web application design. An important part of user interface design is the page layout; we will look at the corresponding design process using a mockup tool. The Facelets example will be extended to show how to implement such a mockup design using Facelets templating. Finally, we will have a look at the production-ready templating of ICEfusion.

Revival of the desktop

The number of desktop-like web applications is growing faster and faster, and the demand for them is no big surprise: users of full-featured desktops had to suffer under the limited user model of the first-generation Web. That usage gap is now being filled by web applications that mimic desktop behavior. However, there is a difference between the desktop and the Web. Although equipped with desktop-like presentations, web applications have to fulfill different user expectations. So we have a revival of the desktop metaphor in the Web context, but mixed with user habits formed during the first decade of the Web. Nevertheless, the demand for a purer desktop presentation is already foreseeable.

If you primarily followed the traditional web programming model in the past, namely the request-response pattern, you may first have to shift your thinking toward components and events. If you already have some desktop-programming experience, you will discover a lot of similarities. However, you will also recognize how limited the Web 2.0 programming world is in comparison to modern desktops. The difference is understandable, because desktop design has a long tradition: the first system was built at the end of the 1960s, and there is a lot of experience in this domain. Best of all, we have established rules we can follow. Web design is still a challenge compared to desktop design. Although this article cannot discuss all the important details of today's desktop design, we will take a quick look at the basics that apply to nearly all user interface designs. We can subsume all this under the following question: what makes a software system user-friendly?

Software ergonomics

Have you ever heard of the ISO standard 9241, Ergonomics of Human System Interaction (http://en.wikipedia.org/wiki/ISO_9241)? This standard describes how a system has to be designed to be human-engineered. It covers many aspects, from the hardware design of a machine operated by a human to the user interface design of software applications. Poor hardware or interface design can result not only in injury but also in mental distress that wastes working time; the primary goal is to protect humans from harm. The most important part of ISO 9241 for software developers is part 110, dialog principles. It considers the design of dialogs between humans and information systems, with a focus on:

- Suitability for the task
- Suitability for learning
- Suitability for individualization
- Conformity with user expectations
- Self-descriptiveness
- Controllability
- Error tolerance

We will take a deeper look at these later. ISO 9241-110 has its roots in a German industry standard based on research work from the early 1980s. I first had a look at all this during a study almost 20 years ago. Most interesting about part 110 is the stability of the theoretical model behind it.
Independent of the technical advances of the IT industry in the last two decades, we can still apply these standards to modern web application design.

Challenges

The principles of ISO 9241-110 can help you get better results, but they serve only as guidelines. Even if you follow such principles slavishly, the result will not automatically be valuable. Creating a useful interface is still a challenging business. You have to accept a process of trial and error, ask for customer feedback, and go through many development iterations before usability becomes your friend. The technical limitations imposed by your framework decisions can be additionally frustrating; the problems we have with today's AJAX technology are a good example, especially if you are already experienced with desktop development and its design rules.

Apply Occam's razor

"Everything should be made as simple as possible, but not simpler." Albert Einstein's quote names two important aspects of creative processes that also hold for user interface design:

- Reduction
- Oversimplification

Reduction

Have you ever realized how difficult it is to recognize what is important or necessary, and what is superfluous, when you enter a new domain? Things often seem clear and pretty simple at first sight. However, such a perception is based on experience gained outside the domain; in most cases, the essential experience is missing. You may have to invest several years to get the whole picture and develop an accurate understanding before you can come to an adequate decision. If your new domain is user interface design, these findings can help you understand your customers better. If you keep questioning the eye-catching solutions that come to mind and try to slip into the customer's role, you will get better results faster.

Oversimplification

Oversimplification makes user interfaces more complex to use. This phenomenon arises when the target group for a user interface is assumed to be less experienced than it is in reality. For advanced users, a beginner's design is more time-consuming to use. In many cases, it is assumed that beginners form the bigger part of the user base; reality shows that advanced users make up the bigger part, whereas beginners and super users may account for up to 10% each. Designing a user interface for beginners that can be used by all users may seem an intuitive idea at first sight, but it is not: you have to consider the advanced users if you want your design to succeed. This is indeed an essential experience for coming to an adequate decision.

User interface design principles

Besides the aforementioned recommendations, the following are the most influential principles for an adequate interface design:

- Suitability for the task
- Self-descriptiveness
- Controllability
- Conformity with user expectations
- Error tolerance
- Suitability for individualization
- Suitability for learning

Suitability for the task

Although it seems a trivial requirement, the functionality of a web application seldom delivers exactly what the user requires to fulfill his needs. Additionally, the presentation, navigation, or lingo often does not work for the user or is not well suited to the function it represents. A good user interface design is based on the customer's lingo; you can write a glossary that describes the meaning of the terms you use. Requirements management that results in a detailed use case model can help in implementing the adequate functionality.
The iterative development of interactive user interface prototypes, driven by customer feedback, allows finding a suitable presentation and navigation.

Self-descriptiveness

Ergonomic applications have an interface design that lets the user answer the following questions at any time: What is the context I am working in at the moment? What is the next possible step? The answers become immediately important when a user is, for example, interrupted by a telephone call and continues his work afterwards; the shorter the time needed to recognize the last working step, the better the design. A rule of thumb is to give every web page a caption that describes its context. Navigational elements, such as buttons, should show descriptive text that reveals the function behind them. If possible, separate a page into subsections that have their own captions, for better orientation.

Controllability

Applications have to offer their functionality in a way that lets the user decide when and how the application fulfills his requirements. For this, it is important that the application offers different ways to start a function: beginners may prefer using the mouse to select an entry in a pull-down menu, while advanced users normally work with the keyboard, because hotkeys let them use the application faster. It is also important that the user can stop his work at any time, for example for a lunch break or a telephone call, without any disadvantage; it is not acceptable that the user has to start the last function again. With web applications, this cannot be guaranteed in every case, for security reasons or because of limited server resources.

Conformity with user expectations

User expectations are perhaps the most important principle, but also the most sophisticated one. Expectations are closely connected to the cultural background of the target group, so the interface designer has to have a similar socialization. We need to look at the use of words in the target language: although target groups share the same language, certain terms can have different meanings. The correct use of colors or pictures in icon bars is likewise important, because we use these in contexts without extra explanation; there are cases where a color or an image can mean the opposite of what it was designed for. The behavior of an application can also be a problem when it differs from the standards of real-world processes. The advantage of standardization is the immediate understanding of processing steps, or the correct use of tools without training. If an application ignores this and varies from the standard, users have to rethink every step before they can fulfill their duties. This costs extra energy, is annoying, and is pretty bad for the acceptance of the application in the long run. Looking at the design itself, consistency in presentation, navigation, and form use is another important part: the user expects immutable behavior of the application in similar contexts. A context should have to be learned only once, and the learned context should be reusable in all other occurrences. Following this concept also helps to reuse visual components during development, so you have a single implementation for each context that is reused in different web pages, as the templating sketch below illustrates.
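Facelets templating is one way to realize this single-implementation-per-context idea. The following is a minimal sketch only: the file names and layout regions are illustrative assumptions, not the article's actual template, while ui:composition, ui:insert, and ui:define are standard Facelets tags.

```xml
<!-- template.xhtml: one reusable page layout -->
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets">
  <body>
    <div id="header"><ui:insert name="header">Default header</ui:insert></div>
    <div id="content"><ui:insert name="content">Default content</ui:insert></div>
  </body>
</html>

<!-- page.xhtml: a client page that reuses the layout -->
<ui:composition xmlns="http://www.w3.org/1999/xhtml"
                xmlns:ui="http://java.sun.com/jsf/facelets"
                template="template.xhtml">
  <ui:define name="header">Task Editor</ui:define>
  <ui:define name="content">...page-specific components...</ui:define>
</ui:composition>
```

Every page that reuses the template inherits the same header and content regions, so each layout context is implemented exactly once.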


Recording Your First Test

Packt
24 Apr 2015
17 min read
JMeter comes with a built-in test script recorder, also referred to as a proxy server (http://en.wikipedia.org/wiki/Proxy_server), to aid you in recording test plans. The test script recorder, once configured, watches your actions as you perform operations on a website, creates test sample objects for them, and eventually stores them in your test plan, which is a JMX file. JMeter also gives you the option to create test plans manually, but this is mostly impractical for nontrivial testing scenarios; you will save a whole lot of time using the proxy recorder, as you will see in a bit. So without further ado, in this article by Bayo Erinle, author of Performance Testing with JMeter - Second Edition, let's record our first test! We will record the browsing of JMeter's own official website, as a user would normally do it. For the proxy server to be able to watch your actions, it must be configured. This entails two steps:

1. Setting up the HTTP(S) Test Script Recorder within JMeter.
2. Setting the browser to use the proxy.

(For more resources related to this topic, see here.)

Configuring the JMeter HTTP(S) Test Script Recorder

The first step is to configure the proxy server in JMeter. To do this, we perform the following steps:

1. Start JMeter.
2. Add a thread group: right-click on Test Plan and navigate to Add | Threads (User) | Thread Group.
3. Add the HTTP(S) Test Script Recorder element: right-click on WorkBench and navigate to Add | Non-Test Elements | HTTP(S) Test Script Recorder.
4. Change the port to 7000 (under Global Settings). You can use a different port if you choose to; what matters is choosing a port that is not currently used by an existing process on the machine. The default is 8080.
5. Under the Test plan content section, choose Test Plan > Thread Group from the Target Controller drop-down. This targets the recorded actions at the thread group we created in step 2.
6. Under the Test plan content section, choose Put each group in a new transaction controller from the Grouping drop-down. This groups the series of requests that constitute a page load. We will see more on this topic later.
7. Click on Add suggested Excludes (under URL Patterns to Exclude). This instructs the proxy server to bypass recording requests for elements that are not relevant to test execution, such as JavaScript files, stylesheets, and images. Thankfully, JMeter provides a handy button that excludes the commonly excluded elements.
8. Click on the Start button at the bottom of the HTTP(S) Test Script Recorder component.
9. Accept the Root CA certificate by clicking on the OK button.

With these settings, the proxy server will start on port 7000, monitor all requests going through that port, and record them to the test plan using the default recording controller.

(Screenshot: Configuring the JMeter HTTP(S) Test Script Recorder)

In older versions of JMeter (before version 2.10), what is now the HTTP(S) Test Script Recorder was referred to as the HTTP Proxy Server.

While we have configured the HTTP(S) Test Script Recorder manually, newer versions of JMeter (version 2.10 and later) come with prebundled templates that make commonly performed tasks such as this a lot easier. Using the bundled recorder template, we can set up the script recorder with just a few button clicks. To do this, click on the Templates… button right next to the New file button on the toolbar.
Then choose Recording from the Select Template list, change the port to your desired port (for example, 7000), and click on the Create button.

(Screenshot: Configuring the JMeter HTTP(S) Test Script Recorder through the Recorder template)

Setting up your browser to use the proxy server

There are several ways to set up the browser of your choice to use the proxy server. We'll go over two of the most common ways, starting with my personal favorite: using a browser extension.

Using a browser extension

Google Chrome and Firefox have vibrant browser plugin ecosystems that let you extend the capabilities of your browser with each plugin you choose. For setting up a proxy, I really like FoxyProxy (http://getfoxyproxy.org/). It is a neat browser add-on that allows you to set up various proxy settings and toggle between them on the fly, without having to mess with system settings on the machine. It really makes the work hassle-free. Thankfully, FoxyProxy has a plugin for Internet Explorer, Chrome, and Firefox. If you are using any of these, you are in luck: go ahead and grab it!

Changing the machine system settings

For those who would rather configure the proxy natively on their operating system, the steps for Windows and Mac OS follow.

On Windows, perform the following steps to configure a proxy:

1. Click on Start, then click on Control Panel.
2. Click on Network and Internet.
3. Click on Internet Options.
4. In the Internet Options dialog box, click on the Connections tab.
5. Click on the Local Area Network (LAN) Settings button.
6. To enable the use of a proxy server, select the checkbox Use a proxy server for your LAN (These settings will not apply to dial-up or VPN connections).
7. In the proxy Address box, enter localhost.
8. In the Port number text box, enter 7000 (to match the port you set up for your JMeter proxy earlier).
9. If you want to bypass the proxy server for local IP addresses, select the Bypass proxy server for local addresses checkbox.
10. Click on OK to complete the proxy configuration process.

(Screenshot: Manually setting proxy on Windows 7)

On Mac OS, perform the following steps to configure a proxy:

1. Go to System Preferences.
2. Click on Network.
3. Click on the Advanced… button.
4. Go to the Proxies tab.
5. Select the Web Proxy (HTTP) checkbox.
6. Under Web Proxy Server, enter localhost; for the port, enter 7000 (to match the port you set up for your JMeter proxy earlier).
7. Do the same for Secure Web Proxy (HTTPS).
8. Click on OK.

(Screenshot: Manually setting proxy on Mac OS)

For all other systems, please consult the relevant operating system documentation. (A third, extension-free option, launching the browser with a proxy switch, is sketched at the end of this section.)

Now that all of that is out of the way and the connections have been made, let's get to recording, using the following steps:

1. Point your browser to http://jmeter.apache.org/.
2. Click on the Changes link under About.
3. Click on the User Manual link under Documentation.
4. Stop the HTTP(S) Test Script Recorder by clicking on the Stop button, so that it doesn't record any further activity.

If you have done everything correctly, your actions will be recorded under the test plan. Congratulations! You have just recorded your first test plan. Admittedly, we have only scratched the surface of recording test plans, but we are off to a good start.
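As an aside, here is the extension-free option mentioned above, which is not covered in the article itself: launching the browser with a proxy switch avoids both extensions and system settings. The example uses Chrome's standard --proxy-server switch; depending on your system, the executable may be chrome, google-chrome, or chrome.exe, and the port must match what you configured in JMeter:

```
chrome --proxy-server="localhost:7000"
```

A browser started this way routes its traffic through the JMeter recorder for that session only.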
(Screenshot: Recording your first scenario)

Running your first recorded scenario

We could go right ahead and replay or run our recorded scenario now, but before that let's add a listener or two to give us feedback on the results of the execution. There is no limit to the number of listeners we can attach to a test plan, but we will often use only one or two. For our test plan, let's add three listeners for illustrative purposes: a Graph Results listener, a View Results Tree listener, and an Aggregate Report listener. Each listener gathers a different kind of metric that can help analyze performance test results:

1. Right-click on Test Plan and navigate to Add | Listener | View Results Tree.
2. Right-click on Test Plan and navigate to Add | Listener | Aggregate Report.
3. Right-click on Test Plan and navigate to Add | Listener | Graph Results.

So that we can see more interesting data, let's change some settings at the thread group level:

1. Click on Thread Group.
2. Under Thread Properties, set the values as follows:
   - Number of Threads (users): 10
   - Ramp-Up Period (in seconds): 15
   - Loop Count: 30

This sets our test plan up to run for ten users, with all users starting their test within 15 seconds, and with each user performing the recorded scenario 30 times. Before we can proceed with test execution, save the test plan by clicking on the save icon. Once saved, click on the start icon (the green play icon on the menu) and watch the test run. As the test runs, you can click on the Graph Results listener (or either of the other two) and watch the results gather in real time; this is one of the many features of JMeter.

From the Aggregate Report listener, we can deduce that there were 600 requests each to the changes and user manual links. We can also see that most users (the 90% Line) got very good response times, below 200 milliseconds, for both. In addition, we see the throughput per second for the various links, and that there were no errors during our test run.

(Screenshot: Results as seen through the Aggregate Report listener)

Looking at the View Results Tree listener, we can see exactly which changes-link requests failed and the reasons for their failure. This can be valuable information for developers or system engineers in diagnosing the root cause of errors.

(Screenshot: Results as seen via the View Results Tree listener)

The Graph Results listener gives a pictorial representation of what is seen in the View Results Tree listener. If you click on it as the test goes on, you will see the graph drawn in real time as the requests come in. The graph is largely self-explanatory, with lines representing the average, median, deviation, and throughput. Average, Median, and Deviation show the average, median, and deviation of the number of samplers per minute, respectively, while Throughput shows the average rate of network packets delivered over the network for our test run, in bits per minute. Please consult a reference such as Wikipedia for a detailed explanation of the precise meanings of these terms. The graph is also interactive: you can check and uncheck any of the data series. For example, since we mostly care about the average and throughput, unchecking Data, Median, and Deviation leaves only the data plots for Average and Throughput.

(Screenshot: Results as seen through the Graph Results listener)

With our little recorded scenario, you have seen some major components that constitute a JMeter test plan; the thread group settings above map directly onto the saved JMX file, as the sketch below shows.
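For readers curious about what those settings actually produce, here is a rough sketch of the relevant fragment of a saved JMX file. Treat it as illustrative: the property names follow JMeter's usual ThreadGroup and LoopController conventions, but the exact attributes vary between JMeter versions.

```xml
<ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Thread Group">
  <stringProp name="ThreadGroup.num_threads">10</stringProp>   <!-- users -->
  <stringProp name="ThreadGroup.ramp_time">15</stringProp>     <!-- seconds -->
  <elementProp name="ThreadGroup.main_controller" elementType="LoopController">
    <stringProp name="LoopController.loops">30</stringProp>    <!-- iterations -->
  </elementProp>
</ThreadGroup>
```

Editing these values in the JMX file by hand has the same effect as changing them in the Thread Properties panel.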
Let's record another scenario, this time using an application that lets us enter form values.

Excilys Bank case study

We'll borrow a website created by the wonderful folks at Excilys, a company focused on delivering skills and services in IT (http://www.excilys.com/). It's a light banking web application created for illustrative purposes. Let's start a new test plan, set up the test script recorder as we did previously, and start recording:

1. Point your browser to http://excilysbank.aws.af.cm/public/login.html.
2. Enter the username and password in the login form (Username: user1, Password: password1).
3. Click on the PERSONNAL CHECKING link.
4. Click on the Transfers tab.
5. Click on My Accounts.
6. Click on the Joint Checking link.
7. Click on the Transfers tab.
8. Click on the Cards tab.
9. Click on the Operations tab.
10. Click on the Log out button.
11. Stop the proxy server by clicking on the Stop button.

This concludes our recorded scenario. At this point, we can add listeners for gathering the results of our execution and then replay the recorded scenario as we did earlier. If we do, we are in for a surprise (that is, if we don't use the bundled recorder template): several requests after login will fail, since we have not included the component that manages the sessions and cookies needed to replay this scenario successfully. Thankfully, JMeter has such a component, and it is called the HTTP Cookie Manager. This seemingly simple yet powerful component maintains an active session through HTTP cookies once our client has established a connection with the server after login. It ensures that a cookie is stored upon successful authentication and passed along with subsequent requests, allowing them to go through. Each JMeter thread (that is, user) has its own cookie storage area. That is vital, since you wouldn't want one user gaining access to the site under another user's identity; this becomes more apparent when testing websites that require authentication and authorization for multiple users, like the one we just recorded. Let's add it to our test plan by right-clicking on Test Plan and navigating to Add | Config Element | HTTP Cookie Manager.

Once added, we can successfully run our test plan. At this point, we can simulate more load by increasing the number of threads at the thread group level. Let's go ahead and do that. If executed, the test plan will now pass, but it is not realistic: we have essentially emulated one user repeating the scenario five times. All threads use the credentials of user1, meaning all threads log in to the system as user1. That is not what we want. To make the test realistic, each thread should authenticate as a different user of the application. In reality, your bank creates a unique user for you, and only you or your spouse are privileged to see your account details. Your neighbor down the street, if he used the same bank, won't get access to your account (at least we hope not!). So with that in mind, let's tweak the test to accommodate such a scenario.

Parameterizing the script

We begin by adding a CSV Data Set Config component (Test Plan | Add | Config Element | CSV Data Set Config) to our test plan. Since generating unique random values at runtime is expensive in CPU and memory, it is advisable to define the values upfront.
The CSV Data Set Config component is used to read lines from a file and split them into variables that can then feed input into the test plan. JMeter gives you a choice about the placement of this component within the test plan. You would normally add it at the level of the HTTP request that needs values fed from it; in our case, this is the login HTTP request, where the username and password are entered. Another option is to add it at the thread group level, that is, as a direct child of the thread group; if a particular dataset applies to only one thread group, that placement makes sense. The third place it can go is at the Test Plan root level: if a dataset applies to all running threads, the root level makes sense. In our opinion, this also makes your test plans more readable and maintainable, since the component can easily be seen at the root level rather than being deeply nested at other levels, which helps when inspecting or troubleshooting a test plan. So for our scenario, let's add it at the Test Plan root level. You can always move components around by drag and drop, even after adding them to the test plan.

(Screenshot: CSV Data Set Config)

Once added, the Filename entry is all that is needed if you have included headers in the input file. For example, the input file may be defined as follows:

user,password,account_id
user1,password1,1

If the Variable Names field is left blank, JMeter uses the first line of the input file as the variable names for the parameters. In cases where headers are not included, the variable names can be entered there instead. The other interesting setting is Sharing mode. By default, this is All threads, meaning all running threads use the same set of data: with two threads running, Thread1 uses the first line as input data, while Thread2 uses the second. If the number of running threads exceeds the input data, entries are reused from the top of the file, provided Recycle on EOF is set to True (the default). The other sharing modes are Current thread group and Current thread; use the former when the dataset is specific to a certain thread group, and the latter when it is specific to each thread. The remaining properties of the component are self-explanatory, and additional information can be found in JMeter's online user guide.

Now that the component is added, we need to parameterize the login HTTP request with the variable names defined in our file (or in the Variable Names entry) so that the values are dynamically bound during test execution. We do this by changing the value of the username to ${user} and the password to ${password} on the HTTP login request. The names between ${} match the headers defined in the input file or the values specified in the Variable Names entry of the CSV Data Set Config component.

(Screenshot: Binding parameter values for HTTP requests)

We can now run our test plan, and it should work as before, only this time the values are dynamically bound through the configuration we have set up. So far we have run it for a single user; let's increase the thread group properties and run it for ten users, with a ramp-up of 30 seconds, for one iteration. Now let's rerun our test.
Examining the test results, we notice that some requests failed with a status code of 403 (http://en.wikipedia.org/wiki/HTTP_403), an access-denied error. This is because we are trying to access an account that does not belong to the logged-in user: in our sample, all users made a request for the same account, which only one user (user1) is allowed to see. You can trace this by adding a View Results Tree listener to the test plan and rerunning the test. If you closely examine some of the HTTP requests in the Request tab of the View Results Tree listener, you'll notice requests such as:

/private/bank/account/ACC1/operations.html
/private/bank/account/ACC1/year/2013/month/1/page/0/operations.json

Observant readers will have noticed that our input data file also contains an account_id column. We can leverage this column to parameterize all requests containing account numbers, so that each logged-in user picks the right account. To do this, change:

/private/bank/account/ACC1/operations.html

to:

/private/bank/account/ACC${account_id}/operations.html

Likewise, change:

/private/bank/account/ACC1/year/2013/month/1/page/0/operations.json

to:

/private/bank/account/ACC${account_id}/year/2013/month/1/page/0/operations.json

Go ahead and make similar changes to all such requests. Once completed, we can rerun our test plan; this time, things are logically correct and will work fine. You can verify that all works as expected after the test execution by examining the View Results Tree listener, clicking on some account request URLs, and changing the response display from text to HTML: you should see accounts other than ACC1.

Summary

We have covered quite a lot in this article. You learned how to configure JMeter and your browser to record test plans. In addition, you learned about built-in components that can help feed data into a test plan and extract data from server responses.

Resources for Article:

Further resources on this subject:

Execution of Test Plans [article]
Performance Testing Fundamentals [article]
Data Acquisition and Mapping [article]


NetBeans Platform 6.9: Working with Window System

Packt
17 Aug 2010
12 min read
(For more resources on NetBeans, see here.)

Window System

Large desktop applications need to provide many different views for visualizing data. These views have to be managed and shown, and the NetBeans Platform handles these requirements for you out of the box via its docking framework. While it once might have been sufficient for a docking framework to provide static, fixed window layouts, today the user expects far more flexibility: windows should be able to be opened, moved, and generally customized at runtime. The user tends to assume that the positions of views are modifiable and that they persist across restarts of the application. Not only that, but applications are assumed to be so flexible that views can be detached from the application's main window, enabling display on multiple monitors at the same time. While the simple availability of menus and toolbars was once sufficient, a far more dynamic handling is needed today, so that window content can adapt dynamically. Connected to these expectations of flexibility, plugins are increasingly becoming a standard technology, with users assuming their windows to be pluggable too. In short, the requirements for window management have become quite complex and can only be met by means of an external docking framework; otherwise all of these concerns would need to be coded (and debugged, tested, and maintained) by hand.

The NetBeans Platform provides all of these features via its docking framework, known as the NetBeans Window System, along with an API for programmatic access to the window system. Together, the window system and its API fulfill all the requirements described above, letting you concentrate on your domain knowledge and business logic rather than on creating a custom window management facility for each of your applications.

This part of the article teaches you the following:

- How to define views
- How to position views in the main window

The rest is covered in the second part of this article series.

Creating a window

The NetBeans Window System simplifies window management by providing a default component for displaying windows. This default component, the superclass of all windows, is the TopComponent class, which is derived from the standard JComponent class. It defines many methods for controlling a window and handles notification of the main window system events. The WindowManager is the central class controlling all the windows in the application. Though you can implement this class yourself, that is seldom done, as the default WindowManager is normally sufficient. Similarly, you typically use the standard TopComponent class rather than creating your own top-level Swing components; in contrast to the TopComponent class, your own top-level Swing components cannot be managed by the default WindowManager and so cannot take advantage of the Window System API.

Now let's create a TopComponent and make it an editor for working with tasks. This is done easily with the New Window wizard:

1. In the Projects window, right-click the TaskEditor module project node and choose New | Window.
2. On the first page of the wizard, select Editor for Window Position and check Open on Application Start. Click Next.
3. On the next page of the wizard, type TaskEditor in Class Name Prefix. This prefix is used for all the generated files. It is possible to specify an icon to be displayed in the tab of the new window, but let's skip that for the moment.
4. Click Finish, and all the files are generated into your module source structure.

Next, open the newly created TaskEditorTopComponent and drag the TaskEditorPanel, which you placed in the Palette at the end of the last chapter, onto the form. The size of the component automatically adjusts to the required size of the panel. Position the panel with the preferred spacing to the left and top, and activate automatic resizing of the panel in the horizontal and vertical directions. Start the application: you now see a tab containing the new TaskEditor window, which holds your form.

Examining the generated files

You have used a wizard to create a new TopComponent. However, the wizard did more than that. Let's take a look at all the files that have been created or modified, and at how these files work together. The only Java class that was generated is the TopComponent that contains the TaskEditor:

```java
@ConvertAsProperties(dtd = "-//com.netbeansrcp.taskeditor//TaskEditor//EN", autostore = false)
public final class TaskEditorTopComponent extends TopComponent {

    private static TaskEditorTopComponent instance;

    /** path to the icon used by the component and its open action */
    // static final String ICON_PATH = "SET/PATH/TO/ICON/HERE";

    private static final String PREFERRED_ID = "TaskEditorTopComponent";

    public TaskEditorTopComponent() {
        initComponents();
        setName(NbBundle.getMessage(TaskEditorTopComponent.class, "CTL_TaskEditorTopComponent"));
        setToolTipText(NbBundle.getMessage(TaskEditorTopComponent.class, "HINT_TaskEditorTopComponent"));
        // setIcon(ImageUtilities.loadImage(ICON_PATH, true));
    }

    /** This method is called from within the constructor to initialize the form.
     *  WARNING: Do NOT modify this code. The content of this method is
     *  always regenerated by the Form Editor. */
    // <editor-fold defaultstate="collapsed" desc="Generated Code">
    private void initComponents() {
        javax.swing.GroupLayout layout = new javax.swing.GroupLayout(this);
        this.setLayout(layout);
        layout.setHorizontalGroup(
            layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
            .addGap(0, 555, Short.MAX_VALUE));
        layout.setVerticalGroup(
            layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
            .addGap(0, 442, Short.MAX_VALUE));
    }// </editor-fold>

    // Variables declaration - do not modify
    // End of variables declaration

    /**
     * Gets default instance. Do not use directly: reserved for *.settings files only,
     * i.e. deserialization routines; otherwise you could get a non-deserialized instance.
     * To obtain the singleton instance, use {@link #findInstance}.
     */
    public static synchronized TaskEditorTopComponent getDefault() {
        if (instance == null) {
            instance = new TaskEditorTopComponent();
        }
        return instance;
    }

    /**
     * Obtain the TaskEditorTopComponent instance. Never call {@link #getDefault} directly!
     */
    public static synchronized TaskEditorTopComponent findInstance() {
        TopComponent win = WindowManager.getDefault().findTopComponent(PREFERRED_ID);
        if (win == null) {
            Logger.getLogger(TaskEditorTopComponent.class.getName()).warning(
                "Cannot find " + PREFERRED_ID + " component. It will not be located properly in the window system.");
            return getDefault();
        }
        if (win instanceof TaskEditorTopComponent) {
            return (TaskEditorTopComponent) win;
        }
        Logger.getLogger(TaskEditorTopComponent.class.getName()).warning(
            "There seem to be multiple components with the '" + PREFERRED_ID
            + "' ID. That is a potential source of errors and unexpected behavior.");
        return getDefault();
    }

    @Override
    public int getPersistenceType() {
        return TopComponent.PERSISTENCE_ALWAYS;
    }

    @Override
    public void componentOpened() {
        // TODO add custom code on component opening
    }

    @Override
    public void componentClosed() {
        // TODO add custom code on component closing
    }

    void writeProperties(java.util.Properties p) {
        // better to version settings since initial version as advocated at
        // http://wiki.apidesign.org/wiki/PropertyFiles
        p.setProperty("version", "1.0");
        // TODO store your settings
    }

    Object readProperties(java.util.Properties p) {
        if (instance == null) {
            instance = this;
        }
        instance.readPropertiesImpl(p);
        return instance;
    }

    private void readPropertiesImpl(java.util.Properties p) {
        String version = p.getProperty("version");
        // TODO read your settings according to their version
    }

    @Override
    protected String preferredID() {
        return PREFERRED_ID;
    }
}
```

As expected, the class TaskEditorTopComponent extends the TopComponent class. Let's look at it more closely:

- For efficient resource usage, the generated TopComponent is implemented as a singleton: direct instantiation from outside is discouraged, the static attribute instance holds the only instance in existence, and the static method getDefault creates and returns this instance on demand.
- Typically, getDefault should never be called directly. Instead, you should use findInstance, which delegates to getDefault if necessary. findInstance tries to retrieve the instance using the WindowManager and the ID of the TopComponent before falling back to the singleton instance. This ensures the correct use of persisted information.
- The constructor creates the component tree for the TaskEditorTopComponent by calling the method initComponents(). This method contains only code generated via the NetBeans "Matisse" Form Builder and is read-only in the NetBeans Java editor. You can change the code in this method using the Form Builder's Property Sheet, as will be shown later.
- The static property PREFERRED_ID holds the TopComponent ID used for identifying the TopComponent. As its name indicates, the ID can be changed by the Window System if name clashes occur. The ID is used throughout all the configuration files.
- The methods componentOpened() and componentClosed() are part of the lifecycle of the TopComponent. You will learn about the method getPersistenceType() later, in the section about the persistence of TopComponents.

What does the Java code do, and what does it not do? The Java code only defines the visual aspects of the TaskEditorTopComponent and manages the singleton instance of this component. It in no way describes how and where the instance is shown; that is the task of the two XML files described below. Two small XML files were created by the wizard. The first is the TopComponent's settings file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE settings PUBLIC "-//NetBeans//DTD Session settings 1.0//EN"
          "http://www.netbeans.org/dtds/sessionsettings-1_0.dtd">
<settings version="1.0">
  <module name="com.netbeansrcp.taskeditor" spec="1.0"/>
  <instanceof class="org.openide.windows.TopComponent"/>
  <instanceof class="com.netbeansrcp.taskeditor.TaskEditorTopComponent"/>
  <instance class="com.netbeansrcp.taskeditor.TaskEditorTopComponent" method="getDefault"/>
</settings>
```

The settings file describes the persistent instance of the TopComponent.
As you can see, the preceding configuration states that the TopComponent belongs to the module TaskEditor in specification version 1.0, that it is an instance of the types TopComponent and TaskEditorTopComponent, and that the instance is created using the method call TaskEditorTopComponent.getDefault().

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE tc-ref PUBLIC "-//NetBeans//DTD Top Component in Mode Properties 2.0//EN"
          "http://www.netbeans.org/dtds/tc-ref2_0.dtd">
<tc-ref version="2.0">
  <module name="com.netbeansrcp.taskeditor" spec="1.0"/>
  <tc-id id="TaskEditorTopComponent"/>
  <state opened="true"/>
</tc-ref>
```

The WSTCREF (window system creation) file describes the position of the TopComponent within the main window; this becomes clearer with the following file. The other important information in the WSTCREF file is the opened state at application start.

Typically, you do not have to change these two configuration files by hand. That is not true of the following file, the layer.xml, which you often need to change manually to register new folders and files in the filesystem.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE filesystem PUBLIC "-//NetBeans//DTD Filesystem 1.2//EN"
          "http://www.netbeans.org/dtds/filesystem-1_2.dtd">
<filesystem>
  <folder name="Actions">
    <folder name="Window">
      <file name="com-netbeansrcp-taskeditor-TaskEditorAction.instance">
        <attr name="component" methodvalue="com.netbeansrcp.taskeditor.TaskEditorTopComponent.findInstance"/>
        <attr name="displayName" bundlevalue="com.netbeansrcp.taskeditor.Bundle#CTL_TaskEditorAction"/>
        <attr name="instanceCreate" methodvalue="org.openide.windows.TopComponent.openAction"/>
      </file>
    </folder>
  </folder>
  <folder name="Menu">
    <folder name="Window">
      <file name="TaskEditorAction.shadow">
        <attr name="originalFile" stringvalue="Actions/Window/com-netbeansrcp-taskeditor-TaskEditorAction.instance"/>
      </file>
    </folder>
  </folder>
  <folder name="Windows2">
    <folder name="Components">
      <file name="TaskEditorTopComponent.settings" url="TaskEditorTopComponentSettings.xml"/>
    </folder>
    <folder name="Modes">
      <folder name="editor">
        <file name="TaskEditorTopComponent.wstcref" url="TaskEditorTopComponentWstcref.xml"/>
      </folder>
    </folder>
  </folder>
</filesystem>
```

The layer.xml is integrated into the central registry (also known as the SystemFileSystem) via a registration entry in the module's manifest file. The SystemFileSystem is a virtual filesystem for user settings; each module can supply a layer file that merges the module's configuration data into the SystemFileSystem. The Window System API and the Actions API reserve a number of folders in the central registry for their configuration data. These folders enable specific subfolders and files relating to window system registration to be added to the filesystem.

Let's have a look at the folder Windows2. Windows2 contains a folder named Components, which contains a virtual file with the name of the TopComponent and the extension .settings. This .settings file redirects to the real settings file and is used to make the configuration known to the Window System. In addition, the Windows2 folder contains a folder named Modes, which holds a folder named editor. Modes represent the possible positions at which TopComponents can be shown in the application. The editor folder contains a .wstcref file for our TopComponent, which refers to the real WSTCREF file.
This registers the TopComponent in the editor mode, so it shows up where editor windows are typically opened: the central part of the main window.

Next, take a look at the folder Actions. It contains a folder named Window, which contains a file declaring the action that opens the TaskEditorTopComponent. The file name typically follows Java class naming conventions, with dots replaced by dashes and ending in .instance. The declaration of the virtual file itself consists of three critical parts. The attribute component describes how to create the component (methodvalue declares which method to call). The attribute displayName describes the default action name as shown, for example, in menu items; one possible declaration is a bundlevalue, which names the bundle and key used to retrieve the display name. The attribute instanceCreate uses a static method call to create the real action to use.

The folder Menu describes the application's main menu. Its Window folder contains a .shadow file whose originalFile attribute holds the full path, within the SystemFileSystem, of the original action declaration. As described above, .shadow files are used as symbolic links to virtual files defined elsewhere. This declaration adds the action to the real menu bar of the application.

As a result, important parts of the Window System API are not called programmatically but are simply used declaratively. The declarative aspects include the configuration and positioning of windows, as well as the construction of the menu. You have also seen that the wizard for creating TopComponents always creates singleton views. If you would like to change that, you need to adapt the code created by the wizard; a short sketch of that alternative follows. For the time being, the singleton approach is sufficient, particularly as it is more resource-friendly.
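To make the singleton point concrete, here is a hedged sketch of how additional windows could be opened programmatically once the singleton guard is adapted. The display name and the idea of a second instance are illustrative assumptions; open(), requestActive(), and setDisplayName() are standard TopComponent methods.

```java
// A minimal sketch: opening a second, independent TaskEditor window.
// Assumes the instance/getDefault() singleton logic in the
// wizard-generated class has been removed or bypassed.
TopComponent second = new TaskEditorTopComponent();
second.setDisplayName("Task Editor 2"); // illustrative title
second.open();           // docks the window into its registered mode
second.requestActive();  // transfers focus to the new window
```

Each call to new creates an independent editor tab, whereas the wizard's findInstance() always hands back the same window.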


haXe 2: The Dynamic Type and Properties

Packt
28 Jul 2011
7 min read
This article is taken from haXe 2 Beginner's Guide: develop exciting applications with this multi-platform programming language.

Freeing yourself from the typing system

The goal of the Dynamic type is to free yourself from the typing system. When you define a variable as being of type Dynamic, the compiler won't perform any kind of type checking on that variable.

Time for action – Assigning to Dynamic variables

When you declare a variable as Dynamic, you can assign any value to it at compile time. So you can actually compile this code:

```haxe
class DynamicTest {
    public static function main() {
        var dynamicVar : Dynamic;
        dynamicVar = "Hello";
        dynamicVar = 123;
        dynamicVar = {name: "John", lastName: "Doe"};
        dynamicVar = new Array<String>();
    }
}
```

The compiler won't mind, even though you are assigning values of different types to the same variable!

Time for action – Assigning from Dynamic variables

You can assign the content of any Dynamic variable to a variable of any type. Indeed, we generally say that the Dynamic type can be used in place of any type, and that a variable of type Dynamic is of any type. With that in mind, you can see that this code compiles:

```haxe
class DynamicTest {
    public static function main() {
        var dynamicVar : Dynamic;
        var year : Int;
        dynamicVar = "Hello";
        year = dynamicVar;
    }
}
```

Here, even though we assign a String to a variable typed as Int, the compiler won't complain. But keep in mind that this holds only at compile time: if you abuse this possibility, you may get some strange behavior at runtime!

Field access

A Dynamic variable has an infinite number of fields, all of Dynamic type. That means you can write the following:

```haxe
class DynamicTest {
    public static function main() {
        var dynamicVar : Dynamic;
        dynamicVar = {};
        dynamicVar.age = 12;          // age is Dynamic
        dynamicVar.name = "Benjamin"; // name is Dynamic
    }
}
```

Note that whether this code works at runtime depends highly on the runtime you're targeting.

Functions in Dynamic variables

It is also possible to store functions in Dynamic variables and to call them:

```haxe
class DynamicTest {
    public static function main() {
        var dynamicVar : Dynamic;
        dynamicVar = function (name : String) {
            trace("Hello " + name);
        };
        dynamicVar("World");
        var dynamicVar2 : Dynamic = {};
        dynamicVar2.sayBye = function (name : String) {
            trace("Bye " + name);
        };
        dynamicVar2.sayBye("World");
    }
}
```

As you can see, it is possible to assign functions to a Dynamic variable, or even to one of its fields, and then call them as you would call any function. Again, even though this code compiles, whether it runs successfully depends on your target.

Parameterized Dynamic class

You can parameterize the Dynamic class to modify its behavior slightly. When parameterized, every field of a Dynamic variable is of the given type. Let's see an example:

```haxe
class DynamicTest {
    public static function main() {
        var dynamicVar : Dynamic<String>;
        dynamicVar = {};
        dynamicVar.name = "Benjamin"; // name is a String
        dynamicVar.age = 12;          // won't compile, since age is a String
    }
}
```

In this example, dynamicVar.name and dynamicVar.age are of type String; therefore, the example fails to compile on the line that assigns an Int to a String.

Classes implementing Dynamic

A class can implement a Dynamic, parameterized or not.

Time for action – Implementing a non-parameterized Dynamic

When a class implements a non-parameterized Dynamic, you can access an infinite number of fields on an instance.
All fields that are not declared in the class will be of type Dynamic. So, for example:

class User implements Dynamic
{
   public var name : String;
   public var age : Int;
   //...
}

//...
var u = new User(); //u is of type User
u.name = "Benjamin"; //String
u.age = 22; //Int
u.functionrole = "Author"; //Dynamic

What just happened?

As you can see, the functionrole field is not declared in the User class, so it is of type Dynamic. In fact, when you try to access a field that's not declared in the class, a function named resolve is called and given the name of the property accessed. You can then return the value you want. This can be very useful to implement some magic things.

Time for action – Implementing a parameterized Dynamic

When implementing a parameterized Dynamic, you get the same behavior as with a non-parameterized Dynamic, except that the fields that are not declared in the class are of the type given as a parameter. Let's take almost the same example, but with a parameterized Dynamic:

class User implements Dynamic<String>
{
   public var name : String;
   public var age : Int;
   //...
}

//...
var u = new User(); //u is of type User
u.name = "Benjamin"; //String
u.age = 22; //Int
u.functionrole = "Author"; //String because of the type parameter

What just happened?

As you can see here, fields that are not declared in the class are of type String, because we gave String as the type parameter.

Using a resolve function when implementing Dynamic

Now we are going to use what we've just learned. We are going to implement a Component class that will be instantiated from a configuration file. A component will have properties and metadata. Such properties and metadata are not pre-determined, which means that the properties' names and values will be read from the configuration file. Each line of the configuration file will hold the name of the property or metadata, its value, and a 0 if it's a property (otherwise it will be a metadata). Each of these fields will be separated by a space. The last constraint is that we should be able to read the value of a property or metadata by using the dot-notation.

Time for action – Writing our Component class

As you may have guessed, we will begin with a very simple Component class. All it has to do at first is hold two Hashes: one for metadata, the other for properties.

class Component
{
   public var properties : Hash<String>;
   public var metadata : Hash<String>;

   public function new()
   {
      properties = new Hash<String>();
      metadata = new Hash<String>();
   }
}

It is that simple at the moment. As you can see, we do not implement access via the dot-notation yet. We will do it later, but the class won't be very complicated even with support for this notation.

Time for action – Parsing the configuration file

We are now going to parse our configuration file to create a new instance of the Component class. In order to do that, we are going to create a ComponentParser class. It will contain two functions:

- parseConfigurationFile, to parse a configuration file and return an instance of Component.
- writeConfigurationFile, to take an instance of Component and write its data to a file.
Let's see how our class should look at the moment (this example will only work on neko):

class ComponentParser
{
   /**
   * This function takes a path to a configuration file
   * and returns an instance of Component
   */
   public static function parseConfigurationFile(path : String)
   {
      var stream = neko.io.File.read(path, false); //Open our file for reading in character mode
      var comp = new Component(); //Create a new instance of Component
      while(!stream.eof()) //While we're not at the end of the file
      {
         var str = stream.readLine(); //Read one line from file
         var fields = str.split(" "); //Split the string using space as delimiter
         if(fields[2] == "0")
         {
            comp.properties.set(fields[0], fields[1]); //Set the key<->value in the properties Hash
         }
         else
         {
            comp.metadata.set(fields[0], fields[1]); //Set the key<->value in the metadata Hash
         }
      }
      stream.close();
      return comp;
   }
}

It's not that complicated, and you would use the same kind of method if you were going to use an XML file instead.

Time for action – Testing our parser

Before continuing any further, we should test our parser in order to be sure that it works as expected. To do this, we can use the following configuration file:

name MyComponent 1
text HelloWorld 0

If everything works as expected, we should have a name metadata with the value MyComponent and a property named text with the value HelloWorld. Let's write a simple test class:

class ComponentImpl
{
   public static function main(): Void
   {
      var comp = ComponentParser.parseConfigurationFile("conf.txt");
      trace(comp.properties.get("text"));
      trace(comp.metadata.get("name"));
   }
}

Now, if everything went well, running this program should produce the following output:

ComponentImpl.hx:6: HelloWorld
ComponentImpl.hx:7: MyComponent


Getting to Grips with the Facebook Platform

Packt
21 Oct 2009
6 min read
The Purpose of the Facebook Platform

As you develop your Facebook applications, you'll find that the Facebook Platform is essential; in fact, you won't really be able to do anything without it. So what does it do? Well, before answering that, let's look at a typical web-based application.

The Standard Web Application Model

If you've ever designed and built a web application before, then you'd have done it in a fairly standard way. Your application and any associated data would have been placed on a web server, and your application users would access it from their web browsers via the Internet. The Facebook model is slightly different.

The Facebook Web Application Model

As far as your application users are concerned, they just access Facebook.com and your application, using a web browser and the Internet. But that's not where the application lives; it's actually on your own server. Once you've looked at the Facebook web application model and realized that your application actually resides on your own server, the purpose of the Facebook Platform becomes obvious: to provide an interface between your application and Facebook itself.

There is an important matter to be considered here. If the application actually resides on your server, and your application becomes very successful (according to Facebook there are currently 25 million active users), will your server be able to cope with that number of hits? Don't be too alarmed. This doesn't mean that your server will be accessed every time someone looks at his or her profile: Facebook employs a cache to stop that happening. Of course, at this stage, you're probably more concerned with just getting the application working, so let's continue looking at the Platform, but keep that point in mind.

Different components of the Facebook platform

There are three elements to the Facebook Platform:

- The Facebook API (Application Programming Interface)
- FBML (Facebook Markup Language)
- FQL (Facebook Query Language)

We'll now spend some time with each of these elements, and you'll see how you can use them individually and in conjunction to make powerful yet simple applications. The great thing is that if you haven't got your web server set up yet, don't worry, because Facebook supplies you with all of the tools that you need in order to do a test run with each of the elements.

The Facebook API

If you've already done some programming, then you'll probably know what an API (or Application Programming Interface) is. It's a set of software libraries that enable you to work with an application (in this case, Facebook) without knowing anything about its internal workings. All you have to do is obtain the libraries and start making use of them in your own application. Now, before you start downloading files, you can learn more about their functionality by making use of the Facebook API Test Console.

The Facebook API Test Console

If you want to make use of the Facebook Test Console, you'll first need to access the Facebook developers' section; you'll find a link to this at the bottom of every Facebook page. Alternatively, you can use the URL http://developers.facebook.com to go there directly in your browser.
When you get to this page, you'll find a link to the Tools page. Or, again, you can go directly to http://developers.facebook.com/tools.php, where you'll find the API Test Console. The API Test Console has a number of fields:

- User ID: a read-only field which (when you're logged on to Facebook) unsurprisingly displays your user ID number.
- Response Format: with this, you can select the type of response that you want, which can be XML, JSON, or Facebook PHP Client.
- Callback: if you are using XML or JSON, you can encapsulate the response in a function.
- Method: the actual Facebook method that you want to test.

Once you've logged in, you'll see that your User ID is displayed and that all the drop-downs are enabled. You will also notice that a new link, documentation, appears on the screen, which is very useful. All you have to do is select a method from the drop-down list, and then click on documentation. Once you've done that, you'll see:

- A description of the method
- The parameters used by the method
- An example return XML
- A description of the expected response
- The FQL equivalent (we will discuss this later in the chapter)
- Error codes

For now, let's just change the Response Format to Facebook PHP Client, and then click on Call Method to see what happens. In this case, you can see that the method returns an array of user IDs; each one is the ID of one of the friends of the currently logged-in user (that is, your list of friends, because you're the person logged in). You could, of course, go on to use this array in PHP as part of your application, but don't worry about that at the moment. For the time being, we'll just concentrate on our prototyping in the test console.

However, before we move on, it's worth noting that you can obtain an array of friends only for the currently logged-in user. You can't obtain the list of friends for any other user. So, for example, you would not be able to use friends.get on ID 286601116 or 705175505. In fact, you wouldn't be able to use friends.get for 614902533 (as shown in the example), because that's my ID and not yours.

On the other hand, having obtained a list of valid IDs, we can now do something more interesting with them. For example, we can use the users.getInfo method to obtain the first name and birthday of particular users. As you can see, a multidimensional array is returned to your PHP code (if you were actually using this in an application). Therefore, for example, if you were to load the array into a variable $birthdays, then $birthdays[0]['birthday'] would contain January 27, 1960. Of course, in the above example, the most important piece of information is the first birthday in the array; record that in your diary for future reference. And, if you're thinking that I'm old enough to be your father, well, in some cases this is actually true.

Now that you've come to grips with the API Test Console, we can turn our attention to FBML and the FBML Test Console.

So, what is Node.js?

Packt
04 Sep 2013
2 min read
Node.js is an open source platform that allows you to build fast and scalable network applications using JavaScript. Node.js is built on top of V8, a modern JavaScript virtual machine that powers Google's Chrome web browser. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. Node.js can handle many concurrent network connections with little overhead, making it ideal for data-intensive, real-time applications.

With Node.js, you can build many kinds of networked applications. For instance, you can use it to build a web application service, an HTTP proxy, a DNS server, an SMTP server, an IRC server, and basically any kind of process that is network intensive.

You program Node.js using JavaScript, which is the language that powers the Web. JavaScript is a powerful language that, when mastered, makes writing networked, event-driven applications fun and easy.

Node.js also embraces streams, which help your services cope with precarious network conditions and misbehaving clients. For instance, mobile clients are notorious for having high-latency network connections, which can put a big burden on servers by keeping around lots of connections and outstanding requests. By using streams to handle data, you can control incoming and outgoing flows and enable your service to survive.

Also, Node.js makes it easy for you to use third-party open source modules. By using Node Package Manager (NPM), you can easily install, manage, and use any of the many modules contained in a big and growing repository. NPM also allows you to manage the modules your application depends on in an isolated way, allowing different applications installed on the same machine to depend on different versions of the same module without creating a conflict. Given the way it's designed, NPM even allows different versions of the same module to coexist in the same application.

Summary

In this article, we learned that Node.js uses an event-driven, non-blocking I/O model and can handle multiple concurrent network connections with little overhead.
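As a quick recap of the event-driven and streaming ideas above, here is a minimal sketch of a Node.js server that streams a file to each client instead of buffering it in memory. The file name data.txt and port 8080 are illustrative assumptions, not values from the article:

var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
   // Stream the file into the response instead of reading it all into memory;
   // pipe() also handles back-pressure from slow (for example, mobile) clients.
   var stream = fs.createReadStream('data.txt');
   stream.pipe(res);
   stream.on('error', function (err) {
      // If the file cannot be read, end the response with an error status.
      res.statusCode = 500;
      res.end('Unable to read file');
   });
}).listen(8080);

Because pipe() only passes data along as the client is able to receive it, a single Node.js process can serve many such connections concurrently without one slow client blocking the others.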


Ajax: Basic Utilities

Packt
20 Dec 2011
8 min read
Validating a form using Ajax

The main idea of Ajax is to get data from the server in real time without reloading the whole page. In this task we will build a simple form with validation using Ajax.

Getting ready

As a JavaScript library is used in this task, we will choose jQuery. We will download it (if we haven't done so already) and include it in our page. We also need to prepare some dummy PHP code to retrieve the validation results. In this example, let's name it inputValidation.php. We are just checking for the existence of a param variable. If this variable is present in the GET request, we confirm the validation and send an OK status back to the page:

<?php
$result = array();
if(isset($_GET["param"])){
   $result["status"] = "OK";
   $result["message"] = "Input is valid!";
} else {
   $result["status"] = "ERROR";
   $result["message"] = "Input IS NOT valid!";
}
echo json_encode($result);
?>

How to do it...

Let's start with the basic HTML structure. We will define a form with three input boxes and one text area, placed inside <body>:

<body>
   <h1>Validating form using Ajax</h1>
   <form class="simpleValidation">
      <div class="fieldRow">
         <label>Title *</label>
         <input type="text" id="title" name="title" class="required" />
      </div>
      <div class="fieldRow">
         <label>Url</label>
         <input type="text" id="url" name="url" value="http://" />
      </div>
      <div class="fieldRow">
         <label>Labels</label>
         <input type="text" id="labels" name="labels" />
      </div>
      <div class="fieldRow">
         <label>Text *</label>
         <textarea id="textarea" class="required"></textarea>
      </div>
      <div class="fieldRow">
         <input type="submit" id="formSubmitter" value="Submit" disabled="disabled" />
      </div>
   </form>
</body>

For visual confirmation of valid input, we will define CSS styles:

<style>
label{ width:70px; float:left; }
form{ width:320px; }
input, textarea{ width:200px; border:1px solid black; float:right; padding:5px; }
input[type=submit]{ cursor:pointer; background-color:green; color:#FFF; }
input[disabled=disabled], input[disabled]{ background-color:#d1d1d1; }
.fieldRow{ margin:10px 10px; overflow:hidden; }
.failed{ border:1px solid red; }
</style>

Now, it is time to include jQuery and its functionality:

<script src="js/jquery-1.4.4.js"></script>
<script>
var ajaxValidation = function(object){
   var $this = $(object);
   var param = $this.attr('name');
   var value = $this.val();
   $.get("ajax/inputValidation.php",
      {'param':param, 'value':value },
      function(data) {
         if(data.status=="OK") validateRequiredInputs();
         else $this.addClass('failed');
      },"json");
}
var validateRequiredInputs = function(){
   var numberOfMissingInputs = 0;
   $('.required').each(function(index){
      var $item = $(this);
      var itemValue = $item.val();
      if(itemValue.length) {
         $item.removeClass('failed');
      } else {
         $item.addClass('failed');
         numberOfMissingInputs++;
      }
   });
   var $submitButton = $('#formSubmitter');
   if(numberOfMissingInputs > 0){
      $submitButton.attr("disabled", true);
   } else {
      $submitButton.removeAttr('disabled');
   }
}
</script>

We will also initialize the document ready function:

<script>
$(document).ready(function(){
   var timerId = 0;
   $('.required').keyup(function() {
      var $input = $(this); //capture the input here; "this" would not point to it inside setTimeout
      clearTimeout(timerId);
      timerId = setTimeout(function(){
         ajaxValidation($input);
      }, 200);
   });
});
</script>

When everything is ready, we can try the form out.

How it works...

We created a simple form with three input boxes and one text area. Elements with the class required are automatically validated after each keyup event, which calls the ajaxValidation function.
Our keyup handler also includes a setTimeout call, to prevent unnecessary requests while the user is still typing. The validation is based on two steps:

1. Validation of the actual input box: we pass the inserted text to ajax/inputValidation.php via Ajax. If the response from the server is not OK, we mark this input box as failed. If the response is OK, we proceed to the second step.
2. Checking the other required fields in our form. When there is no failed input box left in the form, we enable the submit button.

There's more...

Validation in this example is really basic. We were just checking whether the response status from the server is OK. In practice, we will rarely validate a required field this way; in such a case, it's better to use the length property directly on the client side instead of bothering the server with a lot of requests simply to check whether the required field is empty or filled. This task was just a demonstration of the basic validation method. It would be nice to extend it with regular expressions on the server side to directly check whether the URL or the title already exists in our database, and let the user know what the problem is and how he/she can fix it.

Creating an autosuggest control

This recipe will show us how to create an autosuggest control. This functionality is very useful when we need to search within huge amounts of data. The basic functionality is to display a list of suggested data based on the text in the input box.

Getting ready

We can start with a dummy PHP page which will serve as a data source. When we call this script with the GET method and a string variable, it will return the list of records (names) which include the given string:

<?php
$string = $_GET["string"];
$arr = array(
   "Adam",
   "Eva",
   "Milan",
   "Rajesh",
   "Roshan",
   // ...
   "Michael",
   "Romeo"
);

function filter($var){
   global $string;
   if(!empty($string))
      return strstr($var, $string);
}

$filteredArray = array_filter($arr, "filter");

$result = "";
foreach ($filteredArray as $key => $value){
   $row = "<li>".str_replace($string, "<strong>".$string."</strong>", $value)."</li>";
   $result .= $row;
}
echo $result;
?>

How to do it...

As always, we will start with HTML. We will define a form with one input box and an unsorted list datalistPlaceHolder:

<h1>Dynamic Dropdown</h1>
<form class="simpleValidation">
   <div class="fieldRow">
      <label>Skype name:</label>
      <div class="ajaxDropdownPlaceHolder">
         <input type="text" id="name" name="name" class="ajaxDropdown" autocomplete="OFF" />
         <ul class="datalistPlaceHolder"></ul>
      </div>
   </div>
</form>

When the HTML is ready, we will play with CSS:

<style>
label { width:80px; float:left; padding:4px; }
form{ width:320px; }
input, textarea{ width:200px; border:1px solid black; border-radius:5px; float:right; padding:5px; }
input[type=submit] { cursor:pointer; background-color:green; color:#FFF; }
input[disabled=disabled] { background-color:#d1d1d1; }
.fieldRow { margin:10px 10px; overflow:hidden; }
.validationFailed { border:1px solid red; }
.validationPassed { border:1px solid green; }
.datalistPlaceHolder { width:200px; border:1px solid black; border-radius:5px; float:right; padding:5px; display:none; }
ul.datalistPlaceHolder li { list-style:none; cursor:pointer; padding:4px; }
ul.datalistPlaceHolder li:hover { color:#FFF; background-color:#000; }
</style>

Now the real fun begins.
We will include the jQuery library and define our keyup events:

<script src="js/jquery-1.4.4.js"></script>
<script>
var timerId;
var ajaxDropdownInit = function(){
   $('.ajaxDropdown').keyup(function() {
      var string = $(this).val();
      clearTimeout(timerId);
      timerId = setTimeout(function(){
         $.get("ajax/dropDownList.php", {'string':string}, function(data) {
            if(data)
               $('.datalistPlaceHolder').show().html(data);
            else
               $('.datalistPlaceHolder').hide();
         });
      }, 500);
   });
}
</script>

When everything is set, we will call the ajaxDropdownInit function within the document ready function:

<script>
$(document).ready(function(){
   ajaxDropdownInit();
});
</script>

Our autosuggest control is ready.

How it works...

The autosuggest control in this recipe is based on the input box and the list of items in datalistPlaceHolder. After each keyup event of the input box, datalistPlaceHolder loads the list of items from ajax/dropDownList.php via the Ajax call defined in ajaxDropdownInit.

A good feature of this recipe is the timerId variable that, when used with the setTimeout method, allows us to send the request to the server only when we stop typing (in our case, for 500 milliseconds). It may not look so important, but it will save a lot of resources. We do not want to wait for the response for "M" typed in the input box when we have already typed in "Milan". Instead of 5 requests (150 milliseconds each), we send just one. Multiply that by, for example, 10,000 users per day, and the effect is huge.

There's more...

We always need to remember the format of the response from the server. It could be plain HTML, as in this recipe, or a JSON structure such as:

[{
   'id':'1',
   'contactName':'Milan'
},...,{
   'id':'99',
   'contactName':'Milan (office)'
}]

Building HTML from JSON objects in JavaScript is not always a good idea from the performance point of view. Let's imagine we have 5000 contacts in one JSON file. It may take a while to build HTML from 5000 objects on the client but, if we let the server build the markup and wrap it in a JSON envelope, the response can look as follows:

[{
   "status": "100",
   "responseMessage": "Everything is ok! :)",
   "data": "<li><h2><a href="#1">Milan</h2></li>
      <li><h2><a href="#2">Milan2</h2></li>
      <li><h2><a href="#3">Milan3</h2></li>"
}]

In this case, we have the complete data in HTML, and there is no need to create any client-side logic to build a simple list of items.
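A minimal sketch of the client side for that envelope approach could look like the following. The status value "100" and the data field are taken from the example envelope above; treat them as illustrative conventions, not a fixed API:

$.get("ajax/dropDownList.php", {'string': string}, function(response) {
   var result = response[0]; //the envelope is an array with a single object, as above
   if (result && result.status == "100") {
      //the markup was already built on the server; we only inject it
      $('.datalistPlaceHolder').show().html(result.data);
   } else {
      $('.datalistPlaceHolder').hide();
   }
}, "json");

The client stays trivially simple, and all string processing for the 5000 contacts happens once, on the server.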


MODx 2.0: Web Development Basics

Packt
24 Feb 2011
7 min read
MODx Web Development - Second Edition

Site configuration

When you first log in to the MODx Manager interface, you will see the site configuration page in the rightmost panel. Here, you can customize some basic configurations of the site. You can reach this page from anywhere in the MODx Manager by clicking on the Configuration sub-menu in the Tools menu. All of the options that can be configured from this Configuration page are settings that are global to the entire site. After changing the configurations, you have to let MODx store them by clicking on the Save button.

The configurations are grouped into five categories:

- Site: mostly settings that are used to personalize the site
- Friendly URLs: settings to help make the site search-engine optimized
- User: settings related to user logins
- Interface & Features: mostly Manager interface customizations
- File Manager: settings defining what can be uploaded and where

Configuring the site

In this section, we are going to make a few changes to get you familiar with the various configurations available. Most configurations have tooltips that describe them in a little pop-up when you move the mouse over them. After making changes in the site configuration and saving it, you will be redirected to another page. This page is available by clicking on the Home link on the Site tab. It is also the default Manager interface page, which means that every time you log in using the Manager login screen, you will reach this page by default. This page has seven tabs, which are briefly explained below:

- My MODx Site: provides quick access to certain features in MODx.
- Configuration: displays information on the current status of the site.
- MODx News: shows updates on what is happening with MODx.
- Security Notices: shows updates on what is happening with MODx that is specific to security.
- Recent Resources: shows a list with hyperlinks of the recently created or edited resources.
- Info: shows information about your login status.
- Online: lists all of the active users.

Noticing and fixing errors and warnings

The Configuration tab of the default Manager interface page displays errors and warnings about issues in the installation, if any. Generally, it also has instructions on how to fix them. Most of the time, the warnings concern security issues or suggestions for improving performance. Hence, although the site will continue to work when there are warnings listed on this page, it is good practice to fix the issues that caused them. Here we discuss three warnings that occur commonly, and show how to fix them:

- config file still writable: shown when the config file is still writable. It can be fixed by changing the properties of the configuration file to read-only.
- register_globals is set to ON in your php.ini configuration file: this is a setting in the PHP configuration file. It should be set to OFF, as having it ON makes the site more vulnerable to what is known as cross-site scripting (XSS).
- Configuration warning: GD and/or Zip PHP extensions not found: shown when you do not have the specified packages installed with PHP. MAMP doesn't come with the Zip extension, and you can ignore this warning if you are not using it in production. Both XAMPP and MAMP come with the GD extension by default.

Changing the name of the site

In the previous section, we listed the groups of configuration options that are available.
Let us change one option, the name of the site, now:

1. Click on the Tools menu in the top navigational panel.
2. Click on the Configuration menu item.
3. Change the text field labeled Site Name to Learning MODx.
4. Click on the Save button.

The basic element of MODx: Resources

Resources are the basic building blocks in MODx. They are the elements that make up the content of the site. Every web page in MODx corresponds to a resource page. In early versions of MODx, resources were called documents, and thinking of them as documents may make them easier to understand. Every resource has a unique ID. This ID can be passed along in the URL, and MODx will display the page for the resource with the same ID. In the simplest case, a resource contains plain text. It is also possible to refer to a resource by an alias name instead of an ID. An alias is a friendly name that can be used instead of having to use numbers.

Containers

Resources can be contained within other resources called containers. Containers in MODx are like folders in filesystems, with the difference that a container is also a resource. This means that every container also has a resource ID, and a corresponding page is shown when such an ID is referenced in the URL.

MODx Manager interface

MODx is administered and customized by using the provided Manager interface. From the Manager interface, you can edit resources, place them within containers, and change their properties. You can log in to the Manager interface by using the Manager login screen at http://sitename/manager, with the username and password that you supplied when installing MODx. The Manager interface is divided into two panes. The leftmost pane always displays the resources in a resource tree, and the rightmost pane displays the content relevant to your last action. Above these panes sit the menu and the corresponding menu items, each of which leads to different functionality in MODx.

In the leftmost pane, you will see the site name followed by a hierarchically-grouped resource list. There is a + near every unexpanded container that has other resources. When you click on the + symbol, the container expands to show its children, and the + symbol changes to a – symbol. Clicking on the – symbol hides the children of the respective container. The resource's ID is displayed in parentheses after the resource's title in the resource tree.

The top of the leftmost pane consists of a few icons, referred to as the Resource Toolbar, which help to control the visibility of the resource tree:

- Expand Site Tree: expand all of the containers to show their children and siblings.
- Collapse Site Tree: collapse all of the containers to hide their children and siblings.
- New Resource: open a new resource page in the rightmost pane.
- New Weblink: open a new weblink page in the rightmost pane.
- Refresh Site Tree: refresh the tree of containers and resources to make available any changes that are not yet reflected in the tree.
- Sort the Site Tree: open a pop-up page where you can select from the various criteria available to sort the tree.
- Purge: when you delete a resource, it stays in the recycle bin and is struck out with a red line. Resources can be completely removed from the system by clicking on the Purge icon.
- Hide Site Tree: slide the leftmost pane out of view, giving more space for the rightmost pane.
Right-clicking on a resource brings up a context menu from where you can perform various actions on the resource. Clicking on Edit will open the page for editing in the rightmost pane. The context menu provides interesting shortcuts that are very handy.


Professional Plone 4 Development: Developing a Site Strategy

Packt
26 Aug 2011
9 min read
Professional Plone 4 Development: Build robust, content-centric web applications with Plone 4.

Creating a policy package

Our policy package is just a package that can be installed as a Plone add-on. We will use a GenericSetup extension profile in this package to turn a standard Plone installation into one that is configured to our client's needs. We could have used a full-site GenericSetup base profile instead, but by using a GenericSetup extension profile we can avoid replicating the majority of the configuration that is done by Plone.

We will use ZopeSkel to create an initial skeleton for the package, which we will call optilux.policy, adopting the optilux.* namespace for all Optilux-specific packages. In your own code, you should of course use a different namespace. It is usually a good idea to base this on the owning organization's name, as we have done here. Note that package names should be all lowercase, without spaces, underscores, or other special characters.

If you intend to release your code into the Plone Collective, you can use the collective.* namespace, although other namespaces are allowed too. The plone.* namespace is reserved for packages in the core Plone repository, where the copyright has been transferred to the Plone Foundation. You should normally not use this without first coordinating with the Plone Framework Team.

We go into the src/ directory of the buildout and run the following command:

$ ../bin/zopeskel plone optilux.policy

This uses the plone ZopeSkel template to create a new package called optilux.policy. This will ask us a few questions. We will stick with "easy" mode for now, and answer True when asked whether to register a GenericSetup profile.

Note that ZopeSkel will download some packages used by its local command support. This may mean the initial bin/zopeskel command takes a little while to complete, and assumes that we are currently connected to the internet. A local command is a feature of PasteScript, upon which ZopeSkel is built. ZopeSkel registers an addcontent command, which can be used to insert additional snippets of code, such as view registrations or new content types, into the initial skeleton generated by ZopeSkel. We will not use this feature, preferring instead to retain full control over the code we write and avoid the potential pitfalls of code generation. If you wish to use this feature, you will either need to install ZopeSkel and PasteScript into the global Python environment, or add PasteScript to the ${zopeskel:eggs} option in buildout.cfg, so that you get access to the bin/paster command. Run bin/zopeskel --help from the buildout root directory for more information about ZopeSkel and its options.

Distribution details

Let us now take a closer look at what ZopeSkel has generated for us. We will also consider which files should be added to version control, and which files should be ignored.

- setup.py (version control: yes): contains instructions for how Setuptools/Distribute (and thus Buildout) should manage the package's distribution. We will make a few modifications to this file later.
- optilux.policy.egg-info/ (version control: yes): contains additional distribution configuration. In this case, ZopeSkel keeps track of which template was used to generate the initial skeleton using this file.
- *.egg (version control: no): ZopeSkel downloads a few eggs that are used for its local command support (Paste, PasteScript, and PasteDeploy) into the distribution directory root.
If you do not intend to use the local command support, you can delete these; you should not add them to version control.

- README.txt (version control: yes): if you intend to release your package to the public, you should document it here. PyPI requires that this file be present in the root of a distribution. It is also read into the long_description variable in setup.py. PyPI will attempt to render this as reStructuredText markup (see http://docutils.sourceforge.net/rst.html).
- docs/ (version control: yes): contains additional documentation, including the software license (which should be the GNU General Public License, version 2, for any packages that import directly from any of Plone's GPL-licensed packages) and a change log.

Changes to setup.py

Before we can progress, we will make a few modifications to setup.py. Our revised file looks similar to the following code:

from setuptools import setup, find_packages
import os

version = '2.0'

setup(name='optilux.policy',
      version=version,
      description="Policy package for the Optilux Cinemas project",
      long_description=open("README.txt").read() + "\n" +
                       open(os.path.join("docs", "HISTORY.txt")).read(),
      # Get more strings from
      # http://pypi.python.org/pypi?%3Aaction=list_classifiers
      classifiers=[
          "Framework :: Plone",
          "Programming Language :: Python",
      ],
      keywords='',
      author='Martin Aspeli',
      author_email='optilude@gmail.com',
      url='http://optilux-cinemas.com',
      license='GPL',
      packages=find_packages(exclude=['ez_setup']),
      namespace_packages=['optilux'],
      include_package_data=True,
      zip_safe=False,
      install_requires=[
          'setuptools',
          'Plone',
      ],
      extras_require={
          'test': ['plone.app.testing',]
      },
      entry_points="""
      # -*- Entry points: -*-
      [z3c.autoinclude.plugin]
      target = plone
      """,
      # setup_requires=["PasteScript"],
      # paster_plugins=["ZopeSkel"],
      )

The changes are as follows:

- We have added an author name, e-mail address, and updated project URL. These are used as metadata if the distribution is ever uploaded to PyPI. For internal projects, they are less important.
- We have declared an explicit dependency on the Plone distribution, that is, on Plone itself. This ensures that when our package is installed, so is Plone. We will shortly update our main working set to contain only the optilux.policy distribution. This dependency ensures that Plone is installed as part of our application policy.
- We have then added a [test] extra, which adds a dependency on plone.app.testing. We will install this extra as part of the test working set below, making plone.app.testing available in the test runner (but not in the Zope runtime).
- Finally, we have commented out the setup_requires and paster_plugins options. These are used to support ZopeSkel local commands, which we have decided not to use. The main reason to comment them out is to avoid having Buildout download these additional dependencies into the distribution root directory, saving time and reducing the number of files in the build. Also note that, unlike distributions downloaded by Buildout in general, there is no "offline" support for these options.

Changes to configure.zcml

We will also make a minor change to the generated configure.zcml file, removing the line:

<five:registerPackage package="." initialize=".initialize" />

This directive is used to register the package as an old-style Zope 2 product. The main reason to do this is to ensure that the initialize() function is called on Zope startup. This may be a useful hook, but most of the time it is superfluous, and it requires additional test setup that can make tests more brittle.
We can also remove the (empty) initialize() function itself from the optilux/policy/__init__.py file, effectively leaving the file blank. Do not delete __init__.py, however, as it is needed to make this directory into a Python package.

Updating the buildout

Before we can use our new distribution, we need to add it to our development buildout. We will consider two scenarios:

- The distribution is under version control in a repository module separate from the development buildout itself. This is the recommended approach.
- The distribution is not under version control, or is kept inside the version control module of the buildout itself. The example source code that comes with this article is distributed as a simple archive, so it uses this approach.

Given the approach we have taken to separating out our buildout configuration into multiple files, we must first update packages.cfg to add the new package. Under the [sources] section, we could add:

[sources]
optilux.policy = svn https://some-svn-server/optilux.policy/trunk

Or, for distributions without a separate version control URL:

[sources]
optilux.policy = fs optilux.policy

We must also update the main and test working sets in the same file:

[eggs]
main =
    optilux.policy
test =
    optilux.policy [test]

Finally, we must tell Buildout to automatically add this distribution as a develop egg when running the development buildout. This is done near the top of buildout.cfg:

auto-checkout = optilux.policy

We must rerun buildout to let the changes take effect:

$ bin/buildout

We can test that the package is now available for import using the zopepy interpreter:

$ bin/zopepy
>>> from optilux import policy
>>>

The absence of an ImportError tells us that this package is now known to the Zope instance in the buildout. To be absolutely sure, you can also open the bin/instance script in a text editor (bin/instance-script.py on Windows) and look for a line in the sys.path mangling referencing the package.

Working sets and component configuration

It is worth deliberating a little more on how Plone and our new policy package are loaded and configured.

At build time:

1. Buildout installs the [instance] part, which will generate the bin/instance script.
2. The plone.recipe.zope2instance recipe calculates a working set from its eggs option, which in our buildout references ${eggs:main}. This contains exactly one distribution: optilux.policy. This in turn depends on the Plone distribution, which in turn causes Buildout to install all of Plone.

Here, we have made a policy decision to depend on a "big" Plone distribution that includes some optional add-ons. We could also have depended on the smaller Products.CMFPlone distribution (which works for Plone 4.0.2 onwards), which includes only the core of Plone, perhaps adding specific dependencies for the add-ons we are interested in.

When declaring actual dependencies used by distributions that contain reusable code instead of just policy, you should always depend on the packages you import from or otherwise depend on, and no more. That is, if you import from Products.CMFPlone, you should depend on this, and not on the Plone meta-egg (which itself contains no code, but only declares dependencies on other distributions, including Products.CMFPlone). To learn more about the rationale behind the Products.CMFPlone distribution, see http://dev.plone.org/plone/ticket/10877.

At runtime:

1. The bin/instance script starts Zope.
2. Zope loads the site.zcml file (parts/instance/etc/site.zcml) as part of its startup process.
3. This automatically includes the ZCML configuration for packages in the Products.* namespace, including Products.CMFPlone, Plone's main package.
4. Plone uses z3c.autoinclude to automatically load the ZCML configuration of packages that opt in using the z3c.autoinclude.plugin entry point with target = plone. The optilux.policy distribution contains such an entry point, so it will be configured, along with any packages or files it explicitly includes from its own configure.zcml file.

CherryPy : A Photoblog Application

Packt
22 Oct 2009
6 min read
A photoblog is like a regular blog except that the principal content is not text but photographs. The main reason for choosing a photoblog is that the range of features to be implemented is small enough that we can concentrate on their design and implementation. The goals behind going through this application are as follows:

- To see how to slice the development of a web application into meaningful layers, and therefore show that a web application is not very different from a rich application sitting on your desktop.
- To show that the separation of concerns can also be applied to the web interface itself, by using principles grouped under the name of Ajax.
- To introduce common Python packages for dealing with common aspects of web development such as database access, HTML templating, JavaScript handling, and so on.

Photoblog Entities

As mentioned earlier, the photoblog will stay as simple as possible in order to focus on the other aspects of developing a web application. In this section, we will briefly describe the entities our photoblog will manipulate, as well as their attributes and relations with each other. In a nutshell, one photoblog will contain several albums, each album will host as many films as required, and each film will carry the photographs. In other words, we will design our application with the following entity structure:

Entity: Photoblog
Role: This entity will be the root of the application.
Attributes:
- name: A unique identifier for the blog
- title: A public label for the blog
Relations: One photoblog will have zero to many albums.

Entity: Album
Role: An album is an envelope carrying a story told by its photographs.
Attributes:
- name: A unique identifier for the album
- title: A public label for the album
- author: The name of the album's author
- description: A simple description of the album used in feeds
- story: A story attached to the album
- created: A timestamp of when the album is created
- modified: A timestamp of when the album is modified
- blog_id: A reference to the blog handling the album
Relations: One album will reference zero to several films.

Entity: Film
Role: A film gathers a set of photographs.
Attributes:
- name: A unique identifier for the film
- title: A public label for the film
- created: A timestamp of when the film is created
- modified: A timestamp of when the film is modified
- album_id: A reference to the album
Relations: A film will reference zero to several photographs.

Entity: Photo
Role: The unit of our application is a photograph.
Attributes:
- name: A unique identifier for the photo
- legend: A legend associated with the photograph
- filename: The base name of the photograph on the hard disk
- filesize: The size in bytes of the photograph
- width: Width of the photograph in pixels
- height: Height of the photograph in pixels
- created: A timestamp of when the photograph is created
- modified: A timestamp of when the photograph is modified
- film_id: A reference to the film carrying the photograph
Relations: None.

Functionally, the photoblog application will provide APIs to manipulate these entities via the traditional CRUD interface: Create, Retrieve, Update, and Delete.
Vocabulary

Here is a list of the terms we will be using:

- Persistence: Persistence is the concept of data items outliving the execution of the programs that manipulate them. Simply put, it is the process of storing data in a long-lasting memory medium such as a disk.
- Database: A database is a collection of organized data. There are different organization models: hierarchical, network, relational, object-oriented, and so on. A database holds the logical representation of its data.
- Database Management System (DBMS): A DBMS is a group of related software applications used to manipulate data in a database. A DBMS platform should offer, among other features: persistence of the data, a query language to manipulate data, concurrency control, security control, integrity control, and transaction capabilities. We will use DBMSes as the plural of DBMS.

DBMSes Overview

In this section, we will quickly review the different kinds of existing DBMSes. The goal is to quickly introduce their main characteristics.

Relational Database Management System (RDBMS)

Of all DBMSes, the RDBMS is the most common, whether in small applications or multi-national infrastructures. An RDBMS comes with a database based on the concepts of the relational model, a mathematical model that permits the logical representation of a collection of data through relations. A relational database should be a concrete implementation of the relational model. However, modern relational databases follow the model only to a certain degree. The terms correlate as follows: a relation in the relational model corresponds to a table in a relational database, a tuple to a row, and an attribute to a column.

Relational databases support a set of types that define the domain of values a column can use. However, only a limited number of types are supported, which can be an issue with complex data types as allowed in object-oriented design.

Structured Query Language, more commonly known as SQL, is the language used to define, manipulate, or control data within a relational database. Its keywords fall into three broad contexts: data definition (CREATE, ALTER, DROP), data manipulation (SELECT, INSERT, UPDATE, DELETE), and data control (GRANT, REVOKE). A construction of these keywords is called an SQL statement. When executed, an SQL statement returns a collection of rows of the data matching the query, or nothing.

The relational model algebra uses relation composition to compose operations across different sets; this is translated in the relational database context by joins. Joining tables allows complex queries to be shaped to filter out data. SQL provides the following three kinds of joins:

- INNER JOIN: Intersection between two tables.
- LEFT OUTER JOIN: Limits the result set by the left table, so all results from the left table are returned with their matching results from the right table. If no matching result is found, a NULL value is returned.
- RIGHT OUTER JOIN: Same as the LEFT OUTER JOIN, except that the tables are reversed.


Building the next generation Web with Meteor

Packt
05 Feb 2015
9 min read
This article by Fabian Vogelsteller, the author of Building Single-page Web Apps with Meteor, explores the full-stack framework of Meteor. Meteor is not just a JavaScript library such as jQuery or AngularJS. It's a full-stack solution that contains frontend libraries, a Node.js-based server, and a command-line tool. All this together lets us write large-scale web applications in JavaScript, on both the server and the client, using a consistent API.

Even though Meteor is quite young, a few companies such as https://lookback.io, https://respond.ly, and https://madeye.io already use Meteor in their production environments. If you want to see for yourself what's made with Meteor, take a look at http://madewith.meteor.com.

Meteor makes it easy for us to build web applications quickly, and takes care of boring processes such as linking, minifying, and concatenating files. Here are a few highlights of what is possible with Meteor:

- We can build complex web applications amazingly fast, using templates that automatically update themselves when data changes.
- We can push new code to all clients on the fly, while they are using our app.
- Meteor's core packages come with a complete account solution, allowing seamless integration with Facebook, Twitter, and more.
- Data will automatically be synced across clients, keeping every client in the same state in almost real time.
- Latency compensation will make our interface appear super fast, while the server response happens in the background.

With Meteor, we never have to link files with <script> tags in HTML. Meteor's command-line tool automatically collects the JavaScript and CSS files in our application's folder and links them in the index.html file, which is served to clients on the initial page load. This makes structuring our code in separate files as easy as creating them. The command-line tool also watches all files inside our application's folder for changes and rebuilds them on the fly when they change.

Additionally, it starts a Meteor server that serves the app's files to the clients. When a file changes, Meteor reloads the site in every client while preserving its state; this is called a hot code reload. In production, the build process also concatenates and minifies our CSS and JavaScript files. By simply adding the less and coffee core packages, we can even write all styles in LESS and code in CoffeeScript with no extra effort. The command-line tool is also the tool for deploying and bundling our app so that we can run it on a remote server. Sounds awesome? Let's take a look at what's needed to use Meteor.

Adding basic packages

Packages in Meteor are libraries that can be added to our projects. The nice thing about Meteor packages is that they are self-contained units which run out of the box. They mostly either add templating functionality or provide extra objects in the global namespace of our project. Packages can also add features to Meteor's build process, like the stylus package, which lets us write our app's style files in the stylus pre-processor syntax.

Writing templates in Meteor

Normally, when we build websites, we build the complete HTML on the server side. This is quite straightforward: every page is built on the server, then sent to the client, where JavaScript adds some additional animation or dynamic behavior. This is not so in single-page apps, where each page needs to be already in the client's browser so that it can be shown at will.
Meteor solves that problem by providing templates that exist in JavaScript and can be placed in the DOM at some point. These templates can have nested templates, allowing for an easy way to reuse and structure an app's HTML layout. Since Meteor is so flexible in terms of folder and file structure, any *.html page can contain a template and will be parsed during Meteor's build process. This allows us to put all templates in the my-meteor-blog/client/templates folder. This folder structure is chosen because it helps us organize templates as our app grows. Meteor's template engine is called Spacebars, a derivative of the Handlebars template engine. Spacebars is built on top of Blaze, which is Meteor's reactive DOM update engine.

Meteor and databases

Meteor currently uses MongoDB by default to store data on the server, although there are drivers planned for relational databases, too. If you are adventurous, you can try one of the community-built SQL drivers, such as the numtel:mysql package from https://atmospherejs.com/numtel/mysql.

MongoDB is a NoSQL database. This means it is based on a flat document structure instead of a relational table structure. Its document approach makes it ideal for JavaScript, as documents are written in BSON, a format very similar to JSON. Meteor has a "database everywhere" approach, which means we have the same API to query the database on the client as on the server. Yet, when we query the database on the client, we are only able to access data that has been published to that client.

MongoDB uses a data structure called a collection, which is the equivalent of a table in an SQL database. Collections contain documents, where each document has its own unique ID. These documents are JSON-like structures and can contain properties with values, even with multiple dimensions:

{
  "_id": "W7sBzpBbov48rR7jW",
  "myName": "My Document Name",
  "someProperty": 123456,
  "aNestedProperty": {
    "anotherOne": "With another string"
  }
}

Collections are used to store data in the server's MongoDB as well as in the client-side minimongo collections, an in-memory database mimicking the behavior of the real MongoDB. The MongoDB API lets us use a simple JSON-based query language to get documents from a collection. We can pass additional options to ask only for specific fields or to sort the returned documents. These are very powerful features, especially on the client side, for displaying data in various ways.

Data everywhere

In Meteor, we can use the browser console to update data, which means we update the database from the client. This works because Meteor automatically syncs these changes to the server and updates the database accordingly. This happens because we have the autopublish and insecure core packages added to our project by default. The autopublish package automatically publishes all documents to every client, whereas the insecure package allows every client to update database records by their _id field. Obviously, this works well for prototyping but is infeasible for production, as every client could manipulate our database. If we remove the insecure package, we need to add "allow and deny" rules to determine what a client is allowed to update and what not; otherwise, all updates will be denied.

Differences between client and server collections

Meteor has a "database everywhere" approach. This means it provides the same API on the client as on the server. The data flow is controlled using a publication/subscription model.
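A minimal sketch of that wiring might look as follows. The collection name contacts, the publication name allContacts, and the createdAt field are assumptions for illustration, not names taken from the article:

// On both client and server: declare the collection.
Contacts = new Mongo.Collection('contacts');

if (Meteor.isServer) {
  // The server decides which documents (and in what order) a client may see.
  Meteor.publish('allContacts', function () {
    return Contacts.find({}, {sort: {createdAt: -1}, limit: 50});
  });
}

if (Meteor.isClient) {
  // Subscribing downloads the published documents into minimongo.
  Meteor.subscribe('allContacts');

  // The same query API now works locally, against the in-memory copy.
  var recent = Contacts.find({}, {sort: {createdAt: -1}}).fetch();
}

Limiting what a publication returns is also the production replacement for the autopublish package mentioned above.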
On the server sits the real MongoDB database, which stores data persistently. On the client, Meteor has a package called minimongo, a pure in-memory database mimicking most of MongoDB's query and update functions. Every time a client connects to its Meteor server, Meteor downloads the documents the client has subscribed to and stores them in the local minimongo database. From here, they can be displayed in a template or processed by functions. When the client updates a document, Meteor syncs it back to the server, where it is passed through any allow/deny functions before being persistently stored in the database. This also works the other way round: when a document in the server-side database changes, it is automatically synced to every client that has subscribed to it, keeping every connected client up to date.

Syncing data – the current Web versus the new Web

In the current Web, most pages are either static files hosted on a server or dynamically generated by the server on request. This is true for most server-side-rendered websites, for example, those written with PHP, Rails, or Django. Both of these techniques require no effort from the browser besides displaying the page; browsers used this way are therefore called thin clients.

In modern web applications, the idea of the browser has moved from thin clients to fat clients. This means most of the website's logic resides on the client, and the client asks for the data it needs. Currently, this is mostly done via calls to an API server. The API server returns data, commonly in JSON form, giving the client an easy way to handle it and use it appropriately. Most modern websites are a mixture of thin and fat clients: normal pages are server-side rendered, and only some functionality, such as a chat box or news feed, is updated using API calls.

Meteor, however, is built on the idea that it's better to use the computation power of all clients instead of a single server. A pure fat client, or single-page app, contains the entire logic of a website's frontend, which is sent down on the initial page load. The server then merely acts as a data source, sending only data to the clients. This can happen by connecting to an API and utilizing AJAX calls or, as with Meteor, by using a model called publication/subscription. In this model, the server offers a range of publications, and each client decides which dataset it wants to subscribe to. Compared with AJAX calls, the developer doesn't have to take care of any downloading or uploading logic: the Meteor client syncs all of the data automatically in the background as soon as it subscribes to a specific dataset. When data on the server changes, the server sends the updated documents to the clients, and vice versa.

Summary

Meteor comes with more great ways of building pure JavaScript applications, such as simple routing and simple ways to make components that can be packaged for others to use. Meteor's reactivity model, which allows you to rerun any function and template helper at will, allows for consistent interfaces and simple dependency tracking, which is key for large-scale JavaScript applications. If you want to dig deeper, buy the book and read how to build your own blog as a single-page web application in a simple step-by-step fashion by using Meteor, the next-generation web framework!
Routing

Packt
16 Oct 2014
17 min read
In this article by Mitchel Kelonye, author of Mastering Ember.js, we will learn URL-based state management in Ember.js, which constitutes routing. Routing enables us to translate the different states in our application into URLs and vice versa. It is a key concept in Ember.js that enables developers to easily separate application logic. It also enables users to link back to content in the application via the usual HTTP URLs.

(For more resources related to this topic, see here.)

We all know that in traditional web development, every request is linked to a URL that enables the server to make a decision about the incoming request. Typical actions include sending back a resource file or a JSON payload, redirecting the request to a different resource, or sending back an error response, such as in the case of unauthorized access. Ember.js strives to preserve these ideas in the browser environment by enabling association between these URLs and the state of the application. The main component that manages these states is the application router. It is responsible for restoring an application to a state matching the given URL. It also enables the user to navigate through the application's history as expected. The router is automatically created on application initialization and can be referenced as MyApplicationNamespace.Router.

Before we proceed, we will be using the bundled sample to better understand this extremely convenient component. The sample is a simple implementation of the Contacts OS X application, as shown in the following screenshot:

It enables users to add new contacts as well as edit and delete existing ones. For simplicity, we won't support avatars, but that could be an implementation exercise for the reader. We have already mentioned some of the states into which this application can transition. These states have to be registered in the same way server-side frameworks have URL dispatchers that backend programmers use to map URL patterns to views. The article sample already illustrates how these possible states are defined:

// app.js
var App = Ember.Application.create();

App.Router.map(function() {
  this.resource('contacts', function() {
    this.route('new');
    this.resource('contact', {path: '/:contact_id'}, function() {
      this.route('edit');
    });
  });
  this.route('about');
});

Notice that the already instantiated router was referenced as App.Router. Calling its map method gives the application an opportunity to register its possible states. In addition, two other methods are used to classify these states into routes and resources.

Mapping URLs to routes

When defining routes and resources, we are essentially mapping URLs to possible states in our application. As shown in the first code snippet, the router's map function takes a function as its only argument. Inside this function, we may define a resource using the corresponding method, which takes the following signature:

this.resource(resourceName, options, function);

The first argument specifies the name of the resource and, coincidentally, the path to match the request URL against. The next argument is optional and holds configurations that we may need to specify, as we shall see later. The last one is a function that is used to define the routes of that particular resource. For example, the first defined resource in the sample says: let the contacts resource handle any requests whose URL starts with /contacts. It also specifies one route, new, that is used to handle the creation of new contacts. Routes, on the other hand, accept the same arguments as resources, as the short sketch below shows.
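The following is a small hedged illustration of the shared signature; the '/about-us' path is a hypothetical example and is not part of the contacts app:

App.Router.map(function() {
  // A plain route with a custom path; '/about-us' now
  // maps to the about state instead of '/about'.
  this.route('about', {path: '/about-us'});
});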
You must be asking yourself, "So how are routes different from resources?" The two are essentially the same, other than the former offers a way to categorize states (routes) that perform actions on a specific entity. We can think of an Ember.js application as a tree, composed of a trunk (the router), branches (resources), and leaves (routes). For example, the contact state (a resource) caters for a specific contact. This resource can be displayed in two modes, read and write, hence the index and edit routes respectively, as shown:

this.resource('contact', {path: '/:contact_id'}, function() {
  this.route('index'); // autodefined
  this.route('edit');
});

Because Ember.js encourages convention, there are two components of routes and resources that are always autodefined:

• A default application resource: This is the master resource into which all other resources are defined. We therefore did not need to define it in the router. It's not mandatory to define resources on every state. For example, our about state is a route because it only needs to display static content to the user. It can, however, be thought of as a route of the already autodefined application resource.

• A default index route on every resource: Every resource has a default index route. It's autodefined because an application cannot settle on a resource state. The application therefore uses this route if no other route within this same resource was intended to be used.

Nesting resources

Resources can be nested depending on the architecture of the application. In our case, we need to load contacts in the sidebar before displaying any of them to the user. Therefore, we need to define the contact resource inside contacts. On the other hand, in an application such as Twitter, it won't make sense to define a tweet resource embedded inside a tweets resource, because an extra overhead would be incurred when a user just wants to view a single tweet linked from an external application.

Understanding the state transition cycle

A request is handled in the same way water travels from the roots (the application), up the trunk, and is eventually lost off the leaves. This request we are referring to is a change in the browser location that can be triggered in a number of ways. Before we proceed into finer details about routes, let's discuss what happened when the application was first loaded. On boot, a few things happened, as outlined here:

1. The application first transitioned into the application state, then the index state.
2. Next, the application index route redirected the request to the contacts resource.
3. Our application uses the browser's local storage to store the contacts, so for demoing purposes, the contacts resource populated this store with fixtures (located at fixtures.js).
4. The application then transitioned into the corresponding contacts resource index route, contacts.index.
5. Here we again made a few decisions based on whether our store contained any data. Since we indeed have data, we redirected the application into the contact resource, passing the ID of the first contact along.
6. Just as in the two preceding resources, the application transitioned from this last resource into the corresponding index route, contact.index.
The following figure gives a good view of the preceding state change:

Configuring the router

The router can be customized in the following ways:

• Logging state transitions
• Specifying the root app URL
• Changing the browser location lookup method

During development, it may be necessary to track the states into which the application transitions. Enabling these logs is as simple as:

var App = Ember.Application.create({
  LOG_TRANSITIONS: true
});

As illustrated, we enable the LOG_TRANSITIONS flag when creating the application. If an application is not served at the root of the website domain, then it may be necessary to specify the path name used, as in the following example:

App.Router.reopen({
  rootURL: '/contacts/'
});

One other modification we may need to make revolves around the technique Ember.js uses to subscribe to the browser's location changes. This makes it possible for the router to do its job of transitioning the app into the matched URL state. Two of these methods are as follows:

• Subscribing to the hashchange event
• Using the history.pushState API

The default technique is provided by the HashLocation class, documented at http://emberjs.com/api/classes/Ember.HashLocation.html. This means that URL paths are usually prefixed with the hash symbol, for example, /#/contacts/1/edit. The other one is provided by the HistoryLocation class, located at http://emberjs.com/api/classes/Ember.HistoryLocation.html. This does not distinguish URLs from traditional ones and can be enabled as:

App.Router.reopen({
  location: 'history'
});

We can also opt to let Ember.js pick the method best suited for our app with the following code:

App.Router.reopen({
  location: 'auto'
});

If we don't need any of these techniques, we can opt out entirely, which is especially useful when performing tests:

App.Router.reopen({
  location: 'none'
});

Specifying a route's path

We now know that when defining a route or resource, the resource name used also serves as the path the router uses to match request URLs. Sometimes, it may be necessary to specify a different path to use to match states. There are two common reasons for doing this, the first of which is delegating route handling to another route. Although we have not yet covered route handlers, we already mentioned that our application transitions from the application index route into the contacts.index state. We may, however, specify that the contacts route handler should manage this path as:

this.resource('contacts', {path: '/'}, function() {
});

Therefore, to specify an alternative path for a resource, simply pass the desired path in a hash as the second argument during resource definition. This also applies when defining routes. The second reason is when a resource contains dynamic segments. For example, our contact resource handles contacts, each of which should obviously have a different URL linking back to it. Ember.js uses the URL pattern matching techniques used by other open source projects such as Ruby on Rails, Sinatra, and Express.js. Therefore, our contact resource should be defined as:

this.resource('contact', {path: '/:contact_id'}, function() {
});

In the preceding snippet, /:contact_id is the dynamic segment that will be replaced by the actual contact's ID. One thing to note is that nested resources prefix their paths with those of their parent resources. Therefore, the contact resource's full path would be /contacts/:contact_id. It's also worth noting that the name of the dynamic segment is not mandated, so we could have named it /:id. A short hypothetical sketch of a resource with multiple dynamic segments follows.
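The archive resource below and its segment names are invented for this illustration and are not part of the contacts app; the sketch only shows how several dynamic segments compose into one path and reach the model hook:

App.Router.map(function() {
  // Matches URLs such as /archive/2014/10;
  // both segments are handed to the route's model hook.
  this.resource('archive', {path: '/archive/:year/:month'}, function() {
  });
});

App.ArchiveRoute = Ember.Route.extend({
  model: function(params) {
    // For /archive/2014/10: params.year === '2014', params.month === '10'
    return {year: params.year, month: params.month};
  }
});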
Defining route and resource handlers

Now that we have defined all of the possible states that our application can transition into, we need to define handlers for these states. From this point onwards, we will use the terms route handler and resource handler interchangeably. A route handler performs the following major functions:

• Providing the data (model) to be used by the current state
• Specifying the view and/or template to use to render the provided data to the user
• Redirecting the application away into another state

Before we move into discussing these roles, we need to know that a route handler is defined from the Ember.Route class as:

App.RouteHandlerNameRoute = Ember.Route.extend();

This class is used to define handlers for both resources and routes; therefore, the naming should not be a concern. Just as routes and resources are associated with paths and handlers, they are also associated with controllers, views, and templates, using the Ember.js naming conventions. For example, when the application initializes, it enters into the application state, and therefore the following objects are sought:

• The application route
• The application controller
• The application view
• The application template

In the spirit of doing more with reduced boilerplate code, Ember.js autogenerates these objects unless they are explicitly defined in order to override the default implementations. As another example, if we examine our application, we notice that the contact.edit route has a corresponding App.ContactEditController controller and contact/edit template. We did not need to define its route handler or view. Having seen this example, when referring to routes, we normally separate the resource name from the route name by a period, as in the following:

resourceName.routeName

In the case of templates, we may use a period or a forward slash:

resourceName/routeName

The other objects are usually camelized and suffixed by the class name:

ResourcenameRoutenameClassname

For example, the following table shows all of the objects used. As mentioned earlier, some are autogenerated.

| Route name     | Controller              | Route handler      | View               | Template       |
| -------------- | ----------------------- | ------------------ | ------------------ | -------------- |
| application    | ApplicationController   | ApplicationRoute   | ApplicationView    | application    |
| index          | IndexController         | IndexRoute         | IndexView          | index          |
| about          | AboutController         | AboutRoute         | AboutView          | about          |
| contacts       | ContactsController      | ContactsRoute      | ContactsView       | contacts       |
| contacts.index | ContactsIndexController | ContactsIndexRoute | ContactsIndexView  | contacts/index |
| contacts.new   | ContactsNewController   | ContactsNewRoute   | ContactsNewView    | contacts/new   |
| contact        | ContactController       | ContactRoute       | ContactView        | contact        |
| contact.index  | ContactIndexController  | ContactIndexRoute  | ContactIndexView   | contact/index  |
| contact.edit   | ContactEditController   | ContactEditRoute   | ContactEditView    | contact/edit   |

One thing to note is that objects associated with the intermediary application state do not need to carry the suffix; hence, just index or about.

Specifying a route's model

We mentioned that route handlers provide controllers with the data needed to be displayed by templates. These handlers have a model hook that can be used to provide this data, in the following format:

AppNamespace.RouteHandlerName = Ember.Route.extend({
  model: function() {
  }
});

For instance, the contacts route handler in the sample loads any saved contacts from local storage as:

model: function() {
  return App.Contact.find();
}

We have abstracted this logic into our App.Contact model. Notice how we reopen the class in order to define this static method.
A static method can only be called on the class itself and not on its instances:

App.Contact.reopenClass({
  find: function(id) {
    return (!!id) ? App.Contact.findOne(id) : App.Contact.findAll();
  },
  ...
});

If no argument is passed to the method, it goes ahead and calls the findAll method, which uses the local storage helper to retrieve the contacts:

findAll: function() {
  var contacts = store('contacts') || [];
  return contacts.map(function(contact) {
    return App.Contact.create(contact);
  });
}

Because we want to deal with contact objects, we iteratively convert the contents of the loaded contact list. If we examine the corresponding template, contacts, we notice that we were able to populate the sidebar as shown in the following code:

<ul class="nav nav-pills nav-stacked">
  {{#each model}}
    <li>{{#link-to "contact.index" this}}{{name}}{{/link-to}}</li>
  {{/each}}
</ul>

Do not worry about the template syntax at this point if you're new to Ember.js. The important thing to note is that the model was accessed via the model variable. Of course, before that, we check whether the model has any content:

{{#if model.length}}
  ...
{{else}}
  <h1>Create contact</h1>
{{/if}}

As we shall see later, if the list is empty, the application is forced to transition into the contacts.new state, in order for the user to add the first contact, as shown in the following screenshot:

The contact handler is a different case. Remember, we mentioned that its path has a dynamic segment that would be passed to the handler. This information is passed to the model hook in an options hash, as in:

App.ContactRoute = Ember.Route.extend({
  model: function(params) {
    return App.Contact.find(params.contact_id);
  },
  ...
});

Notice that we are able to access the contact's ID via the contact_id attribute of the hash. This time, the find method calls the findOne static method of the contact class, which performs a search for the contact matching the provided ID, as shown in the following code:

findOne: function(id) {
  var contacts = store('contacts') || [];
  var contact = contacts.find(function(contact) {
    return contact.id == id;
  });
  if (!contact) return;
  return App.Contact.create(contact);
}

Serializing resources

We've mentioned that Ember.js supports linking back to content externally. Internally, Ember.js simplifies creating these links in templates. In our sample application, when the user selects a contact, the application transitions into the contact.index state, passing the contact's ID along. This is possible through the use of the link-to Handlebars expression:

{{#link-to "contact.index" this}}{{name}}{{/link-to}}

The important thing to note is that this expression enables us to construct a link that points to the said resource by passing the resource name and the affected model. The destination resource or route handler is responsible for yielding this path, a process that constitutes serialization. To serialize a resource, we need to override the matching serialize hook, as in the contact handler case shown in the following code:

App.ContactRoute = Ember.Route.extend({
  ...
  serialize: function(model, params) {
    var data = {};
    data[params[0]] = Ember.get(model, 'id');
    return data;
  }
});

Serialization means that the hook is supposed to return the values of all the specified segments. It receives two arguments, the first of which is the affected resource and the second is an array of all the segments specified during the resource definition.
In our case, we only had one segment, so we returned the required hash, which resembled the following code:

{contact_id: 1}

If, for example, we defined a resource with multiple segments, like the following code:

this.resource('book', {path: '/name/:name/:publish_year'}, function() {
});

the serialization hook would need to return something close to:

{
  name: 'jon+doe',
  publish_year: '1990'
}

Asynchronous routing

In real apps, we often need to load the model data in an asynchronous fashion. There are various approaches that can be used to deliver this kind of data. The most robust way to load asynchronous data is through the use of promises. Promises are objects whose unknown value can be set at a later point in time. It is very easy to create promises in Ember.js. For example, if our contacts were located in a remote resource, we could use jQuery to load them as:

App.ContactsRoute = Ember.Route.extend({
  model: function(params) {
    return Ember.$.getJSON('/contacts');
  }
});

jQuery's HTTP utilities also return promises that Ember.js can consume. Incidentally, jQuery can also be referenced as Ember.$ in an Ember.js application. In the preceding snippet, once the data is loaded, Ember.js sets it as the model of the resource. However, one thing is missing: we require that the loaded data be converted to instances of the defined contact model, as shown in the following small modification:

App.ContactsRoute = Ember.Route.extend({
  model: function(params) {
    var promise = Ember.Object.createWithMixins(Ember.DeferredMixin);
    Ember.$.getJSON('/contacts').then(resolve, reject);
    function resolve(contacts) {
      contacts = contacts.map(function(contact) {
        return App.Contact.create(contact);
      });
      promise.resolve(contacts);
    }
    function reject(res) {
      var err = new Error(res.responseText);
      promise.reject(err);
    }
    return promise;
  }
});

We first create the promise, kick off the XHR request, and then return the promise while the request is still being processed. Ember.js will resume routing once this promise is rejected or resolved. The XHR call also creates a promise, so we attach to it the then method, which essentially says: invoke the passed resolve or reject function on successful or failed load, respectively. The resolve function converts the loaded data and resolves the promise, passing the data along, thereby resuming routing. If the promise is rejected, the transition fails with an error. We will see how to handle this error in a moment. Note that there are two other flavors we can use to create promises in Ember.js, as shown in the following examples:

var promise = Ember.Deferred.create();
Ember.$.getJSON('/contacts').then(success, fail);
function success(contacts) {
  contacts = contacts.map(function(contact) {
    return App.Contact.create(contact);
  });
  promise.resolve(contacts);
}
function fail(res) {
  var err = new Error(res.responseText);
  promise.reject(err);
}
return promise;

The second example is as follows:

return new Ember.RSVP.Promise(function(resolve, reject) {
  Ember.$.getJSON('/contacts').then(success, fail);
  function success(contacts) {
    contacts = contacts.map(function(contact) {
      return App.Contact.create(contact);
    });
    resolve(contacts);
  }
  function fail(res) {
    var err = new Error(res.responseText);
    reject(err);
  }
});

Summary

This article detailed how browser location-based state management is accomplished in Ember.js apps. We also covered how to create a router, define resources and routes, define a route's model, and perform a redirect.
Resources for Article:

Further resources on this subject:
  • AngularJS Project [Article]
  • Automating performance analysis with YSlow and PhantomJS [Article]
  • AngularJS [Article]
Forms in Grok 1.0

Packt
12 Feb 2010
13 min read
A quick demonstration of automatic forms

Let's start by showing how this works, before getting into the details. To do that, we'll add a project model to our application. A project can have any number of lists associated with it, so that related to-do lists can be grouped together. For now, let's consider the project model by itself. Add the following lines to the app.py file, just after the Todo application class definition. We'll worry later about how this fits into the application as a whole.

class IProject(interface.Interface):
    name = schema.TextLine(title=u'Name', required=True)
    kind = schema.Choice(title=u'Kind of project',
                         values=['personal', 'business'])
    description = schema.Text(title=u'Description')

class AddProject(grok.Form):
    grok.context(Todo)
    form_fields = grok.AutoFields(IProject)

We'll also need to add a couple of imports at the top of the file:

from zope import interface
from zope import schema

Save the file, restart the server, and go to the URL http://localhost:8080/todo/addproject. The result should be similar to the following screenshot:

OK, where did the HTML for the form come from? We know that AddProject is some sort of view, because we used the grok.context class annotation to set its context. Also, the name of the class, in lowercase, was used in the URL, as in previous view examples. The important new thing is how the form fields were created and used. First, a class named IProject was defined. The interface defines the fields on the form, and the grok.AutoFields method assigns them to the Form view class. That's how the view knows which HTML form controls to generate when the form is rendered. We have three fields: name, description, and kind. Later in the code, the grok.AutoFields line takes this IProject class and turns these fields into form fields. That's it. There's no need for a template or a render method. The grok.Form view takes care of generating the HTML required to present the form, taking the information from the value of the form_fields attribute that the grok.AutoFields call generated.

Interfaces

The I in the class name stands for interface. We imported the zope.interface package at the top of the file, and the Interface class that we have used as a base class for IProject comes from this package.

Example of an interface

An interface is an object that is used to specify and describe the external behavior of objects. In a sense, an interface is like a contract. A class is said to implement an interface when it includes all of the methods and attributes defined in the interface class. Let's see a simple example:

from zope import interface

class ICaveman(interface.Interface):
    weapon = interface.Attribute('weapon')

    def hunt(animal):
        """Hunt an animal to get food"""

    def eat(animal):
        """Eat hunted animal"""

    def sleep():
        """Rest before getting up to hunt again"""

Here, we are describing how cavemen behave. A caveman will have a weapon, and he can hunt, eat, and sleep. Notice that the weapon is an attribute (something that belongs to the object), whereas hunt, eat, and sleep are methods. Once the interface is defined, we can create classes that implement it. These classes are committed to including all of the attributes and methods of their interface class.
Thus, if we say:

class Caveman(object):
    interface.implements(ICaveman)

then we are promising that the Caveman class will implement the methods and attributes described in the ICaveman interface:

    weapon = 'ax'

    def hunt(self, animal):
        # find, hit, cut, bite, snore, and rest are illustrative
        # helpers; they are not defined in this example
        find(animal)
        hit(animal, self.weapon)

    def eat(self, animal):
        cut(animal)
        bite()

    def sleep(self):
        snore()
        rest()

Note that although our example class implements all of the interface methods, there is no enforcement of any kind made by the Python interpreter. We could define a class that does not include any of the defined methods or attributes, and it would still work.

Interfaces in Grok

In Grok, a model can implement an interface by using the grok.implements declaration. For example, if we decided to add a project model, it could implement the IProject interface as follows:

class Project(grok.Container):
    grok.implements(IProject)

Due to their descriptive nature, interfaces can be used for documentation. They can also be used for enabling component architectures, as we shall see later on. What is of more interest to us right now is that they can be used for generating forms automatically.

Schemas

The way to define the form fields is to use the zope.schema package. This package includes many kinds of field definitions that can be used to populate a form. Basically, a schema permits detailed descriptions of class attributes using fields. In terms of a form, which is what interests us here, a schema represents the data that will be passed to the server when the user submits the form. Each field in the form corresponds to a field in the schema. Let's take a closer look at the schema we defined in the last section:

class IProject(interface.Interface):
    name = schema.TextLine(title=u'Name', required=True)
    kind = schema.Choice(title=u'Kind of project',
                         required=False,
                         values=['personal', 'business'])
    description = schema.Text(title=u'Description',
                              required=False)

The schema that we are defining for IProject has three fields. There are several kinds of fields, which are listed in the following table. In our example, we have defined a name field, which will be a required field and will have the label Name beside it. We also have a kind field, which is a list of options from which the user must pick one. Note that the default value for required is True, but it's usually best to specify it explicitly, to avoid confusion. You can see how the list of possible values is passed statically by using the values parameter. Finally, description is a text field, which means it will have multiple lines of text.

Available schema attributes and field types

In addition to title, values, and required, each schema field can have a number of properties, as detailed in the following table:

| Attribute     | Description                                                                 |
| ------------- | --------------------------------------------------------------------------- |
| title         | A short summary or label.                                                    |
| description   | A description of the field.                                                  |
| required      | Indicates whether a field requires a value to exist.                         |
| readonly      | If True, the field's value cannot be changed.                                |
| default       | The field's default value; may be None or a valid field value.              |
| missing_value | If input for this field is missing, and that's OK, then this is the value to use. |
| order         | Can be used to determine the order in which fields in a schema are defined. If one field is created after another (in the same thread), its order will be greater. |

In addition to the field attributes described in the preceding table, some field types provide additional attributes. In the previous example, we saw that there are various field types, such as Text, TextLine, and Choice.
There are several other field types available, as shown in the following table. We can create very sophisticated forms just by defining a schema in this way and letting Grok generate them.

| Field type | Description | Parameters |
| ---------- | ----------- | ---------- |
| Bool | Boolean field. | |
| Bytes | Field containing a byte string (such as the Python str). The value might be constrained to be within length limits. | |
| ASCII | Field containing a 7-bit ASCII string. No characters > DEL (chr(127)) are allowed. The value might be constrained to be within length limits. | |
| BytesLine | Field containing a byte string without newlines. | |
| ASCIILine | Field containing a 7-bit ASCII string without newlines. | |
| Text | Field containing a Unicode string. | |
| SourceText | Field for the source text of an object. | |
| TextLine | Field containing a Unicode string without newlines. | |
| Password | Field containing a Unicode string without newlines, which is set as the password. | |
| Int | Field containing an integer value. | |
| Float | Field containing a float. | |
| Decimal | Field containing a decimal. | |
| DateTime | Field containing a datetime. | |
| Date | Field containing a date. | |
| Timedelta | Field containing a timedelta. | |
| Time | Field containing a time. | |
| URI | A field containing an absolute URI. | |
| Id | A field containing a unique identifier: either an absolute URI or a dotted name. If it's a dotted name, it should have a module or package name as a prefix. | |
| Choice | Field whose value is contained in a predefined set. | values: a list of text choices for the field. vocabulary: a Vocabulary object that will dynamically produce the choices. source: a different, newer way to produce dynamic choices. Only one of the three should be provided. More information about sources and vocabularies is provided later in this book. |
| Tuple | Field containing a value that implements the API of a conventional Python tuple. | value_type: field value items must conform to the given type, expressed via a field. unique: specifies whether the members of the collection must be unique. |
| List | Field containing a value that implements the API of a conventional Python list. | value_type: field value items must conform to the given type, expressed via a field. unique: specifies whether the members of the collection must be unique. |
| Set | Field containing a value that implements the API of a conventional Python standard library sets.Set or a Python 2.4+ set. | value_type: field value items must conform to the given type, expressed via a field. |
| FrozenSet | Field containing a value that implements the API of a conventional Python 2.4+ frozenset. | value_type: field value items must conform to the given type, expressed via a field. |
| Object | Field containing an object value. | schema: the interface that defines the fields comprising the object. |
| Dict | Field containing a conventional dictionary. | key_type: field keys must conform to the given type, expressed via a field. value_type: field value items must conform to the given type, expressed via a field. |

Form fields and widgets

Schema fields are perfect for defining data structures, but when dealing with forms, sometimes they are not enough. In fact, once you generate a form using a schema as a base, Grok turns the schema fields into form fields. A form field is like a schema field, but it has an extended set of methods and attributes. It also has a default associated widget that is responsible for the appearance of the field inside the form.
Rendering forms requires more than the fields and their types. A form field needs to have a user interface, and that is what a widget provides. A Choice field, for example, could be rendered as a <select> box on the form, but it could also use a collection of checkboxes, or perhaps radio buttons. Sometimes, a field may not need to be displayed on a form, or a writable field may need to be displayed as text instead of allowing users to set the field's value.

Form components

Grok offers four different components that automatically generate forms. We have already worked with the first one of these, grok.Form. The other three are specializations of it:

• grok.AddForm is used to add new model instances.
• grok.EditForm is used for editing an already existing instance.
• grok.DisplayForm simply displays the values of the fields.

A Grok form is itself a specialization of grok.View, which means that it gets the same methods as those available to a view. It also means that a model does not actually need a view assignment if it already has a form. In fact, simple applications can get away with using a form as the view for their objects. Of course, there are times when a more complex view template is needed, or even when fields from multiple forms need to be shown in the same view. Grok can handle these cases as well, as we will see later on.

Adding a project container at the root of the site

To get to know Grok's form components, let's properly integrate our project model into our to-do list application. We'll have to restructure the code a little, as currently the to-do list container is the root object of the application. We need to have a project container as the root object, and then add a to-do list container to it. To begin, let's modify the top of app.py, immediately before the TodoList class definition, to look like this:

import grok
from zope import interface, schema

class Todo(grok.Application, grok.Container):
    def __init__(self):
        super(Todo, self).__init__()
        self.title = 'To-Do list manager'
        self.next_id = 0

    def deleteProject(self, project):
        del self[project]

First, we import zope.interface and zope.schema. Notice how we keep the Todo class as the root application class, but now it can contain projects instead of lists. We also omitted the addProject method, because the grok.AddForm instance is going to take care of that. Other than that, the Todo class is almost the same.

class IProject(interface.Interface):
    title = schema.TextLine(title=u'Title', required=True)
    kind = schema.Choice(title=u'Kind of project',
                         values=['personal', 'business'])
    description = schema.Text(title=u'Description', required=False)
    next_id = schema.Int(title=u'Next id', default=0)

We then have the interface definition for IProject, where we add the title, kind, description, and next_id fields. These are the fields that we previously added in the __init__ method at product initialization time.

class Project(grok.Container):
    grok.implements(IProject)

    def addList(self, title, description):
        id = str(self.next_id)
        self.next_id = self.next_id + 1
        self[id] = TodoList(title, description)

    def deleteList(self, list):
        del self[list]

The key thing to notice in the Project class definition is that we use the grok.implements class declaration to state that this class will implement the schema that we have just defined.
class AddProjectForm(grok.AddForm):
    grok.context(Todo)
    grok.name('index')
    form_fields = grok.AutoFields(Project)
    label = "To begin, add a new project"

    @grok.action('Add project')
    def add(self, **data):
        project = Project()
        self.applyData(project, **data)
        id = str(self.context.next_id)
        self.context.next_id = self.context.next_id + 1
        self.context[id] = project
        return self.redirect(self.url(self.context[id]))

The actual form view is defined after that, using grok.AddForm as a base class. We assign this view to the main Todo container by using the grok.context annotation. The name index is used for now, so that the default page for the application will be the 'add form' itself. Next, we create the form fields by calling the grok.AutoFields method. Notice that this time the argument to this method call is the Project class directly, rather than the interface. This is possible because the Project class was associated with the correct interface when we previously used grok.implements. After we have assigned the fields, we set the label attribute of the form to the text: To begin, add a new project. This is the title that will be shown on the form. In addition to this new code, all occurrences of grok.context(Todo) in the rest of the file need to be changed to grok.context(Project), as the to-do lists and their views will now belong to a project and not to the main Todo application. For details, take a look at the source code of this article for Grok 1.0 Web Development >> Chapter 5.
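As a closing, hedged sketch (our own illustration, not code from the article), the fields defined on IProject can be exercised directly from a Python prompt, which is a quick way to see the validation the generated form will enforce:

from zope.schema.interfaces import ConstraintNotSatisfied

# Fields are accessible straight off the interface by name.
title_field = IProject['title']
title_field.validate(u'Household chores')   # passes: a Unicode line

kind_field = IProject['kind']
kind_field.validate('personal')             # passes: value is in the Choice list
try:
    kind_field.validate('other')            # not in values, so this raises
except ConstraintNotSatisfied:
    print 'not a valid kind of project'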
Learning jQuery

Packt
27 Sep 2011
9 min read
(For more resources on jQuery, see here.)

Custom events

The events that are triggered naturally by the DOM implementations of browsers are crucial to any interactive web application. However, we are not limited to this set of events in our jQuery code. We can freely add our own custom events to the repertoire. Custom events must be triggered manually by our code. In a sense, they are like regular functions that we define, in that we can cause a block of code to be executed when we invoke it from another place in the script. The .bind() call corresponds to a function definition and the .trigger() call to a function invocation. However, event handlers are decoupled from the code that triggers them. This means that we can trigger events at any time, without knowing in advance what will happen when we do. We might cause a single bound event handler to execute, as with a regular function. We might also cause multiple handlers to run, or even none at all. In order to illustrate this, we can revise our Ajax loading feature to use a custom event. We will trigger a nextPage event whenever the user requests more photos, and bind handlers that watch for this event and perform the work previously done by the .click() handler, as follows:

$(document).ready(function() {
  $('#more-photos').click(function() {
    $(this).trigger('nextPage');
    return false;
  });
});

The .click() handler now does very little work itself. After triggering the custom event, it prevents the default behavior by returning false. The heavy lifting is transferred to the new event handlers for the nextPage event, as follows:

(function($) {
  $(document).bind('nextPage', function() {
    var url = $('#more-photos').attr('href');
    if (url) {
      $.get(url, function(data) {
        $('#gallery').append(data);
      });
    }
  });

  var pageNum = 1;
  $(document).bind('nextPage', function() {
    pageNum++;
    if (pageNum < 20) {
      $('#more-photos')
        .attr('href', 'pages/' + pageNum + '.html');
    } else {
      $('#more-photos').remove();
    }
  });
})(jQuery);

The largest difference is that we have split what was once a single function into two. This is simply to illustrate that a single event trigger can cause multiple bound handlers to fire. The other point to note is that we are illustrating another application of event bubbling here. Our nextPage handlers could be bound to the link that triggers the event, but we would need to wait until the DOM was ready to do this. Instead, we are binding the handlers to the document itself, which is available immediately, so we can do the binding outside of $(document).ready(). The event bubbles up and, so long as another handler does not stop the event propagation, our handlers will be fired.

Infinite scrolling

Just as multiple event handlers can react to the same triggered event, the same event can be triggered in multiple ways. We can demonstrate this by adding an infinite scrolling feature to our page. This popular technique lets the user's scroll bar manage the loading of content, fetching additional content whenever the user reaches the end of what has been loaded thus far. We will begin with a simple implementation and then improve it in successive examples.
The basic idea is to observe the scroll event, measure the current scroll bar position when scrolling occurs, and load the new content if needed, as follows:

(function($) {
  var $window = $(window);

  function checkScrollPosition() {
    var distance = $window.scrollTop() + $window.height();
    if ($('#container').height() <= distance) {
      $(document).trigger('nextPage');
    }
  }

  $(document).ready(function() {
    $window.scroll(checkScrollPosition).scroll();
  });
})(jQuery);

The new checkScrollPosition() function is set as a handler for the window's scroll event. This function computes the distance from the top of the document to the bottom of the window, and then compares this distance to the total height of the main container in the document. As soon as these reach equality, we need to fill the page with additional photos, so we trigger the nextPage event. As soon as we bind the scroll handler, we immediately trigger it with a call to .scroll(). This kick-starts the process, so that if the page is not initially filled with photos, an Ajax request is made right away.

Custom event parameters

When we define functions, we can set up any number of parameters to be filled with argument values when we actually call the function. Similarly, when triggering a custom event, we may want to pass along additional information to any registered event handlers. We can accomplish this by using custom event parameters. The first parameter defined for any event handler, as we have seen, is the DOM event object, as enhanced and extended by jQuery. Any additional parameters we define are available for our discretionary use. To see this in action, we will add a new option to the nextPage event, allowing us to scroll the page down to display the newly added content, as follows:

(function($) {
  $(document).bind('nextPage', function(event, scrollToVisible) {
    var url = $('#more-photos').attr('href');
    if (url) {
      $.get(url, function(data) {
        var $data = $(data).appendTo('#gallery');
        if (scrollToVisible) {
          var newTop = $data.offset().top;
          $(window).scrollTop(newTop);
        }
        checkScrollPosition();
      });
    }
  });
})(jQuery);

We have now added a scrollToVisible parameter to the event callback. The value of this parameter determines whether we perform the new functionality, which entails measuring the position of the new content and scrolling to it. Measurement is easy using the .offset() method, which returns the top and left coordinates of the new content. In order to move down the page, we call the .scrollTop() method. Now we need to pass an argument into the new parameter. All that is required is providing an extra value when invoking the event using .trigger(). When nextPage is triggered through scrolling, we don't want the new behavior to occur, as the user is already manipulating the scroll position directly. When the More Photos link is clicked, on the other hand, we want the newly added photos to be displayed on the screen, so we will pass a value of true to the handler, as follows:

$(document).ready(function() {
  $('#more-photos').click(function() {
    $(this).trigger('nextPage', [true]);
    return false;
  });
  $window.scroll(checkScrollPosition).scroll();
});

In the call to .trigger(), we are now providing an array of values to pass to event handlers. In this case, the value of true will be given to the scrollToVisible parameter of the event handler. Note that custom event parameters are optional on both sides of the transaction.
We have two calls to .trigger() in our code, only one of which provides argument values; when the other is called, this does not result in an error, but rather the value of null is passed to each parameter. Similarly, the lack of a scrollToVisible parameter in one of our .bind('nextPage') calls is not an error; if a parameter does not exist when an argument is passed, that argument is simply ignored.

Throttling events

A major issue with the infinite scrolling feature as we have implemented it is its performance impact. While our code is brief, the checkScrollPosition() function does need to do some work to measure the dimensions of the page and window. This effort can accumulate rapidly, because in some browsers the scroll event is triggered repeatedly during the scrolling of the window. The result of this combination could be choppy or sluggish performance. Several native events have the potential for frequent triggering. Common culprits include scroll, resize, and mousemove. To account for this, we need to limit our expensive calculations, so that they only occur after some of the event instances, rather than each one. This technique is known as event throttling.

$(document).ready(function() {
  var timer = 0;
  $window.scroll(function() {
    if (!timer) {
      timer = setTimeout(function() {
        checkScrollPosition();
        timer = 0;
      }, 250);
    }
  }).scroll();
});

Rather than setting checkScrollPosition() directly as the scroll event handler, we are using the JavaScript setTimeout function to defer the call by 250 milliseconds. More importantly, we are checking for a currently running timer first, before performing any work. As checking the value of a simple variable is extremely fast, most of the calls to our event handler will return almost immediately. The checkScrollPosition() call will only happen when a timer completes, which will be at most every 250 milliseconds. We can easily adjust the setTimeout() value to a comfortable number that strikes a reasonable compromise between instant feedback and low performance impact. Our script is now a good web citizen.

Other ways to perform throttling

The throttling technique we have implemented is efficient and simple, but it is not the only solution. Depending on the performance characteristics of the action being throttled and typical interaction with the page, we may, for instance, want to institute a single timer for the page rather than create one when an event begins:

$(document).ready(function() {
  var scrolled = false;
  $window.scroll(function() {
    scrolled = true;
  });
  setInterval(function() {
    if (scrolled) {
      checkScrollPosition();
      scrolled = false;
    }
  }, 250);
  checkScrollPosition();
});

Unlike our previous throttling code, this polling solution uses a single setInterval() call to begin checking the state of the scrolled variable every 250 milliseconds. Any time a scroll event occurs, scrolled is set to true, ensuring that the next time the interval passes, checkScrollPosition() will be called. A third solution for limiting the amount of processing performed during frequently repeated events is debouncing. This technique, named after the post-processing required to handle repeated signals sent by electrical switches, ensures that only a single, final event is acted upon even when many have occurred; a short sketch of this approach follows.
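The text stops short of code for debouncing, so here is a small hedged sketch of our own (not from the book); it assumes checkScrollPosition() is in scope, as in the earlier examples, and reuses their 250-millisecond delay:

$(document).ready(function() {
  var debounceTimer = 0;
  $(window).scroll(function() {
    // Restart the countdown on every scroll event; the handler only
    // runs once the user has stopped scrolling for 250 milliseconds.
    clearTimeout(debounceTimer);
    debounceTimer = setTimeout(checkScrollPosition, 250);
  });
});

Compared with throttling, debouncing trades responsiveness during the interaction for a single piece of work at the end of it, which suits expensive operations that only need the final state.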
Deferred objects

In jQuery 1.5, a concept known as a deferred object was introduced to the library. A deferred object encapsulates an operation that takes some time to complete. These objects allow us to easily handle situations in which we want to act when a process completes, but we don't necessarily know how long the process will take, or even whether it will be successful.

A new deferred object can be created at any time by calling the $.Deferred() constructor. Once we have such an object, we can perform long-lasting operations and then call the .resolve() or .reject() methods on the object to indicate that the operation was successful or unsuccessful. It is somewhat unusual to do this manually, however. Typically, rather than creating our own deferred objects by hand, jQuery or its plugins will create the object and take care of resolving or rejecting it. We just need to learn how to use the object that is created.

Creating deferred objects is a very advanced topic. Rather than detailing how the $.Deferred() constructor operates, we will focus here on how jQuery effects take advantage of deferred objects.
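As a brief, hedged illustration of the deferred API described above (our own sketch, not an example from the book), here is how a hand-made deferred object could wrap a timed operation:

var ready = $.Deferred();

// Some long-lasting operation; a timer stands in for real work here.
setTimeout(function() {
  ready.resolve('finished');   // or ready.reject('failed') on error
}, 1000);

// Handlers can be attached before or after resolution takes place.
ready.done(function(message) {
  console.log('Operation ' + message);
}).fail(function(message) {
  console.log('Operation ' + message);
});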