How-To Tutorials - Web Development

1802 Articles

YUI 2.X: Using Event Component

Packt
14 Dec 2010
7 min read
Yahoo! User Interface Library 2.x Cookbook Over 70 simple incredibly effective recipes for taking control of Yahoo! User Interface Library like a Pro Easily develop feature-rich internet applications to interact with the user using various built-in components of YUI library Simple and powerful recipes explaining how to use and implement YUI 2.x components Gain a thorough understanding of the YUI tools Plenty of example code to help you improve your coding and productivity with the YUI Library Hands-on solutions that take a practical approach to recipes In this article, you will learn how to use YUI to handle JavaScript events, what special events YUI has to improve the functionality of some JavaScript events, and how to write custom events for your own application. Using YUI to attach JavaScript event listeners When attaching events in JavaScript most browsers use the addEventListener function , but the developers of IE use a function called attachEvent. Legacy browsers do not support either function, but instead require developers to attach functions directly to element objects using the 'on' + eventName property (for example myElement.onclick=function(){...}). Additionally, the execution context of event callback functions varies depending on how the event listener is attached. The Event component normalizes all the cross-browser issues, fixes the execution context of the callback function, and provides additional event improvements. This recipe will show how to attach JavaScript event listeners, using YUI. How to do it... Attach a click event to an element: var myElement = YAHOO.util.Dom.get('myElementId'); var fnCallback = function(e) { alert("myElementId was clicked"); }; YAHOO.util.Event.addListener(myElement, 'click', fnCallback); Attach a click event to an element by its ID: var fnCallback = function(e) { alert("myElementId was clicked"); }; YAHOO.util.Event.addListener('myElementId','click',fnCallback) Attach a click event to several elements at once: var ids = ["myElementId1", "myElementId2", "myElementId3"]; var fnCallback = function(e) { var targ = YAHOO.util.Event.getTarget(e); alert(targ.id + " was clicked"); }; YAHOO.util.Event.addListener(ids, 'click', fnCallback); When attaching event listeners, you can provide an object as the optional fourth argument, to be passed through as the second argument to the callback function: var myElem = YAHOO.util.Dom.get('myElementId'); var fnCallback = function(e, obj) { alert(obj); }; var obj = "I was passed through."; YAHOO.util.Event.addListener(myElem,'click',fnCallback,obj); When attaching event listeners, you can change the execution context of the callback function to the fourth argument, by passing true as the optional fifth argument: var myElement = YAHOO.util.Dom.get('myElementId'); var fnCallback = function(e) { alert('My execution context was changed.'); }; var ctx = { /* some object to be the execution context of callback */ }; YAHOO.util.Event.addListener( myElement, 'click', fnCallback, ctx, true); How it works... The addListener function wraps the native event handling functions, normalizing the cross- browser differences. When attaching events, YUI calls the correct browser specific function, or defaults to legacy event handlers. Before executing the callback function, the Event component must (in some browsers) find the event object and adjust the execution context of the callback function. 
The callback function is normalized by wrapping it in a closure function that executes when the browser event fires, thereby allowing YUI to correct the event, before actually executing the callback function. In legacy browsers, which can only have one callback function per event type, YUI attaches a callback function that iterates through the listeners attached by the addListener function There's more... The addListener function returns true if the event listener is attached successfully and false otherwise. If the element to listen on is not available when the addListener function is called, the function will poll the DOM and wait to attach the listener when the element becomes available. Additionally, the Event component also keeps a list of all events that it has attached. This list is maintained to simplify removing events listeners, and so that all event listeners can be removed when the end-user leaves the page. Find all events attached to an element: var listeners = YAHOO.util.Event.getListeners('myElementId'); for (var i=0,j=listeners.length; i<j; i+=1) { var listener = listeners[i]; alert(listener.type); // event type alert(listener.fn); // callback function alert(listener.obj); // second argument of callback alert(listener.adjust); // execution context } Find all events of a certain type attached to an element: // only click listeners var listeners = YAHOO.util.Event.getListeners('myElementId', 'click'); The garbage collector in JavaScript does not always do a good job cleaning up event handlers. When removing nodes from the DOM, remember to remove events you may have added as well. More on YAHOO.util.Event.addListener The addListener function has been aliased by the shorter on function: var myElement = YAHOO.util.Dom.get('myElementId'); var fnCallback = function(e) { alert("myElementId was clicked"); }; YAHOO.util.Event.on(myElement, 'click', fnCallback); By passing an object in as the optional fifth argument of addListener, instead of a Boolean, you can change the execution context of the callback to that object, while still passing in an another object as the optional fourth argument: var myElement = YAHOO.util.Dom.get('myElementId'); var fnCallback = function(e, obj) { // this executes in the context of 'ctx' alert(obj); }; var obj = "I was passed through."; var ctx = { /* some object to be the execution context of callback */ }; YAHOO.util.Event.addListener( myElement,'click',fnCallback,obj, ctx); Lastly, there is an optional Boolean value that can be provided as the sixth argument of addListener, which causes the callback to execute in the event capture phase, instead of the event bubbling phase. You probably won't ever need to set this value to true, but if you want to learn more about JavaScript event phases see: http://www.quirksmode.org/js/events_order.html Event normalization functions The event object, provided as the first argument of the callback function, contains a variety of values that you may need to use (such as the target element, character code, etc.). YUI provides a collection of static functions that normalizes the cross-browser variations of these values. Before trying to use these properties, you should read this recipe, as it walks you through each of those functions. How to do it... 
Fetch the normalized target element of an event:

    var fnCallback = function(e) {
      var targetElement = YAHOO.util.Event.getTarget(e);
      alert(targetElement.id);
    };
    YAHOO.util.Event.on('myElementId', 'click', fnCallback);

Fetch the character code of a key event (also known as the key code):

    var fnCallback = function(e) {
      var charCode = YAHOO.util.Event.getCharCode(e);
      alert(charCode);
    };
    YAHOO.util.Event.on('myElementId', 'keypress', fnCallback);

Fetch the x and y coordinates of a mouse event:

    var fnCallback = function(e) {
      var x = YAHOO.util.Event.getPageX(e);
      var y = YAHOO.util.Event.getPageY(e);
      alert("x-position=" + x + " and y-position=" + y);
    };
    YAHOO.util.Event.on('myElementId', 'click', fnCallback);

Fetch both the x and y coordinates of a mouse event in a single call:

    var fnCallback = function(e) {
      var point = YAHOO.util.Event.getXY(e);
      alert("x-position=" + point[0] + " and y-position=" + point[1]);
    };
    YAHOO.util.Event.on('myElementId', 'click', fnCallback);

Fetch the normalized related target element of an event:

    var fnCallback = function(e) {
      var targetElement = YAHOO.util.Event.getRelatedTarget(e);
      alert(targetElement.id);
    };
    YAHOO.util.Event.on('myElementId', 'click', fnCallback);

Fetch the normalized time of an event:

    var fnCallback = function(e) {
      var time = YAHOO.util.Event.getTime(e);
      alert(time);
    };
    YAHOO.util.Event.on('myElementId', 'click', fnCallback);

Stop the default behavior of an event, its propagation (bubbling), or both:

    var fnCallback = function(e) {
      // prevents the event from bubbling up to ancestors
      YAHOO.util.Event.stopPropagation(e);
      // prevents the event's default behavior
      YAHOO.util.Event.preventDefault(e);
      // prevents both the event's default behavior and bubbling
      YAHOO.util.Event.stopEvent(e);
    };
    YAHOO.util.Event.on('myElementId', 'click', fnCallback);

How it works...

All of these functions test whether a value exists on the event for each cross-browser variation of a property, then normalize that value and return it. The stopPropagation and preventDefault functions actually modify the equivalent cross-browser property of the event and delegate the behavior to the browser.
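As noted earlier, listeners attached to nodes that you later remove from the DOM should be cleaned up explicitly. The article does not show that step, so the following is a minimal sketch, assuming the same placeholder element ID and callback used above, of how detachment typically looks with the YUI 2.x Event utility (removeListener and purgeElement are its standard detachment calls):

    var fnCallback = function(e) {
      alert("myElementId was clicked");
    };
    YAHOO.util.Event.on('myElementId', 'click', fnCallback);

    // Later, before removing the node from the DOM:
    // detach one specific listener...
    YAHOO.util.Event.removeListener('myElementId', 'click', fnCallback);
    // ...or remove every listener attached to the element and,
    // by passing true, to its descendants as well
    YAHOO.util.Event.purgeElement('myElementId', true);

Pairing each addListener/on call with a matching removal keeps long-lived pages from leaking handlers on nodes that no longer exist.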

Handling Invalid Survey Submissions with Django

Packt
20 Apr 2010
5 min read
What would make a survey submission invalid? The only likely error case for our QuestionVoteForm is if no answer is chosen. What happens, then, if we attempt to submit a survey with missing answers? If we try it, we see that the result is not ideal: There are at least two problems here. First, the placement of the error messages, above the survey questions, is confusing. It is hard to know what the first error message on the page is referring to, and the second error looks like it is associated with the first question. It would be better to move the error messages closer to where the selection is actually made, such as between the question and answer choice list. Second, the text of the error message is not very good for this particular form. Technically the list of answer choices is a single form field, but to a general user the word field in reference to a list of choices sounds odd. We will correct both of these errors next. Coding custom error message and placement Changing the error message is easy, since Django provides a hook for this. To override the value of the error message issued when a required field is not supplied, we can specify the message we would like as the value for the required key in an error_messages dictionary we pass as an argument in the field declaration. Thus, this new definition for the answer field in QuestionVoteForm will change the error message to Please select an answer below: class QuestionVoteForm(forms.Form): answer = forms.ModelChoiceField(widget=forms.RadioSelect, queryset=None, empty_label=None, error_messages={'required': 'Please select an answer below:'}) Changing the placement of the error message requires changing the template. Instead of using the as_p convenience method, we will try displaying the label for the answer field, errors for the answer field, and then the answer field itself, which displays the choices. The {% for %} block that displays the survey forms in the survey/active_survey.html template then becomes: {% for qform in qforms %} {{ qform.answer.label }} {{ qform.answer.errors }} {{ qform.answer }}{% endfor %} How does that work? Better than before. If we try submitting invalid forms now, we see: While the error message itself is improved, and the placement is better, the exact form of the display is not ideal. By default, the errors are shown as an HTML unordered list. We could use CSS styling to remove the bullet that is appearing (as we will eventually do for the list of choices), but Django also provides an easy way to implement custom error display, so we could try that instead. To override the error message display, we can specify an alternate error_class attribute for QuestionVoteForm, and in that class, implement a __unicode__ method that returns the error messages with our desired formatting. 
An initial implementation of this change to QuestionVoteForm and the new class might be:

    class QuestionVoteForm(forms.Form):
        answer = forms.ModelChoiceField(widget=forms.RadioSelect,
            queryset=None,
            empty_label=None,
            error_messages={'required': 'Please select an answer below:'})

        def __init__(self, question, *args, **kwargs):
            super(QuestionVoteForm, self).__init__(*args, **kwargs)
            self.fields['answer'].queryset = question.answer_set.all()
            self.fields['answer'].label = question.question
            self.error_class = PlainErrorList

    from django.forms.util import ErrorList

    class PlainErrorList(ErrorList):
        def __unicode__(self):
            return u'%s' % ' '.join([e for e in self])

The only change to QuestionVoteForm is that its error_class attribute is now set to PlainErrorList in its __init__ method. The PlainErrorList class is based on the django.forms.util.ErrorList class and simply overrides the __unicode__ method to return the errors as a string with no special HTML formatting. The implementation here makes use of the fact that the base ErrorList class inherits from list, so iterating over the instance itself returns the individual errors in turn. These are then joined together with spaces in between, and the whole string is returned.

Note that we're only expecting there to ever be one error here, but just in case we are wrong in that assumption, it is safest to code for multiple errors existing. Although our assumption may never be wrong in this case, it's possible we might decide to re-use this custom error class in other situations where the single-error expectation doesn't hold. If we code to our assumption and simply return the first error in the list, this may result in confusing error displays in situations where there are multiple errors, since we will have prevented reporting all but the first error. If and when we get to that point, we may also find that formatting a list of errors with just spaces intervening is not a good presentation, but we can deal with that later. First, we'd like to simply verify that our customization of the error list display is used.

CSS Properties – Part 1

Packt
09 Feb 2016
13 min read
In this article written by Joshua Johanan, Talha Khan, and Ricardo Zea, authors of the book Web Developer's Reference Guide, the authors state that "CSS properties are characteristics of an element in a markup language (HTML, SVG, XML, and so on) that control their style and/or presentation. These characteristics are part of a constantly evolving standard from the W3C."

(For more resources related to this topic, see here.)

A basic example of a CSS property is border-radius:

    input {
      border-radius: 100px;
    }

There is an incredible number of CSS properties, and learning them all is virtually impossible. Adding to the mix, some CSS properties need to be vendor prefixed (-webkit-, -moz-, -ms-, and so on), making this equation even more complex.

Vendor prefixes are short pieces of CSS that are added to the beginning of a CSS property (and sometimes CSS values too). These pieces of code are directly related either to the company that makes the browser (the "vendor") or to the CSS engine of the browser. There are four major CSS prefixes: -webkit-, -moz-, -ms-, and -o-. They are explained here:

-webkit-: This references Safari's engine, Webkit (Google Chrome and Opera used this engine in the past as well)
-moz-: This stands for Mozilla, which creates Firefox
-ms-: This stands for Microsoft, which creates Internet Explorer
-o-: This stands for Opera, but only targets old versions of the browser

Google Chrome and Opera both support the -webkit- prefix. However, these two browsers do not use the Webkit engine anymore. Their engine is called Blink and is developed by Google.

A basic example of a prefixed CSS property is column-gap:

    .column {
      -webkit-column-gap: 5px;
      -moz-column-gap: 5px;
      column-gap: 5px;
    }

Trying to memorize which CSS properties need to be prefixed is futile. That's why it's important to keep a constant eye on CanIUse.com. It's also important to automate the prefixing process with tools such as Autoprefixer or -prefix-free, or with mixins in preprocessors, and so on. However, vendor prefixing isn't in the scope of the book, so the properties we'll discuss are listed without any vendor prefixes. If you want to learn more about vendor prefixes, you can visit the Mozilla Developer Network (MDN) at http://tiny.cc/mdn-vendor-prefixes.

Let's get the CSS properties reference rolling.

Animation

Unlike the old days of Flash, where creating animations required third-party applications and plugins, today we can accomplish practically the same things with a lot less overhead, better performance, and greater scalability, all through CSS only. Forget plugins and third-party software! All we need is a text editor, some imagination, and a bit of patience to wrap our heads around some of the animation concepts CSS brings to our plate.

Base markup and CSS

Before we dive into all the animation properties, we will use the following markup and animation structure as our base:

HTML:

    <div class="element"></div>

CSS:

    .element {
      width: 300px;
      height: 300px;
    }

    @keyframes fadingColors {
      0% {
        background: red;
      }
      100% {
        background: black;
      }
    }

In the examples, we will only see the .element rule, since the HTML and the @keyframes fadingColors declaration will remain the same. The @keyframes declaration block is a custom animation that can be applied to any element. When applied, the element's background will go from red to black. Ok, let's do this.
animation-name

The animation-name CSS property is the name of the @keyframes at-rule that we want to execute, and it looks like this:

    animation-name: fadingColors;

Description

In the HTML and CSS base example, our @keyframes at-rule had an animation where the background color went from red to black. The name of that animation is fadingColors. So, we can call the animation like this:

CSS:

    .element {
      width: 300px;
      height: 300px;
      animation-name: fadingColors;
    }

This is a valid rule using the longhand. There are clearly no issues with it at all. The thing is that the animation won't run unless we add animation-duration to it.

animation-duration

The animation-duration CSS property defines the amount of time the animation will take to complete a cycle, and it looks like this:

    animation-duration: 2s;

Description

We can specify the units either in seconds using s or in milliseconds using ms. Specifying a unit is required. Specifying a value of 0s means that the animation should never actually run. However, since we do want our animation to run, we will use the following lines of code:

CSS:

    .element {
      width: 300px;
      height: 300px;
      animation-name: fadingColors;
      animation-duration: 2s;
    }

As mentioned earlier, this will make a box go from its red background to black in 2 seconds, and then stop.

animation-iteration-count

The animation-iteration-count CSS property defines the number of times the animation should be played, and it looks like this:

    animation-iteration-count: infinite;

Description

There are two kinds of values: infinite, or a number such as 1, 3, or 0.5. Negative numbers are not allowed. Add the following code to the prior example:

CSS:

    .element {
      width: 300px;
      height: 300px;
      animation-name: fadingColors;
      animation-duration: 2s;
      animation-iteration-count: infinite;
    }

This will make a box go from its red background to black, start over again with the red background and go to black, infinitely.

animation-direction

The animation-direction CSS property defines the direction in which the animation should play after each cycle, and it looks like this:

    animation-direction: alternate;

Description

There are four values: normal, reverse, alternate, and alternate-reverse.

normal: It makes the animation play forward. This is the default value.
reverse: It makes the animation play backward.
alternate: It makes the animation play forward in the first cycle, then backward in the next cycle, then forward again, and so on. In addition, timing functions are affected, so if we have ease-out, it gets replaced by ease-in when played in reverse. We'll look at these timing functions in a minute.
alternate-reverse: It's the same thing as alternate, but the animation starts backward, from the end.

In our current example, we have a continuous animation. However, the background color has a "hard stop" when going from black (end of the animation) to red (start of the animation). Let's create a more fluid animation by making the black background fade into red and then red into black without any hard stops. Basically, we are trying to create a "pulse-like" effect:

CSS:

    .element {
      width: 300px;
      height: 300px;
      animation-name: fadingColors;
      animation-duration: 2s;
      animation-iteration-count: infinite;
      animation-direction: alternate;
    }

animation-delay

The animation-delay CSS property allows us to define exactly when an animation should start. This means that as soon as the animation has been applied to an element, it will obey the delay before it starts running.
It looks like this:

    animation-delay: 3s;

Description

We can specify the units either in seconds using s or in milliseconds using ms. Specifying a unit is required. Negative values are allowed. Take into consideration that using a negative value means the animation should start right away, but it will start midway into the animation, offset by the opposite amount of time of the negative value. Use negative values with caution.

CSS:

    .element {
      width: 300px;
      height: 300px;
      animation-name: fadingColors;
      animation-duration: 2s;
      animation-iteration-count: infinite;
      animation-direction: alternate;
      animation-delay: 3s;
    }

This will make the animation start after 3 seconds have passed.

animation-fill-mode

The animation-fill-mode CSS property defines which values are applied to an element before and after the animation; basically, outside the time the animation is being executed. It looks like this:

    animation-fill-mode: none;

Description

There are four values: none, forwards, backwards, and both.

none: No styles are applied before or after the animation.
forwards: The animated element will retain the styles of the last keyframe. This is the most used value.
backwards: The animated element will retain the styles of the first keyframe, and these styles will remain during the animation-delay period. This is very likely the least used value.
both: The animated element will retain the styles of the first keyframe before starting the animation and the styles of the last keyframe after the animation has finished. In many cases, this is almost the same as using forwards.

The prior values are better suited to animations that have an end and stop. In our example, we're using a fading/pulsating animation, so the best value to use is none.

CSS:

    .element {
      width: 300px;
      height: 300px;
      animation-name: fadingColors;
      animation-duration: 2s;
      animation-iteration-count: infinite;
      animation-direction: alternate;
      animation-delay: 3s;
      animation-fill-mode: none;
    }

animation-play-state

The animation-play-state CSS property defines whether an animation is running or paused, and it looks like this:

    animation-play-state: running;

Description

There are two values: running and paused. These values are self-explanatory.

CSS:

    .element {
      width: 300px;
      height: 300px;
      animation-name: fadingColors;
      animation-duration: 2s;
      animation-iteration-count: infinite;
      animation-direction: alternate;
      animation-delay: 3s;
      animation-fill-mode: none;
      animation-play-state: running;
    }

In this case, defining animation-play-state as running is redundant, but I'm listing it for the purposes of the example.

animation-timing-function

The animation-timing-function CSS property defines how an animation's speed should progress throughout its cycles, and it looks like this:

    animation-timing-function: ease-out;

There are five predefined values, also known as easing functions, for the Bézier curve (we'll see what the Bézier curve is in a minute): ease, ease-in, ease-out, ease-in-out, and linear.

ease

The ease function accelerates sharply at the beginning and starts slowing down towards the middle of the cycle. Its syntax is as follows:

    animation-timing-function: ease;

ease-in

The ease-in function starts slowly, accelerating until the animation sharply ends. Its syntax is as follows:

    animation-timing-function: ease-in;

ease-out

The ease-out function starts quickly and gradually slows down towards the end:

    animation-timing-function: ease-out;

ease-in-out

The ease-in-out function starts slowly and gets fast in the middle of the cycle.
It then starts slowing down towards the end. Its syntax is as follows:

    animation-timing-function: ease-in-out;

linear

The linear function has a constant speed; no acceleration of any kind happens. Its syntax is as follows:

    animation-timing-function: linear;

Now, the easing functions are built on a curve named the Bézier curve, and can be called using the cubic-bezier() function or the steps() function.

cubic-bezier()

The cubic-bezier() function allows us to create custom acceleration curves. Most use cases can benefit from the already defined easing functions we just mentioned (ease, ease-in, ease-out, ease-in-out, and linear), but if you're feeling adventurous, cubic-bezier() is your best bet. (A Bézier curve diagram, taken from cubic-bezier.com, accompanied this section in the original article.)

Parameters

The cubic-bezier() function takes four parameters, as follows:

    animation-timing-function: cubic-bezier(x1, y1, x2, y2);

X and Y represent the x and y axes. The numbers 1 and 2 after each axis represent the control points: 1 represents the control point starting on the lower left, and 2 represents the control point on the upper right.

Description

Let's represent all five predefined easing functions with the cubic-bezier() function:

    ease:        animation-timing-function: cubic-bezier(.25, .1, .25, 1);
    ease-in:     animation-timing-function: cubic-bezier(.42, 0, 1, 1);
    ease-out:    animation-timing-function: cubic-bezier(0, 0, .58, 1);
    ease-in-out: animation-timing-function: cubic-bezier(.42, 0, .58, 1);
    linear:      animation-timing-function: cubic-bezier(0, 0, 1, 1);

Not sure about you, but I prefer to use the predefined values. We could start tweaking and testing each value to the decimal, saving and waiting for the live refresh to do its thing, but that's too much time wasted testing if you ask me. The amazing Lea Verou created the best web app for working with Bézier curves. You can find it at cubic-bezier.com. This is by far the easiest way to work with Bézier curves, and I highly recommend this tool. The Bézier curve image mentioned earlier was taken from the cubic-bezier.com website.

Let's add animation-timing-function to our example:

CSS:

    .element {
      width: 300px;
      height: 300px;
      animation-name: fadingColors;
      animation-duration: 2s;
      animation-iteration-count: infinite;
      animation-direction: alternate;
      animation-delay: 3s;
      animation-fill-mode: none;
      animation-play-state: running;
      animation-timing-function: ease-out;
    }

steps()

The steps() timing function isn't very widely used, but knowing how it works is a must if you're into CSS animations. It looks like this:

    animation-timing-function: steps(6);

This function is very helpful when we want our animation to take a defined number of steps. After adding a steps() function to our current example, it looks like this:

CSS:

    .element {
      width: 300px;
      height: 300px;
      animation-name: fadingColors;
      animation-duration: 2s;
      animation-iteration-count: infinite;
      animation-direction: alternate;
      animation-delay: 3s;
      animation-fill-mode: none;
      animation-play-state: running;
      animation-timing-function: steps(6);
    }

This makes the box take six steps to fade from red to black and vice versa.

Parameters

There are two optional parameters that we can use with the steps() function: start and end.

start: This will make the animation run at the beginning of each step, so the animation starts right away.
end: This will make the animation run at the end of each step; it is the default value if nothing is declared. This gives the animation a short delay before it starts.
Description

After adding the parameters to the CSS code, it looks like this:

CSS:

    .element {
      width: 300px;
      height: 300px;
      animation-name: fadingColors;
      animation-duration: 2s;
      animation-iteration-count: infinite;
      animation-direction: alternate;
      animation-delay: 3s;
      animation-fill-mode: none;
      animation-play-state: running;
      animation-timing-function: steps(6, start);
    }

Granted, the difference is not very noticeable in our example. However, you can see it more clearly in this pen from Louis Lazarus when hovering over the boxes, at http://tiny.cc/steps-timing-function. Stephen Greig's article on Smashing Magazine, Understanding CSS Timing Functions, includes an image that explains start and end in the steps() function.

Also, there are two predefined values for the steps() function: step-start and step-end.

step-start: This is the same thing as steps(1, start). It means that every change happens at the beginning of each interval.
step-end: This is the same thing as steps(1, end). It means that every change happens at the end of each interval.

CSS:

    .element {
      width: 300px;
      height: 300px;
      animation-name: fadingColors;
      animation-duration: 2s;
      animation-iteration-count: infinite;
      animation-direction: alternate;
      animation-delay: 3s;
      animation-fill-mode: none;
      animation-play-state: running;
      animation-timing-function: step-end;
    }

animation

The animation CSS property is the shorthand for animation-name, animation-duration, animation-timing-function, animation-delay, animation-iteration-count, animation-direction, animation-fill-mode, and animation-play-state. It looks like this:

    animation: fadingColors 2s;

Description

For a simple animation to work, we need at least two properties: name and duration. If you feel overwhelmed by all these properties, relax. Let me break them down for you in simple bits.

Using the animation longhand, the code would look like this:

CSS:

    .element {
      width: 300px;
      height: 300px;
      animation-name: fadingColors;
      animation-duration: 2s;
    }

Using the animation shorthand, which is the recommended syntax, the code would look like this:

CSS:

    .element {
      width: 300px;
      height: 300px;
      animation: fadingColors 2s;
    }

This will make a box go from its red background to black in 2 seconds, and then stop.

Final CSS code

Let's see how all the animation properties look in one final example showing both the longhand and shorthand styles.

Longhand style:

    .element {
      width: 300px;
      height: 300px;
      animation-name: fadingColors;
      animation-duration: 2s;
      animation-iteration-count: infinite;
      animation-direction: alternate;
      animation-delay: 3s;
      animation-fill-mode: none;
      animation-play-state: running;
      animation-timing-function: ease-out;
    }

Shorthand style:

    .element {
      width: 300px;
      height: 300px;
      animation: fadingColors 2s infinite alternate 3s none running ease-out;
    }

In the shorthand, the first time value is always taken as animation-duration and the second as animation-delay. All other properties can appear in any order within the declaration. You can find a demo on CodePen at http://tiny.cc/animation.

Summary

In this article, we learned how to add animations to our web project, and we looked in detail at the different properties that can be used to animate it, along with their descriptions.

Resources for Article:

Further resources on this subject:
Using JavaScript with HTML [article]
Welcome to JavaScript in the full stack [article]
A Typical JavaScript Project [article]
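One practical addendum to the animation-play-state property covered above: the article works entirely in CSS, but that property is the one most commonly flipped from script. The following short JavaScript sketch is not from the book; it is a minimal illustration, assuming the same .element markup used in the examples, of pausing and resuming the running animation via the CSS object model:

    // Grab the animated element from the base markup shown earlier
    var element = document.querySelector('.element');

    // Pause the CSS animation (equivalent to animation-play-state: paused)
    element.style.animationPlayState = 'paused';

    // Resume it again (equivalent to animation-play-state: running)
    element.style.animationPlayState = 'running';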

Various subsystem configurations

Packt
25 Jun 2014
8 min read
(For more resources related to this topic, see here.) In a high-performance environment, every costly resource instantiation needs to be minimized. This can be done effectively using pools. The different subsystems in WildFly often use various pools of resources to minimize the cost of creating new ones. These resources are often threads or various connection objects. Another benefit is that the pools work as a gatekeeper, hindering the underlying system from being overloaded. This is performed by preventing client calls from reaching their target if a limit has been reached. In the upcoming sections of this article, we will provide an overview of the different subsystems and their pools. The thread pool executor subsystem The thread pool executor subsystem was introduced in JBoss AS 7. Other subsystems can reference thread pools configured in this one. This makes it possible to normalize and manage the thread pools via native WildFly management mechanisms, and it allows you to share thread pools across subsystems. The following code is an example taken from the WildFly Administration Guide (https://docs.jboss.org/author/display/WFLY8/Admin+Guide) that describes how the Infinispan subsystem may use the subsystem, setting up four different pools: <subsystem > <thread-factory name="infinispan-factory" priority="1"/> <bounded-queue-thread-pool name="infinispan-transport"> <core-threads count="1"/> <queue-length count="100000"/> <max-threads count="25"/> <thread-factory name="infinispan-factory"/> </bounded-queue-thread-pool> <bounded-queue-thread-pool name="infinispan-listener"> <core-threads count="1"/> <queue-length count="100000"/> <max-threads count="1"/> <thread-factory name="infinispan-factory"/> </bounded-queue-thread-pool> <scheduled-thread-pool name="infinispan-eviction"> <max-threads count="1"/> <thread-factory name="infinispan-factory"/> </scheduled-thread-pool> <scheduled-thread-pool name="infinispan-repl-queue"> <max-threads count="1"/> <thread-factory name="infinispan-factory"/> </scheduled-thread-pool> </subsystem> ... <cache-container name="web" default-cache="repl"listener-executor= "infinispan-listener" eviction-executor= "infinispan-eviction"replication-queue-executor ="infinispan-repl-queue"> <transport executor="infinispan-transport"/> <replicated-cache name="repl" mode="ASYNC" batching="true"> <locking isolation="REPEATABLE_READ"/> <file-store/> </replicated-cache> </cache-container> The following thread pools are available: unbounded-queue-thread-pool bounded-queue-thread-pool blocking-bounded-queue-thread-pool queueless-thread-pool blocking-queueless-thread-pool scheduled-thread-pool The details of these thread pools are described in the following sections: unbounded-queue-thread-pool The unbounded-queue-thread-pool thread pool executor has the maximum size and an unlimited queue. If the number of running threads is less than the maximum size when a task is submitted, a new thread will be created. Otherwise, the task is placed in a queue. This queue is allowed to grow infinitely. The configuration properties are shown in the following table: max-threads Max allowed threads running simultaneously keepalive-time This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.) thread-factory This specifies the thread factory to use to create worker threads. 
bounded-queue-thread-pool

The bounded-queue-thread-pool thread pool executor has a core size, a maximum size, and a specified queue length. If the number of running threads is less than the core size when a task is submitted, a new thread will be created; otherwise, the task is put in the queue. If the queue's maximum size has been reached but the maximum number of threads hasn't, a new thread is also created. If max-threads is hit, the call will be sent to the handoff-executor. If no handoff-executor is configured, the call will be discarded. The configuration properties are as follows:

core-threads: Optional; should be less than max-threads.
queue-length: This specifies the maximum size of the queue.
max-threads: This specifies the maximum number of threads that are allowed to run simultaneously.
keepalive-time: This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
handoff-executor: This specifies an executor to which tasks will be delegated in the event that a task cannot be accepted.
allow-core-timeout: This specifies whether core threads may time out; if false, only threads above the core size will time out.
thread-factory: This specifies the thread factory to use to create worker threads.

blocking-bounded-queue-thread-pool

The blocking-bounded-queue-thread-pool thread pool executor has a core size, a maximum size, and a specified queue length. If the number of running threads is less than the core size when a task is submitted, a new thread will be created; otherwise, the task is put in the queue. If the queue's maximum size has been reached and max-threads has not yet been exceeded, a new thread is created; if max-threads has been reached, the call is blocked. The configuration properties are as follows:

core-threads: Optional; should be less than max-threads.
queue-length: This specifies the maximum size of the queue.
max-threads: This specifies the maximum number of simultaneous threads allowed to run.
keepalive-time: This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
allow-core-timeout: This specifies whether core threads may time out; if false, only threads above the core size will time out.
thread-factory: This specifies the thread factory to use to create worker threads.

queueless-thread-pool

The queueless-thread-pool thread pool is a thread pool executor without any queue. If the number of running threads is less than max-threads when a task is submitted, a new thread will be created; otherwise, the handoff-executor will be called. If no handoff-executor is configured, the call will be discarded. The configuration properties are as follows:

max-threads: The maximum number of threads allowed to run simultaneously.
keepalive-time: The amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
handoff-executor: This specifies an executor to which tasks will be delegated in the event that a task cannot be accepted.
thread-factory: The thread factory to use to create worker threads.

blocking-queueless-thread-pool

The blocking-queueless-thread-pool thread pool executor has no queue. If the number of running threads is less than max-threads when a task is submitted, a new thread will be created. Otherwise, the caller will be blocked.
The configuration properties are shown in the following table: max-threads Max allowed threads running simultaneously keepalive-time This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.) thread-factory This specifies the thread factory to use to create worker threads scheduled-thread-pool The scheduled-thread-pool thread pool is used by tasks that are scheduled to trigger at a certain time. The configuration properties are shown in the following table: max-threads Max allowed threads running simultaneously keepalive-time This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.) thread-factory This specifies the thread factory to use to create worker threads Monitoring All of the pools just mentioned can be administered and monitored using both CLI and JMX (actually, the Admin Console can be used to administer, but not see, any live data). The following example and screenshots show the access to an unbounded-queue-thread-pool called test. Using CLI, run the following command: /subsystem=threads/unbounded-queue-thread-pool=test:read-resource (include-runtime=true) The response to the preceding command is as follows: { "outcome" => "success", "result" => { "active-count" => 0, "completed-task-count" => 0L, "current-thread-count" => 0, "keepalive-time" => undefined, "largest-thread-count" => 0, "max-threads" => 100, "name" => "test", "queue-size" => 0, "rejected-count" => 0, "task-count" => 0L, "thread-factory" => undefined } } Using JMX (query and result in the JConsole UI), run the following code: jboss.as:subsystem=threads,unbounded-queue-thread-pool=test An example thread pool by JMX is shown in the following screenshot: An example thread pool by JMX The following screenshot shows the corresponding information in the Admin Console Example thread pool—Admin Console The future of the thread subsystem According to the official JIRA case WFLY-462 (https://issues.jboss.org/browse/WFLY-462), the central thread pool configuration has been targeted for removal in future versions of the application server. It is, however, uncertain that all subprojects will adhere to this. The actual configuration will then be moved out to the subsystem itself. This seems to be the way the general architecture of WildFly is moving in terms of pools—moving away from generic ones and making them subsystem-specific. The different types of pools described here are still valid though. Note that, contrary to previous releases, Stateless EJB is no longer pooled by default. More information of this is available in the JIRA case WFLY-1383. It can be found at https://issues.jboss.org/browse/WFLY-1383.

Using the WebRTC Data API

Packt
09 May 2014
10 min read
(For more resources related to this topic, see here.)

What is WebRTC?

Web Real-Time Communication (WebRTC) is a new (still under active development) open framework for the Web that enables browser-to-browser applications for audio/video calling, video chat, and peer-to-peer file sharing without any additional third-party software or plugins. It was open sourced by Google in 2011 and includes the fundamental building components for high-quality communications on the Web. These components, when implemented in a browser, can be accessed through a JavaScript API, enabling developers to build their own rich media web applications. Google, Mozilla, and Opera support WebRTC and are involved in the development process.

The major components of the WebRTC API are as follows:

getUserMedia: This allows a web browser to access the camera and microphone
PeerConnection: This sets up audio/video calls
DataChannels: These allow browsers to share data via a peer-to-peer connection

Benefits of using WebRTC in your business

Reducing costs: It is a free and open source technology. You don't need to pay for complex proprietary solutions ever. IT deployment and support costs can be lowered because you no longer need to deploy special client software for your customers.

Plugins? You don't need them ever. Before now, you had to use Flash, Java applets, or other tricky solutions to build interactive rich media web applications. Customers had to download and install third-party plugins to be able to use your media content. You also had to keep in mind different solutions/plugins for a variety of operating systems and platforms. Now you don't need to care about any of that.

Peer-to-peer communication: In most cases, communication will be established directly between your customers, and you don't need to have a middle point.

Easy to use: You don't need to be a professional programmer or have a team of certified developers with some kind of specific knowledge. In a basic case, you can easily integrate WebRTC functionality into your web services/sites by using the open JavaScript API or even a ready-to-go framework.

Single solution for all platforms: You don't need to develop a special native version of your web service for different platforms (iOS, Android, Windows, or any other). WebRTC is developed to be a cross-platform and universal tool.

WebRTC is open source and free: The community can discover new bugs and solve them effectively and quickly. Moreover, it is developed and standardized by Mozilla, Google, and Opera, all major software companies.

Topics

The article covers the following topics:

Developing a WebRTC application: You will learn the basics of the technology and build a complete audio/video conference real-life web application. We will also talk about SDP (Session Description Protocol), signaling, client-server interoperation, and configuring STUN and TURN servers.

In Data API, you will learn how to build a peer-to-peer, cross-platform file sharing web service using the WebRTC Data API.

Media streaming and screen casting introduces you to streaming prerecorded media content peer-to-peer and to desktop sharing. In this article, you will build a simple application that provides that kind of functionality.

Nowadays, security and authentication are very important topics, and you definitely don't want to forget about them while developing your applications. So, in this article, you will learn how to make your WebRTC solutions secure, why authentication might be very important, and how you can implement this functionality in your products.
Nowadays, mobile platforms are literally part of our life, so it's important to make your interactive application work great on mobile devices as well. This article will introduce you to aspects that will help you develop great WebRTC products with mobile devices in mind.

Session Description Protocol

SDP is an important part of the WebRTC stack. It is used to negotiate session/media options when establishing a peer connection. It is a protocol intended for describing multimedia communication sessions for the purposes of session announcement, session invitation, and parameter negotiation. It does not deliver media data itself, but is used for negotiation between peers of the media type, format, and all associated properties/options (resolution, encryption, codecs, and so on). The set of properties and parameters is usually called a session profile. Peers have to exchange SDP data using a signaling channel before they can establish a direct connection.

The following is an example of an SDP offer:

    v=0
    o=alice 2890844526 2890844526 IN IP4 host.atlanta.example.com
    s=
    c=IN IP4 host.atlanta.example.com
    t=0 0
    m=audio 49170 RTP/AVP 0 8 97
    a=rtpmap:0 PCMU/8000
    a=rtpmap:8 PCMA/8000
    a=rtpmap:97 iLBC/8000
    m=video 51372 RTP/AVP 31 32
    a=rtpmap:31 H261/90000
    a=rtpmap:32 MPV/90000

Here we can see that this is a video and audio session, and multiple codecs are offered.

The following is an example of an SDP answer:

    v=0
    o=bob 2808844564 2808844564 IN IP4 host.biloxi.example.com
    s=
    c=IN IP4 host.biloxi.example.com
    t=0 0
    m=audio 49174 RTP/AVP 0
    a=rtpmap:0 PCMU/8000
    m=video 49170 RTP/AVP 32
    a=rtpmap:32 MPV/90000

Here we can see that only one codec is accepted in reply to the offer above. You can find more SDP session examples at https://www.rfc-editor.org/rfc/rfc4317.txt. You can also find in-depth details on SDP in the appropriate RFC at http://tools.ietf.org/html/rfc4566.

Configuring and installing your own STUN server

As you already know, it is important to have access to a STUN/TURN server when working with peers located behind a NAT or firewall. In this article, while developing our application, we used public STUN servers (actually, they are public Google servers accessible from other networks). Nevertheless, if you plan to build your own service, you should install your own STUN/TURN server. This way your application will not depend on a server you can't even control. Today we have public STUN servers from Google; tomorrow they could be switched off. So, the right way is to have your own STUN/TURN server.

In this section, you will be introduced to installing a STUN server, as it is the simpler case. Several implementations of STUN servers can be found on the Internet. You can take one from http://www.stunprotocol.org. It is cross-platform and can be used under Windows, Mac OS X, or Linux.

To start the STUN server, you should use the following command line:

    stunserver --mode full --primaryinterface x1.x1.x1.x1 --altinterface x2.x2.x2.x2

Please pay attention: you need two IP addresses on your machine to run a STUN server. This is mandatory for the STUN protocol to work correctly. The machine can have only one physical network interface, but it should then have a network alias with an IP address different from the one used on the main network interface.

WebSocket

WebSocket is a protocol that provides full-duplex communication channels over a single TCP connection. This is a relatively young protocol, but today all major web browsers, including Chrome, Internet Explorer, Opera, Firefox, and Safari, support it.
WebSocket is a replacement for long-polling as a way to get two-way communication between browser and server. In this article, we will use WebSocket as the transport channel for the signaling server of our videoconference service; our peers will communicate with the signaling server over it. Two important benefits of WebSocket are that it supports HTTPS (secure channel) and that it can be used via a web proxy (nevertheless, some proxies can block the WebSocket protocol).

NAT traversal

WebRTC has a built-in mechanism for using NAT traversal options such as STUN and TURN servers. In this article, we used public STUN (Session Traversal Utilities for NAT) servers, but in real life you should install and configure your own STUN or TURN (Traversal Using Relay NAT) server.

In most cases, you will use a STUN server. It helps with NAT/firewall traversal so that a direct connection between the peers can be established. In other words, the STUN server is utilized only during the connection-establishing stage. After the connection has been established, the peers transfer media data directly between themselves.

In some cases (unfortunately, they are not so rare), a STUN server won't help you get through a firewall or NAT, and establishing a direct connection between the peers will be impossible; for example, when both peers are behind symmetric NAT. In this case, a TURN server can help. A TURN server works as a retransmitter between the peers: all media data between them is transmitted through the TURN server. If your application gives a list of several STUN/TURN servers to the WebRTC API, the web browser will try the STUN servers first and, if the connection fails, fall back to the TURN servers automatically (a minimal configuration sketch appears at the end of this article).

Preparing the environment

We can prepare the environment by performing the following steps:

Create a folder for the whole application somewhere on your disk. Let's call it my_rtc_project.
Make a directory named my_rtc_project/www; here, we will put all the client-side code (JavaScript files or HTML pages).
The signaling server's code will be placed under its own separate folder, so create the directory my_rtc_project/apps/rtcserver/src.

Kindly note that we will use Git, a free and open source distributed version control system. On Linux boxes it can be installed using the default package manager. For Windows systems, I recommend installing and using this implementation: https://github.com/msysgit/msysgit. If you're using a Windows box, install msysgit and add the path to its bin folder to your PATH environment variable.

Installing Erlang

The signaling server is developed in the Erlang language. Erlang is a great choice for developing server-side applications, for the following reasons:

It is very comfortable and easy for prototyping
Its processes (actors) are very lightweight and cheap
It supports network operations with no need for any external libraries
The code is compiled to byte code that runs on a very powerful Erlang virtual machine

Some great projects

The following projects are developed using Erlang:

Yaws and Cowboy: These are web servers
Riak and CouchDB: These are distributed databases
Cloudant: This is a database service based on a fork of CouchDB
Ejabberd: This is an XMPP instant messaging service
Zotonic: This is a content management system
RabbitMQ: This is a message bus
Wings 3D: This is a 3D modeler
GitHub: This is a web-based hosting service for software development projects that use Git; GitHub uses Erlang for RPC proxies to Ruby processes
WhatsApp: This is a famous mobile messenger, sold to Facebook
Call of Duty: This computer game uses Erlang on the server side
Goldman Sachs: Its high-frequency trading computer programs use Erlang

A very brief history of Erlang

1982 to 1985: During this period, Ericsson starts experimenting with the programming of telecom. The existing languages do not suit the task.
1985 to 1986: During this period, Ericsson decides they must develop their own language, with the desirable features of Lisp, Prolog, and Parlog. The language should have built-in concurrency and error recovery.
1987: In this year, the first experiments with the new language Erlang are conducted.
1988: In this year, Erlang is first used by external users outside the lab.
1989: In this year, Ericsson works on a fast implementation of Erlang.
1990: In this year, Erlang is presented at ISS'90 and gets new users.
1991: In this year, the fast implementation of Erlang is released to users. Erlang is presented at Telecom'91, and now has a compiler and graphic interface.
1992: In this year, Erlang gets a lot of new users. Ericsson ports Erlang to new platforms including VxWorks and Macintosh.
1993: In this year, Erlang gets distribution, which makes it possible to run a homogeneous Erlang system on heterogeneous hardware. Ericsson starts selling Erlang implementations and Erlang Tools, and a separate organization in Ericsson provides support.

Erlang is supported by many platforms. You can download and install it using the main website: http://www.erlang.org.

Summary

In this article, we have discussed the WebRTC technology and the WebRTC API in detail.

Resources for Article:

Further resources on this subject:
Applying WebRTC for Education and E-learning [Article]
Spring Roo 1.1: Working with Roo-generated Web Applications [Article]
WebSphere MQ Sample Programs [Article]
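The article's full conference and signaling code is not reproduced here, but as a rough illustration of how the pieces discussed above fit together on the browser side (the ICE server list used for NAT traversal and a data channel from the Data API), the following sketch may help. The STUN/TURN URLs and credentials are placeholders rather than real servers, and error handling plus the signaling exchange of the SDP offer/answer and ICE candidates (for example, over a WebSocket) are omitted:

    // ICE configuration: STUN servers are tried first; TURN is the relay fallback.
    // The URLs, username, and credential below are placeholders, not real servers.
    var configuration = {
      iceServers: [
        { urls: 'stun:stun.example.org:3478' },
        { urls: 'turn:turn.example.org:3478', username: 'user', credential: 'secret' }
      ]
    };

    var pc = new RTCPeerConnection(configuration);

    // Create a data channel for peer-to-peer file/message transfer
    var channel = pc.createDataChannel('files');

    channel.onopen = function () {
      channel.send('hello, peer');
    };

    channel.onmessage = function (event) {
      console.log('received: ' + event.data);
    };

    // The offer produced here still has to be delivered to the remote peer
    // through your signaling channel; onopen fires only once both sides
    // have exchanged descriptions and ICE negotiation has succeeded.
    pc.createOffer(function (offer) {
      pc.setLocalDescription(offer);
    }, function (err) {
      console.log(err);
    });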

NetBeans Platform 6.9: Working with Actions

Packt
10 Aug 2010
4 min read
(For more resources on NetBeans, see here.) In Swing, an Action object provides an ActionListener for Action event handling, together with additional features, such as tool tips, icons, and the Action's activated state. One aim of Swing Actions is that they should be reusable, that is, can be invoked from a menu item as well as a related toolbar button and keyboard shortcut. The NetBeans Platform provides an Action framework enabling you to organize Actions declaratively. In many cases, you can simply reuse your existing Actions exactly as they were before you used the NetBeans Platform, once you have declared them. For more complex scenarios, you can make use of specific NetBeans Platform Action classes that offer the advantages of additional features, such as more complex displays in toolbars and support for context-sensitive help. Preparing to work with global actions Before you begin working with global Actions, let's make some changes to our application. It should be possible for the TaskEditorTopComponent to open for a specific task. You should therefore be able to pass a task into the TaskEditorTopComponent. Rather than the TaskEditorPanel creating a new task in its constructor, the task needs to be passed into it and made available to the TaskEditorTopComponent. On the other hand, it may make sense for a TaskEditorTopComponent to create a new task, rather than providing an existing task, which can then be made available for editing. Therefore, the TaskEditorTopComponent should provide two constructors. If a task is passed into the TaskEditorTopComponent, the TaskEditorTopComponent and the TaskEditorPanel are initialized. If no task is passed in, a new task is created and is made available for editing. Furthermore, it is currently only possible to edit a single task at a time. It would make sense to be able to work on several tasks at the same time in different editors. At the same time, you should make sure that the task is only opened once by the same editor. The TaskEditorTopComponent should therefore provide a method for creating new or finding existing editors. In addition, it would be useful if TaskEditorPanels were automatically closed for deleted tasks. Remove the logic for creating new tasks from the constructor of the TaskEditorPanel, along with the instance variable for storing the TaskManager, which is now redundant: public TaskEditorPanel() { initComponents(); this.pcs = new PropertyChangeSupport(this); } Introduce a new method to update a task: public void updateTask(Task task) { Task oldTask = this.task; this.task = task; this.pcs.firePropertyChange(PROP_TASK, oldTask, this.task); this.updateForm(); } Let us now turn to the TaskEditorTopComponent, which currently cannot be instantiated either with or without a task being provided. You now need to be able to pass a task for initializing the TaskEditorPanel. The new default constructor creates a new task with the support of a chained constructor, and passes this to the former constructor for the remaining initialization of the editor. In addition, it should now be able to return several instances of the TaskEditorTopComponent that are each responsible for a specific task. Hence, the class should be extended by a static method for creating new or finding existing instances. These instances are stored in a Map<Task, TaskEditorTopComponent> which is populated by the former constructor with newly created instances. 
The method checks whether the map already stores an instance responsible for the given task, and creates a new one if necessary. Additionally, this method registers a listener on the TaskManager in order to close the relevant editor when a task is deleted. As an instance is now responsible for a particular task, this should be able to be queried, so we introduce another appropriate method. Consequently, the changes to the TaskEditorTopComponent look as follows:

private static Map<Task, TaskEditorTopComponent> tcByTask =
        new HashMap<Task, TaskEditorTopComponent>();

public static TaskEditorTopComponent findInstance(Task task) {
    TaskEditorTopComponent tc = tcByTask.get(task);
    if (null == tc) {
        tc = new TaskEditorTopComponent(task);
    }
    if (null == taskMgr) {
        taskMgr = Lookup.getDefault().lookup(TaskManager.class);
        taskMgr.addPropertyChangeListener(new ListenForRemovedNodes());
    }
    return tc;
}

private class ListenForRemovedNodes implements PropertyChangeListener {
    public void propertyChange(PropertyChangeEvent arg0) {
        if (TaskManager.PROP_TASKLIST_REMOVE.equals(arg0.getPropertyName())) {
            Task task = (Task) arg0.getNewValue();
            TaskEditorTopComponent tc = tcByTask.get(task);
            if (null != tc) {
                tc.close();
                tcByTask.remove(task);
            }
        }
    }
}

private TaskEditorTopComponent() {
    this(Lookup.getDefault().lookup(TaskManager.class));
}

private TaskEditorTopComponent(TaskManager taskMgr) {
    this((taskMgr != null) ? taskMgr.createTask() : null);
}

private TaskEditorTopComponent(Task task) {
    initComponents();
    // ...
    ((TaskEditorPanel) this.jPanel1).updateTask(task);
    this.ic.add(((TaskEditorPanel) this.jPanel1).task);
    this.associateLookup(new AbstractLookup(this.ic));
    tcByTask.put(task, this);
}

public String getTaskId() {
    Task task = ((TaskEditorPanel) this.jPanel1).task;
    return (null != task) ? task.getId() : "";
}

With that, our preparations are complete and we can turn to the following discussion on Actions.

Everything in a Package with concrete5

Packt
28 Mar 2011
10 min read
concrete5 Beginner's Guide
Create and customize your own website with the Concrete5 Beginner's Guide

What's a package?
Before we start creating our package, here are a few words about the functionality and purpose of packages:
- They can hold a single theme or several themes together
- You can include blocks which your theme needs
- You can check the requirements during the installation process in case your package depends on other blocks, configurations, and so on
- A package can be used to hook into events raised by concrete5 to execute custom code during different kinds of actions
- You can create jobs, which run periodically to improve or check things in your website
These are the most important things you can do with a package; some of it doesn't depend on packages, but is easier to handle if you use packages. It's up to you, but putting every extension in a package might even be useful if there's just a single element in it—why?
- You never have to worry where to extract the add-on. It always belongs in the packages directory
- An add-on wrapped in a package can be submitted to the concrete5 marketplace, allowing you to earn money or make some people in the community happy by releasing your add-on for free

Package structure
We've already looked at different structures and you are probably already familiar with most of the directories in concrete5. Before we continue, here are a few words about the package structure, as it's essential that you understand its concept before we continue. A package is basically a complete concrete5 structure within one directory. All the directories are optional though. There is no need to create all of them, but you can create and use all of them within a single package. The directory concrete is a lot like a package as well; it's just located in its own directory and not within packages.

Package controller
Like the blocks we've created, the package has a controller as well. First of all, it is used to handle the installation process, but it's not limited to that. We can handle events and a few more things in the package controller; there's more about that later in this article. For now, we only need the controller to make sure the dashboard knows the package name and description.

Time for action - creating the package controller
Carry out the following steps:
1. First, create a new directory named c5book in packages.
2. Within that directory, create a file named controller.php and put the following content in it:

<?php
defined('C5_EXECUTE') or die(_("Access Denied."));

class c5bookPackage extends Package {
    protected $pkgHandle = 'c5book';
    protected $appVersionRequired = '5.4.0';
    protected $pkgVersion = '1.0';

    public function getPackageDescription() {
        return t("Theme, Templates and Blocks from concrete5 for Beginner's");
    }

    public function getPackageName() {
        return t("c5book");
    }

    public function install() {
        $pkg = parent::install();
    }
}
?>

3. You can create a file named icon.png, 97 x 97 pixels with 4px rounded transparent corners. This is the official specification that you have to follow if you want to upload your add-on to the concrete5 marketplace.
4. Once you've created the directory and the mandatory controller, you can go to your dashboard and click on Add Functionality. It looks a lot like a block, but when you click on Install, the add-on is going to appear in the packages section.

What just happened?
The controller we created looks and works a lot like a block controller, which you should have seen and created already.
However, let's go through all the elements of the package controller anyway, as it's important that you understand them:
- pkgHandle: A unique handle for your package. You'll need this when you access your package from code.
- appVersionRequired: The minimum concrete5 version required to install the add-on. concrete5 will check this during the installation process.
- pkgVersion: The current version of the package. Make sure that you change the number when you release an update for a package; concrete5 has to know that it is installing an update and not a new version.
- getPackageDescription: Returns the description of your package. Use the t-function to keep it translatable.
- getPackageName: The same as above, just a bit shorter.
- install: You could remove this method in the controller above, since we're only calling its parent method and don't check anything else. It has no influence, but we'll need this method later when we put blocks in our package. It's just a skeleton for the next steps at the moment.

Moving templates into the package
Remember the templates we've created? We placed them in the top-level blocks directory. That worked like a charm, but imagine what happens when you create a theme which also needs some block templates in order to make sure the blocks look like the theme. You'd have to copy files into the blocks directory as well as themes. This is exactly what we're trying to avoid with packages. It's rather easy with templates; they work almost anywhere. You just have to copy the folder slideshow from blocks to packages/c5book/blocks, as shown in the following screenshot:

This step was even easier than most things we did before. We simply moved our templates into a different directory—nothing else. concrete5 looks for custom templates in different places, such as:
- concrete/blocks/<block-name>/templates
- blocks/<block-name>/templates
- packages/<package-name>/blocks/<block-name>/templates
It doesn't matter where you put your templates; concrete5 will find them.

Moving themes and blocks into the package
Now that we've got our templates in the package, let's move the new blocks we've created into that package as well. The process is similar, but we have to call a method in the installer which installs our block. concrete5 does not automatically install blocks within packages. This means that we have to extend the empty install method shown earlier. Before we move the blocks into the package, you should remove all blocks first. To do this, go to your dashboard, click on Add Functionality, click on the Edit button next to the block you want to move, and click on the Remove button in the next screen. We'll start with the jqzoom block. Please note: removing a block will, of course, remove all the blocks you've added to your pages. Content will be lost if you move a block into a package after you've already used it.

Time for action – moving the jQZoom block into the package
Carry out the following steps:
1. As mentioned earlier, remove the jqzoom block from your website by using the Add Functionality section in your dashboard.
2. Move the directory blocks/jqzoom to packages/c5book/blocks.
3. Open the package controller we created a few pages earlier; you can find it at packages/c5book/controller.php. The following snippet shows only a part of the controller, the install method. The only thing you have to do is insert the highlighted line:

public function install() {
    $pkg = parent::install();

    // install blocks
    BlockType::installBlockTypeFromPackage('jqzoom', $pkg);
}

4. Save the file and go to your dashboard again.
5. Select Add Functionality and locate the c5book package; click on Edit and then Uninstall Package, and confirm the process on the next screen.
6. Back on the Add Functionality screen, reinstall the package, which will automatically install the block.

What just happened?
Besides moving files, we only had to add a single line of code to our existing package controller. This is necessary because blocks within packages aren't automatically installed. When installing a package, only the install method of the controller is called—exactly the place where we hook in and install our block. The installBlockTypeFromPackage method takes two parameters: the block handle and the package object. However, this doesn't mean that packages behave like namespaces. What does this mean? A block is connected to a package. This is necessary in order to be able to uninstall the block when removing the package, along with some other reasons. Even though there's a connection between the two objects, a block handle must be unique across all packages.

You've seen that we had to remove and reinstall the package several times while we only moved a block. At this point, it probably looks a bit weird to do that, especially as you're going to lose some content on your website. However, when you're more familiar with the concrete5 framework, you'll usually know whether you're going to need a package and make that decision before you start creating new blocks. If you're still in doubt, don't worry about it too much and create a package and not just a block. Using a package is usually the safest choice. Don't forget that all instances of a block will be removed from all pages when you uninstall the block from your website. Make sure your package structure doesn't change before you start adding content to your website.

Time for action – moving the PDF block into the package
Some blocks depend on helpers, files, and libraries which aren't in the block directory. The PDF generator block is such an example. It depends on a file found in the tools directory in the root of your concrete5 website. How do we include such a file in a package?
1. Move the pdf directory from blocks to packages/c5book/blocks, since we also want to include the block in the package.
2. Locate the c5book directory within packages and create a new subdirectory named tools.
3. Move generate_pdf.php from tools to packages/c5book/tools.
4. Create another directory named libraries in packages/c5book.
5. Move the mpdf50 directory from libraries to packages/c5book/libraries.
6. As we've moved two objects, we have to make sure our code looks for them in the right place. Open packages/c5book/tools/generate_pdf.php and look for Loader::library at the beginning of the file. We have to add a second parameter to Loader::library, as shown here:

<?php
defined('C5_EXECUTE') or die(_("Access Denied."));

Loader::library('mpdf50/mpdf', 'c5book');

$fh = Loader::helper('file');

$header = <<<EOT
<style type="text/css">
body { font-family: Helvetica, Arial; }
h1 { border-bottom: 1px solid black; }
</style>
EOT;

7. Next, open packages/c5book/blocks/pdf/view.php. We have to add the package handle as the second parameter to make sure the tool file is loaded from the package:

<!--hidden_in_pdf_start-->
<?php
defined('C5_EXECUTE') or die(_('Access Denied.'));

$nh = Loader::helper('navigation');
$url = Loader::helper('concrete/urls');

$toolsUrl = $url->getToolsURL('generate_pdf', 'c5book');
$toolsUrl .= '?p=' . rawurlencode($nh->getLinkToCollection($this->c, true));

echo "<a href=\"{$toolsUrl}\">PDF</a>";
?>
<!--hidden_in_pdf_end-->

What just happened?
In the preceding example, we put a file in the tools directory and a PDF generator library in the libraries directory, both of which we had to move as well. Even at the risk of saying the same thing several times: a package can contain any element of concrete5—libraries, tools, controllers, images, and so on. By putting all files in a single package directory, we can make sure that all files are installed at once, thus making sure all dependencies are met. Nothing has changed besides the small changes we've made to the commands which access or load an element. A helper behaves like a helper, no matter where it's located.

Have a go hero – move more add-ons
We've moved two different blocks into our new package, along with the slideshow block templates. These aren't all the blocks we've created so far. Try to move all the add-ons we've created into our new package. If you need more information about that process, have a look at the following page:
http://www.concrete5.org/documentation/developers/system/packages/

EJB 3.1: Controlling Security Programmatically Using JAAS

Packt
17 Jun 2011
5 min read
EJB 3.1 Cookbook
Build real world EJB solutions with a collection of simple but incredibly effective recipes

The reader is advised to refer to the initial two recipes from the previous article on the process of handling security using annotations.

Getting ready
Programmatic security is effected by adding code within methods to determine who the caller is and then allowing certain actions to be performed based on their capabilities. There are two EJBContext interface methods available to support this type of security: getCallerPrincipal and isCallerInRole. The SessionContext object implements the EJBContext interface. The SessionContext's getCallerPrincipal method returns a Principal object which can be used to get the name or other attributes of the user. The isCallerInRole method takes a string representing a role and returns a Boolean value indicating whether the caller of the method is a member of the role or not.

The steps for controlling security programmatically involve:
1. Injecting a SessionContext instance
2. Using either of the above two methods to effect security

How to do it...
To demonstrate these two methods, we will modify the SecurityServlet to use the VoucherManager's approve method and then augment the approve method with code using these methods.

First, modify the SecurityServlet try block to use the following code. We create a voucher as usual and then follow with a call to the submit and approve methods:

out.println("<html>");
out.println("<head>");
out.println("<title>Servlet SecurityServlet</title>");
out.println("</head>");
out.println("<body>");
voucherManager.createVoucher("Susan Billings", "SanFrancisco", BigDecimal.valueOf(2150.75));
voucherManager.submit();
boolean voucherApproved = voucherManager.approve();
if(voucherApproved) {
    out.println("<h3>Voucher was approved</h3>");
} else {
    out.println("<h3>Voucher was not approved</h3>");
}
out.println("<h3>Voucher name: " + voucherManager.getName() + "</h3>");
out.println("</body>");
out.println("</html>");

Next, modify the VoucherManager EJB by injecting a SessionContext object using the @Resource annotation:

public class VoucherManager {
    ...
    @Resource
    private SessionContext sessionContext;

Let's look at the getCallerPrincipal method first. This method returns a Principal object (java.security.Principal) which has only one method of immediate interest: getName, which returns the name of the principal. Modify the approve method so it uses the SessionContext object to get the Principal and then determines whether the name of the principal is "mary" or not. If it is, then approve the voucher:

public boolean approve() {
    Principal principal = sessionContext.getCallerPrincipal();
    System.out.println("Principal: " + principal.getName());
    if("mary".equals(principal.getName())) {
        voucher.setApproved(true);
        System.out.println("approve method returned true");
        return true;
    } else {
        System.out.println("approve method returned false");
        return false;
    }
}

Execute the SecurityApplication using "mary" as the user. The application should approve the voucher with the output as shown in the following screenshot:

Execute the application again with a user of "sally". This execution will result in an exception:

INFO: Access exception

The getCallerPrincipal method simply returns the principal. This frequently results in the need to explicitly include the name of a user in code. The hard coding of user names is not recommended, and checking against each individual user can be time consuming. It is more efficient to check whether a user is in a role.
The isCallerInRole method allows us to determine whether the user is in a particular role or not. It returns a Boolean value indicating whether the user is in the role specified by the method's string argument. Rewrite the approve method to call the isCallerInRole method and pass the string "manager" to it. If the call returns true, approve the voucher:

public boolean approve() {
    if(sessionContext.isCallerInRole("manager")) {
        voucher.setApproved(true);
        System.out.println("approve method returned true");
        return true;
    } else {
        System.out.println("approve method returned false");
        return false;
    }
}

Execute the application using both "mary" and "sally". The results of the application should be the same as in the previous example where the getCallerPrincipal method was used.

How it works...
The SessionContext class was used either to obtain a Principal object or to determine whether a user was in a particular role. This required the injection of a SessionContext instance and adding code to determine if the user was permitted to perform certain actions. This approach resulted in more code than the declarative approach. However, it provided more flexibility in controlling access to the application. These techniques provide the developer with choices as to how to best meet the needs of the application.

There's more...
It is possible to take different actions depending on the user's role using the isCallerInRole method. Let's assume we are using programmatic security with multiple roles:

@DeclareRoles({"employee", "manager", "auditor"})

We can use a validateAllowance method to accept a travel allowance amount and determine whether it is appropriate based on the role of the user:

public boolean validateAllowance(BigDecimal allowance) {
    if(sessionContext.isCallerInRole("manager")) {
        if(allowance.compareTo(BigDecimal.valueOf(2500)) <= 0) {
            return true;
        } else {
            return false;
        }
    } else if(sessionContext.isCallerInRole("employee")) {
        if(allowance.compareTo(BigDecimal.valueOf(1500)) <= 0) {
            return true;
        } else {
            return false;
        }
    } else if(sessionContext.isCallerInRole("auditor")) {
        if(allowance.compareTo(BigDecimal.valueOf(1000)) <= 0) {
            return true;
        } else {
            return false;
        }
    } else {
        return false;
    }
}

The compareTo method compares two BigDecimal values and returns one of three values:
- -1 – if the first number is less than the second number
- 0 – if the first and second numbers are equal
- 1 – if the first number is greater than the second number

The valueOf static method converts a number to a BigDecimal value. The value is then compared to allowance.

Summary
This article covered programmatic EJB security based upon the Java Authentication and Authorization Service (JAAS) API.

Further resources on this subject:
- EJB 3.1: Introduction to Interceptors [Article]
- EJB 3.1: Working with Interceptors [Article]
- Hands-on Tutorial on EJB 3.1 Security [Article]
- EJB 3 Entities [Article]
- Developing an EJB 3.0 entity in WebLogic Server [Article]
- Building an EJB 3.0 Persistence Model with Oracle JDeveloper [Article]
- NetBeans IDE 7: Building an EJB Application [Article]

Working with Templates in Apache Roller 4.0

Packt
28 Dec 2009
5 min read
Your first template
In essence, a theme is a set of templates, and a template is composed of HTML and Velocity code. You can make your own templates to access your weblog's data and show it to your visitors in any way you want.

Creating and editing templates
In Apache Roller, you can create, edit, or delete templates via the Frontpage: Templates page. Let's see how to use this wonderful tool to create and edit your own templates!

Time for action – creating your first template
In this exercise, you'll learn to create and edit your first custom template via Roller's admin interface:
1. Open your web browser, log into Roller, and go to the Templates page, under the Design tab.
2. On the Add a new template panel, type mytemplate in the Name field, leave the default custom value in the Action field, and click on the Add button.
3. The mytemplate template you've just created will show up in the templates list.
4. Now click on the mytemplate link under the Name field, to open the mytemplate file for editing.
5. Leave the mytemplate value for the Name field, type mytemplate in the Link field, and type My First Template in Apache Roller! in the Description field.
6. Then replace the <html><body></body></html> line with the following HTML code:

<html><body>Welcome to my blog, <b>$model.weblog.name</b>
</br>This is my first template
</br>My weblog's absolute URL is: <b>$url.absoluteSite</b>
</br></body></html>

7. Scroll down the page and click on the Save button to apply the changes to your new template. Roller will show the Template updated successfully message inside a green box to confirm that your changes were saved.
8. Now click on the [launch] link under the Link field to open a new tab in your web browser and see your template in action.
9. You can close this tab now, but leave the Frontpage: Templates window open for the next exercise.

What just happened?
Now you know how to create your own templates! Although the previous example is very simple, you can use it as a starting point to create very complex templates. As noted before, templates are composed of HTML and Velocity code. The template we wrote in the previous exercise uses a few basic HTML elements, or tags:

- <html>, </html>: Defines the start/end of an HTML document. You must write these tags at the beginning/end of each Roller template.
- <body>, </body>: Defines the start/end of an HTML document's body. All the code you will write for your templates must go between the <body> and </body> tags.
- <b>, </b>: Shows text in bold. For example, <b>Hello</b> shows up as Hello in bold type.
- </br>: Indicates a line break. For example, Hello</br>World shows up as Hello and World on separate lines.

There are also some elements from the Velocity Template Language, along with an example from the previous exercise:

- $model.weblog.name: Shows the name of your weblog. For example, <b>$model.weblog.name</b> shows up as Ibacsoft's Weblog.
- $url.absoluteSite: Shows the absolute URL of your weblog. For example, <b>$url.absoluteSite</b> shows up as http://alromero.no-ip.org/roller.

These are just some of the basic HTML tags and Velocity elements you'll learn to use in your templates. In the following sections, we'll see some more, along with further elements from the Velocity Template Language.

The Velocity template language
All templates in Roller use HTML tags, along with Velocity code. In the next subsections, you'll learn about some of the most widely used Velocity elements in your Roller templates.
Using Velocity macros in your Roller weblog
A macro in Velocity is a set of instructions that generates HTML code based on data from your weblog. Macros are very helpful when you need to do the same task more than once. In the following exercise, you'll learn to use some macros included in Roller in order to show your weblog data to your visitors.

Time for action – showing your weblog's blogroll and most recent entries
Now you will use the Velocity Template Language to show your weblog's bookmarks (blogroll) in your custom template, along with the most recent entries:
1. Go to your custom template editing page, and type the following code just above the </body></html> line:

</br>These are my favorite Web sites:
</br>#set($rootFolder = $model.weblog.getBookmarkFolder("/"))
#showBookmarkLinksList($rootFolder false false)

Anatomy of TYPO3 Extension

Packt
14 Oct 2009
8 min read
TYPO3 Extension Categories All TYPO3 extensions are classified into several predefined categories. These categories do not actually differentiate the extensions. They are more like hints for users about extension functionality. Often, it is difficult for the developer to decide which category an extension should belong to. The same extension can provide PHP code that fits into many categories. An extension can contain Frontend (FE) plugins, Backend (BE) modules, static data, and services, all at once. While it is not always the best solution to make such a monster extension, sometimes it is necessary. In this case, the extension author should choose the category that best fits the extension's purpose. For example, if an extension provides a reservation system for website visitors, it is probably FE related, even if it includes a BE module for viewing registrations. If an extension provides a service to log in users, it is most likely a service extension, even if it logs in FE users. It will be easier to decide where the extension fits after we review all the extension categories in this article. Choosing a category for an extension is mandatory. While the TYPO3 Extension Manager can still display extensions without a proper category, this may change and such extensions may be removed from TER (TYPO3 Extension Repository) in the future. The extension category is visible in several places. Firstly, extensions are sorted and grouped by category in the Extension Manager. Secondly, when an extension is clicked in the Extension Manager, its category is displayed in the extension details. If an extension's category is changed from one to another, it does not affect extension functionality. The Extension Manager will show the extension in a different category. So, categories are truly just hints for the user. They do not have any significant meaning in TYPO3. So, why do we care and talk about them? We do so because it is one of those things that make a good extension. If an extension developer starts making a new extension, they should do it properly from the very beginning. And one of the first things to do properly is to decide where an extension belongs. So, let's look into the various extension categories in more detail. Category: Frontend Extensions that belong to the Frontend category provide functionality related to the FE. It does not mean that they generate website output. Typically, extensions from the FE category extend FE functionality in other ways. For example, they can transform links from standard /index.php?id=12345 to /news/new-typo3-bookis-out.htm. Or, they can filter output and clean it up, compress, add or remove HTML comments, and so on. Often, these extensions use one or more hooks in the FE classes. For example, TSFE has hooks to process submitted data, or to post‑filter content (and many others). Examples of FE extensions are source_optimization and realurl. Category: Frontend plugins Frontend plugins is possibly the most popular extension category. Extensions from this category typically generate content for the website. They provide new content objects, or extend existing types of content objects. Typical examples of extensions from the Frontend plugins category are tt_news, comments, ratings, etc. Category: Backend Extensions from the Backend category provide additional functionality for TYPO3 Backend. Often, they are not seen inside TYPO3 BE, but they still do some work. 
Examples of such extensions are various debugging extensions (such as rlmp_ filedevlog) and extensions that add or change the pop-up menu in the BE (such as extra_page_cm_options system extension). This category is rarely used because extensions belonging to it are very special. Category: Backend module Extensions from this category provide additional modules for TYPO3 BE. Typical examples are system extensions such as beuser (provides Tools | Users module) or tstemplate (provides Web | Template module). Category: Services Services extend core TYPO3 functionality. Most known and most popular service extensions are authentication services. TYPO3 Extension Repository contains extensions to authenticate TYPO3 users over phpBB, vBulletine, or LDAP user databases. Services are somewhat special and will not be covered in this article. Extension developers who are interested in the development of services should consult appropriate documentation on the typo3.org website. Category: Examples Extensions from this category provide examples. There are not many, and are typically meant for beginners or for those who want to learn a specific feature of TYPO3, or features that another TYPO3 extension provides. Category: Templates Extensions from this category provide templates. Most often, they have preformatted HTML and CSS files in order to use them with the templateautoparser extension or map with TemplaVoila. Sometimes, they also contain TypoScript templates, for example, tmpl_andreas01 and tmpl_andreas09 extensions. Once installed, they provide pre‑mapped TemplaVoila templates for any website, making it easy to have a website up and running within minutes. Category: Documentation Documentation extensions provide TYPO3 documentation. Normally, TYPO3 extensions contain documentation within themselves, though sometimes, a document is too big to be shipped with extensions. In such cases, it is stored separately. There is an unofficial convention to start an extension key for such extensions with the doc_ prefix (that is, doc_indexed_search). Category: Miscellaneous Everything else that does not fit into any other category goes here; typical examples are skins. But do not put your extension here if you just cannot decide where it fits. In all probability, it should go into one of the other categories, not into Miscellaneous. Extension Files TYPO3 extensions consist of several files. Some of these files have predefined names, and serve a predefined purpose. Others provide code or data but also follow certain naming conventions. We will review all the predefined files in this article and see what purpose they serve. We will look into the files according to their logical grouping. While reading this section, you can take any extension from the typo3conf/ext/ directory at your TYPO3 installation and check the contents of each discussed file. Some files may be missing if the extension does not use them. There is only one file which is mandatory for any TYPO3 extension, ext_emconf.php. We will start examining files starting from this one. Common Files All files from this group have predefined names, and TYPO3 expects to find certain information in them. Hacking these files to serve another purpose or to have a different format usually results in incompatibility with other extensions or TYPO3 itself. While it may work in one installation, it may fail in others. So, avoid doing anything non-standard with these files. ext_emconf.php This is the only required file for any TYPO3 extension. 
And this is the only file that should be modified with great care. If it is corrupt, TYPO3 will not load any extension. This file contains information for the TYPO3 Extension Manager. This information tells the Extension Manager what the extension does, provides, requires, and conflicts with. It also contains a checksum for each file in the extension. This checksum is updated automatically when the extension is sent to TER (TYPO3 Extension Repository). The server administrator can easily check whether anyone has hijacked the extension files by looking into the extension details in the Extension Manager. The modified files are shown in red.

Here is a tip. If you (as an extension developer) send your own extension directly to the customer (bypassing TER upload), or plan to use it on your own server, always update the ext_emconf.php file using the Backup/Delete function of the Extension Manager. This will ensure that TYPO3 shows up-to-date data in the Extension Manager.

Here is an example of an ext_emconf.php file from the smoothuploader extension:

<?php
#############################################################
# Extension Manager/Repository config file for ext:
# "smoothuploader"
# Auto generated 29-02-2008 12:36
# Manual updates:
# Only the data in the array - anything else is removed by
# next write.
# "version" and "dependencies" must not be touched!
#############################################################

$EM_CONF[$_EXTKEY] = array(
    'title' => 'SmoothGallery Uploader',
    'description' => 'Uploads images to SmoothGallery',
    'category' => 'plugin',
    'author' => 'Dmitry Dulepov [Netcreators]',
    'author_email' => 'dmitry@typo3.org',
    'shy' => '',
    'dependencies' => 'rgsmoothgallery',
    'conflicts' => '',
    'priority' => '',
    'module' => '',
    'state' => 'beta',
    'internal' => '',
    'uploadfolder' => 0,
    'createDirs' => '',
    'modify_tables' => 'tx_rgsmoothgallery_image',
    'clearCacheOnLoad' => 0,
    'lockType' => '',
    'author_company' => 'Netcreators BV',
    'version' => '0.3.0',
    'constraints' => array(
        'depends' => array(
            'rgsmoothgallery' => '1.1.1-',
        ),
        'conflicts' => array(
        ),
        'suggests' => array(
        ),
    ),
    '_md5_values_when_last_written' => 'a:12:{s:9:...;}',
    'suggests' => array(
    ),
);
?>

The variable _md5_values_when_last_written is shortened in the listing above.

Integrating Moodle 2.0 with Mahara and GoogleDocs for Business

Packt
29 Apr 2011
9 min read
Moodle 2.0 for Business Beginner's Guide Implement Moodle in your business to streamline your interview, training, and internal communication processes         The Repository integration allows admins to set up external content management systems and use them to complement Moodle's own file management system. Using this integration you can now manage content outside of Moodle and publish it to the system once the document or other content is ready. The Portfolio integration enables users to store their Moodle content in an external e-portfolio system to share with evaluators, peers, and others. Using Google Docs as a repository for Moodle A growing number of organizations are using Google Docs as their primary office suite. Moodle allows you to add Google Docs as a repository so your course authors can link to word processing, spreadsheet, and presentation and form documents on Google Docs. Time for action - configuring the Google Docs plugin To use Google Docs as a repository for Moodle, we first need to configure the plugin like we did with Alfresco. Login to Moodle as a site administrator. From the Site Administration menu, select Plugins and then Repositories. Select Manage Repositories from the Repositories menu. Next to the Google Docs plugin, select Enabled and Visible from the Active menu. On the Configure Google Docs plugin page, give the plugin a different name if you refer to Google Docs as something different in your organization. Click on Save. What just happened You have now set up the Google Docs repository plugin. Each user will have access to their Google Docs account when they add content to Moodle. Time for action - adding a Google Doc to your Moodle course After you have configured the Google Docs plugin, you can add Google Docs to your course. Login to Moodle as a user with course editing privileges. Turn on the editing mode and select File from the Add a resource.. menu in the course section where you want the link to appear. Give the file a name. Remember the name will be the link the user selects to get the file, so be descriptive. Add a description of the file. In the Content section, click the Add.. button to bring up the file browser. Click the Google Docs plugin in the File Picker pop-up window. The first time you access Google Docs from Moodle, you will see a login button on the screen. Click the button and Moodle will take you to the Google Docs login page. Login to Google Docs. Docs will now display a security warning, letting you know an external application (Moodle) is trying to access your file repository. Click on the Grant Access button at the bottom of the screen. Now you will be taken back to the File Picker. Select the file you want to link to your course. If you want to rename the document when it is linked to Moodle, rename it in the Save As text box. Then edit the Author field if necessary and choose a copyright license. Click on Select this file. Select the other options for the file as described in Getting Started with Moodle 2.0 for Business. Click on Save and return to course. What just happened You have now added a Google Doc to your Moodle course. You can add any of the Google Doc types to your course and share them with Moodle users. Google Docs File Formats The Moodle Google Docs plugin makes a copy of the document in a standard office format (rtf, xls, or ppt). When you save the file, any edits to the document after you save it to Moodle will not be displayed. 
Have a go hero Try importing the other Google Docs file formats into your Moodle course and test the download. Time for reflection Using Google Docs effectively requires clear goals, planning, integration with organizational workflows, and training. If you want to link Moodle with an external content repository, how will you ensure the implementation is successful? What business processes could you automate by using one of these content services? Exporting content to e-portfolios Now that we've integrated Moodle with external content repositories it's time to turn our attention to exporting content from Moodle. The Moodle 2 portfolio system allows users to export Moodle content in standard formats, so they can share their work with other people outside of Moodle, or organize their work into portfolios aimed at a variety of audiences. In a corporate environment, portfolios can be used to demonstrate competency for promotion or performance measurement. They can also be used as a directory of expertise within a company, so others can find people they need for special projects. One of the more popular open source portfolio systems is called Mahara. Mahara is a dedicated e-portfolio system for creating collections of work and then creating multiple views on those collections for specific audiences. It also includes a blogging platform, resume builder, and social networking tools. In recent versions, Mahara has begun to incorporate social networking features to enable users to find others with similar interests or specific skill sets. To start, we'll briefly look at installing Mahara, then work through the integration of Moodle with Mahara. Once we've got the two systems talking to each other, we can look at how to export content from Moodle to Mahara and then display it in an e-portfolio. Time for action - installing Mahara Mahara is a PHP and MySQL application like Moodle. Mahara and Moodle share a very similar architecture, and are designed to be complementary in many respects. You can use the same server setup we've already created for Moodle in Getting Started with Moodle 2.0 for Business. However, we need to create a new database to house the Mahara data as well as ensure Mahara has its own space to operate. Go to http://mahara.org. There is a Download link on the right side of the screen. Download the latest stable version (version 1.3 as of this writing). You will need version 1.3 or later to fully integrate with Moodle 2. For the best results, follow the instructions on the Installing Mahara wiki page, http://wiki.mahara.org/System_Administrator%27s_Guide/Installing_Mahara. If you are installing Mahara on the same personal machine as Moodle, be sure to put the Mahara folder at your web server's root level and keep it separate from Moodle. Your URL for Mahara should be similar to your URL for Moodle. What just happened You have now installed Mahara on your test system. Once you have Mahara up and running on your test server, you can begin to integrate Mahara with Moodle. Time for action - configuring the networking and SSO To begin the process of configuring Moodle and Mahara to work together, we need to enable Moodle Networking. You will need to make sure you have xmlrpc, curl, and openssl installed and configured in your PHP build. Networking allows Moodle to share users and authentication with another system. In this case, we are configuring Moodle to allow Moodle users to automatically login to Mahara when they login to Moodle. 
This will create a more seamless experience for the users and enable them to move back and forth between the systems. The steps to configure the Mahara portfolio plugin are as follows:
1. From the Site administration menu, select Advanced features.
2. Find the Networking option and set it to On.
3. Select Save changes. The Networking option will then appear in the site admin menu.
4. Select Networking, then Manage Peers.
5. In the Add a new host form, copy the URL of your Mahara site into the hostname field and then select Mahara as the server type.
6. Open a new window and log in to your Mahara site as the site admin. Select the Site Admin tab.
7. On your Mahara site, select Configure Site. Then select Networking.
8. Copy the public key from the BEGIN tag to the END CERTIFICATE and paste it into the Public Key field in the Moodle networking form.
9. On the resulting page, select the Services tab to set up the services necessary to integrate the portfolio.
10. You will now need to configure the SSO services. Moodle and Mahara can make the following services available for the other system to consume:
- Remote enrollment service: If you Publish the Remote Enrollment Service, Mahara admins will be able to enroll students in Moodle courses. To enable this, you must also publish to the Single Sign On Service Provider service. Subscribe allows you to remotely enroll students in courses on the remote server; it doesn't apply in the context of Mahara.
- Portfolio Services: You must enable both Publish and Subscribe to allow users to send content to Mahara.
- SSO (Identity Provider): If you Publish the SSO service, users can go from Moodle to Mahara without having to log in again. If you Subscribe to this service, users can go from Mahara to Moodle without having to log in again.
- SSO (Service Provider): This is the converse of the Identity Provider service. If you enabled Publish previously, you must enable Subscribe here. If you enabled Subscribe previously, you must enable Publish here.
11. Click on Save changes.

What just happened
You have just enabled Single Sign-On between Moodle and Mahara. We are now halfway through the setup, and next we can configure Mahara to listen for Moodle users.

Have a go hero
Moodle Networking is also used to enable Moodle servers to communicate with each other. The Moodle Hub system is designed on top of Moodle networking to enable teachers to share courses with each other, and to enable multiple Moodle servers to share users. How could you use this feature to spread Moodle within your organization? Could you create an internal and an external facing Moodle and have them talk to each other? Could different departments each use a Moodle and share access to courses using Moodle networking? For your "have a go hero" activity, design a plan to use Moodle networking within your organization.

Angular Zen

Packt
19 Sep 2013
5 min read
(For more resources related to this topic, see here.)

Meet AngularJS
AngularJS is a client-side MVC framework written in JavaScript. It runs in a web browser and greatly helps us (developers) to write modern, single-page, AJAX-style web applications. It is a general purpose framework, but it shines when used to write CRUD (Create Read Update Delete) type web applications.

Getting familiar with the framework
AngularJS is a recent addition to the client-side MVC frameworks list, yet it has managed to attract a lot of attention, mostly due to its innovative templating system, ease of development, and very solid engineering practices. Indeed, its templating system is unique in many respects:
- It uses HTML as the templating language
- It doesn't require an explicit DOM refresh, as AngularJS is capable of tracking user actions, browser events, and model changes to figure out when and which templates to refresh
- It has a very interesting and extensible components subsystem, and it is possible to teach a browser how to interpret new HTML tags and attributes
The templating subsystem might be the most visible part of AngularJS, but don't be mistaken: AngularJS is a complete framework packed with several utilities and services typically needed in single-page web applications. AngularJS also has some hidden treasures: dependency injection (DI) and a strong focus on testability. The built-in support for DI makes it easy to assemble a web application from smaller, thoroughly tested services. The design of the framework and the tooling around it promote testing practices at each stage of the development process.

Finding your way in the project
AngularJS is a relatively new actor on the client-side MVC frameworks scene; its 1.0 version was released only in June 2012. In reality, the work on this framework started in 2009 as a personal project of Miško Hevery, a Google employee. The initial idea turned out to be so good that, at the time of writing, the project was officially backed by Google Inc., and there is a whole team at Google working full-time on the framework. AngularJS is an open source project hosted on GitHub (https://github.com/angular/angular.js) and licensed by Google, Inc. under the terms of the MIT license.

The community
At the end of the day, no project would survive without people standing behind it. Fortunately, AngularJS has a great, supportive community. The following are some of the communication channels where one can discuss design issues and request help:
- The angular@googlegroups.com mailing list (Google group)
- The Google+ community at https://plus.google.com/u/0/communities/115368820700870330756
- The #angularjs IRC channel
- The [angularjs] tag at http://stackoverflow.com
The AngularJS team stays in touch with the community by maintaining a blog (http://blog.angularjs.org/) and being present in the social media, on Google+ (+AngularJS) and Twitter (@angularjs). There are also community meet-ups being organized around the world; if one happens to be hosted near a place you live, it is definitely worth attending!

Online learning resources
AngularJS has its own dedicated website (http://www.angularjs.org) where we can find everything that one would expect from a respectable framework: conceptual overview, tutorials, developer's guide, API reference, and so on. Source code for all released AngularJS versions can be downloaded from http://code.angularjs.org. People looking for code examples won't be disappointed, as AngularJS documentation itself has plenty of code snippets.
On top of this, we can browse a gallery of applications built with AngularJS (http://builtwith.angularjs.org). A dedicated YouTube channel (http://www.youtube.com/user/angularjs) has recordings from many past events as well as some very useful video tutorials.

Libraries and extensions
While the AngularJS core is packed with functionality, the active community keeps adding new extensions almost every day. Many of those are listed on a dedicated website: http://ngmodules.org.

Tools
AngularJS is built on top of HTML and JavaScript, two technologies that we've been using in web development for years. Thanks to this, we can continue using our favorite editors and IDEs, browser extensions, and so on without any issues. Additionally, the AngularJS community has contributed several interesting additions to the existing HTML/JavaScript toolbox.

Batarang
Batarang is a Chrome developer tool extension for inspecting AngularJS web applications. Batarang is very handy for visualizing and examining the runtime characteristics of AngularJS applications. We are going to use it extensively in this article to peek under the hood of a running application. Batarang can be installed from Chrome's Web Store (AngularJS Batarang) like any other Chrome extension.

Plunker and jsFiddle
Both Plunker (http://plnkr.co) and jsFiddle (http://jsfiddle.net) make it very easy to share live-code snippets (JavaScript, CSS, and HTML). While those tools are not strictly reserved for usage with AngularJS, they were quickly adopted by the AngularJS community to share small code examples, scenarios to reproduce bugs, and so on. Plunker deserves special mention as it was written in AngularJS, and is a very popular tool in the community.

IDE extensions and plugins
Each one of us has a favorite IDE or editor. The good news is that there are existing plugins/extensions for several popular IDEs, such as Sublime Text 2 (https://github.com/angular-ui/AngularJS-sublime-package), JetBrains' products (http://plugins.jetbrains.com/plugin?pr=idea&pluginId=6971), and so on.
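Before moving on, it may help to see the templating and dependency injection ideas described earlier in a concrete, minimal form. The snippet below is only a sketch—the module name (zenApp), controller name (GreetingCtrl), and the CDN build referenced are our own choices, not anything prescribed by this article—but it shows how ng-model keeps the model and the view in sync without any explicit DOM refresh, and how $scope is handed to the controller by AngularJS's dependency injection:

<!DOCTYPE html>
<html ng-app="zenApp">
  <head>
    <!-- Any AngularJS 1.x build will do for this sketch -->
    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.0.8/angular.min.js"></script>
    <script>
      // Define a module and a controller; $scope is supplied by AngularJS's
      // dependency injection -- we never instantiate it ourselves.
      angular.module('zenApp', [])
        .controller('GreetingCtrl', function ($scope) {
          $scope.name = 'World';
        });
    </script>
  </head>
  <body ng-controller="GreetingCtrl">
    <!-- ng-model binds the input to $scope.name; the expression below
         re-renders automatically whenever the model changes. -->
    <input type="text" ng-model="name">
    <h1>Hello, {{name}}!</h1>
  </body>
</html>

Typing into the input updates the greeting immediately, which is exactly the "no explicit DOM refresh" behavior described above; a snippet like this can be pasted into Plunker or jsFiddle to try it out.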

Content Rules, Syndication, and Advanced Features of Plone 3 Intranet

Packt
27 Jul 2010
8 min read
(For more resources on Plone, see here.) Content rules Plone features a usability layer around Zope's event system, allowing plain users to create rules tied to the most used event handlers. These rules are composed of tasks that get triggered when an event is raised in our site. Content rules are defined site-wide in the Content rules configlet, and they are available for use in any folderish object in our site. Once the rule is created, it can be locally assigned to any folder object in the site. Rules play a very important role in intranets. We can use them as a mechanism for notification, and they also help in adding dynamism to our intranet. One of the most demanded features in an intranet is the ability to be aware when content is added, changed, or even deleted. The notification of this change to the users can be achieved via content rules assigned strategically, or by user demand in any folder or intranet application, such as forums or in a blog. We can use content types to help us model some of our corporate processes or daily tasks. Move or copy objects to other folders (done by users), just in case some of our processes require this kind of an action. We can find other interesting uses of content rules in our intranet, such as executing an action when a state transition is triggered. All these actions can be carried out programmatically, but the power of content rules lie in that they can be executed thorough the Plone UI and by any experienced user. We can access the manage rules form via the Rules tab in any folder. If we don't have any rules created, the form will address us to create them in the content rules configlet. This control panel configlet will aid us to create and manage content rules of our site: The form is divided into two parts. The first is dedicated to global settings applied to all rules. In this version, there is only one setting in this category to enable and disable the rules in the whole site. If deselected, the whole rule system is disabled and no rules will be executed in the site. The other part of the form is reserved for the rule management interface. Here we can find the already created rules, manage them, and create new ones. We can display them by type using the selector on the right. Adding a new rule Click on the Add content rule button. It will open a new form with the following fields: Title: Title of the rule. Description: Summary of the rule. Triggering event: Starts the execution of the rule. Enabled: Whether or not this rule is enabled. Stop executing rules: Defines if the engine should continue the execution of other rules. It is useful if we assign several rules to a container and the execution of a particular rule excludes any other rule execution. By default, these are the available events: Object added to this container Object modified Object removed from this container Workflow state changed After creating one rule at least, the configlet will let us manage the existing rules, allowing us to perform the standard edit, delete, enable, and disable actions. But this is only the first step. We've created the rule and assigned an event to it. Now it's time to configure the task, which the rule will perform. There are two items to configure—conditions and actions. We can add as many conditions as we want to, and modify the order in which they can be applied. 
We can add the following types of conditions: Content type: Apply the rule only if an object of this type has triggered the event File extension: Execute the action only if a file content type that has this extension has triggered the event Workflow state: Apply only if a content type in the workflow state specified has triggered the event User's group: Execute only if a user member of a specific group triggers the event User's role: Same as User's group, but by a user having a specific role in that context The actions that a rule can execute are limited but they cover the most useful use cases: Logger: Output a message to the message system log Notify user: Notify the user via a status message Copy to folder: The object that triggers the event is copied to the specified folder Move to folder: The object that triggers the event is moved to the specified folder Delete object: The object that triggers the event is deleted Transition workflow state: An attempt to change workflow of the object that triggers the event via the specified transition Send e-mail: Send e-mail to a specific user By default, only managers can define and apply new content rules, but we can allow more user roles to access their creation. Assigning rules to folderish objects Once the rule is created, we can assign them to any of Plone's folderish content types. Just go to any folderish object and click on the Rules tab. Just use the drop-down box Assign rule here to choose from the available rules and click on Add. We can review what rules are assigned in this container and manage them as well. We can enable, disable, and choose whether to apply them to subfolders or only to current folders, and of course, unassign them. Making any content type rule aware All folderish default content types of Plone are content rule aware. However, not all third-party content types are content rule aware. This is because either they are old or simply do not enable this feature in the content type declaration. In the case of third-party content types, which are not content rule aware, we can enable their awareness by following these instructions: Add an object of the desired content type anywhere in our site, if we haven't created it yet. Find it in the ZMI and access the Interfaces tab. Once there, find the interface plone.contentrules. engine.interfaces.IRuleAssignable in the Available marker interfaces fieldset. Select it and click on the Add button. By doing so, we are assigning an additional marker interface to that content type, which will enable (mark) this instance of the content type (that is, make it aware of the content rule). From this moment onwards, the selected object will have available the Rules tab, and in consequence, we can assign rules to it. Syndication Plone has always paid special attention to syndication, making its folderish content types syndicable. Collections export their contents automatically in a view that all collections have—RSS view. But we can also enable syndication for single folders on our site. Using RSS feeds in our intranet is the recommended approach for keeping our users posted about the changes in syndicated folders, if they are collections or plain folders. Enabling folder syndication For enabling syndication for a particular folder, we need to access the view, synPropertiesForm, from the folder we want to be syndicable. 
For example, if we want to access this view in the ITStaff folder, we should browse to the URL:
http://localhost:8080/intranet/ITStaff/synPropertiesForm
This view is hidden by default, although we can make it visible in order to allow users to enable folder syndication by themselves. We can make it visible by accessing the portal_actions tool in the ZMI. Go to the object action category and choose syndication. Then make this action visible by enabling the Visible attribute, and choose who will be able to access this view by selecting the appropriate permissions in the Permissions selection box.
Once in the synPropertiesForm form, we should click on the Enable syndication button. Another form is then shown, allowing us to configure how the publication of the feed will be performed. The following syndication details are available:
- Update period: How often the feed will be updated.
- Update frequency: How many times the update will occur inside the period specified in the previous field.
- Update base: When the update will take place.
- Maximum items: How many items the feed will show.
Accessing a secure RSS feed
Syndication was conceived for accessing information from public resources. Inside an intranet, it is very common that the folder we want to syndicate is not published, and consequently the associated feed is private. The problem is that few feed readers support feed authentication, and even using one of them, we would have to enable HTTP authentication in our site's PAS configuration, which is not recommended. So we propose two workarounds.
We can use a feed-enabled browser to browse our intranet and our feeds as well. With this approach, if we are logged in, we will have access to authenticated feeds. Firefox and Internet Explorer already have this feature. The second approach is to give the syndicated folders in our site a special workflow state that makes them accessible to anonymous users without authentication. Obviously, this workaround will make the folder content visible to anonymous users, and it's not an option when privacy of the contained information is a must.

How to Bridge the Client-Server Gap using AJAX (Part II)

Packt
15 Oct 2009
7 min read
AJAX and events
Suppose we wanted to allow each dictionary term name to control the display of the definition that follows; clicking on the term name would show or hide the associated definition. With the techniques we have seen so far, this should be pretty straightforward:
$(document).ready(function() {
  $('.term').click(function() {
    $(this).siblings('.definition').slideToggle();
  });
});
When a term is clicked, this code finds siblings of the element that have a class of definition, and slides them up or down as appropriate. All seems in order, but a click does nothing with this code. Unfortunately, the terms have not yet been added to the document when we attach the click handlers. Even if we managed to attach click handlers to these items, once we clicked on a different letter the handlers would no longer be attached. This is a common problem with areas of a page populated by AJAX. A popular solution is to rebind handlers each time the page area is refreshed. This can be cumbersome, however, as the event binding code needs to be called each time anything causes the DOM structure of the page to change.
We can implement event delegation, actually binding the event to an ancestor element that never changes. In this case, we'll attach the click handler to the document using .live() and catch our clicks that way:
$(document).ready(function() {
  $('.term').live('click', function() {
    $(this).siblings('.definition').slideToggle();
  });
});
The .live() method tells the browser to observe all clicks anywhere on the page. If (and only if) the clicked element matches the .term selector, then the handler is executed. Now the toggling behavior will take place on any term, even if it is added by a later AJAX transaction.
Security limitations
For all its utility in crafting dynamic web applications, XMLHttpRequest (the underlying browser technology behind jQuery's AJAX implementation) is subject to strict boundaries. To prevent various cross-site scripting attacks, it is not generally possible to request a document from a server other than the one that hosts the original page.
This is generally a positive situation. For example, some cite the implementation of JSON parsing by using eval() as insecure. If malicious code is present in the data file, it could be run by the eval() call. However, since the data file must reside on the same server as the web page itself, the ability to inject code in the data file is largely equivalent to the ability to inject code in the page directly. This means that, for the case of loading trusted JSON files, eval() is not a significant security concern.
There are many cases, though, in which it would be beneficial to load data from a third-party source. There are several ways to work around the security limitations and allow this to happen. One method is to rely on the server to load the remote data, and then provide it when requested by the client. This is a very powerful approach as the server can perform pre-processing on the data as needed. For example, we could load XML files containing RSS news feeds from several sources, aggregate them into a single feed on the server, and publish this new file for the client when it is requested.
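The client side of this server-assisted approach is just ordinary same-origin AJAX. The following is only a sketch: the feed-proxy.php script and the shape of the JSON it returns are hypothetical stand-ins for whatever aggregation the server actually performs.
// Sketch only: feed-proxy.php is a hypothetical same-origin script that
// fetches the remote RSS sources and returns them merged as JSON.
$(document).ready(function() {
  $.getJSON('feed-proxy.php', function(data) {
    $.each(data.items, function(index, item) {
      // build a simple list entry for each aggregated feed item
      var html = '<li><a href="' + item.link + '">' + item.title + '</a></li>';
      $('#news').append(html);
    });
  });
});
Because the request goes to our own server, none of the cross-domain restrictions apply, and the browser treats it like any other AJAX call.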
To load data from a remote location without server involvement, we have to get a bit sneakier. A popular approach for the case of loading foreign JavaScript files is injecting <script> tags on demand. Since jQuery can help us insert new DOM elements, it is simple to do this:
$(document.createElement('script'))
  .attr('src', 'http://example.com/example.js')
  .appendTo('head');
In fact, the $.getScript() method will automatically adapt to this technique if it detects a remote host in its URL argument, so even this is handled for us. The browser will execute the loaded script, but there is no mechanism to retrieve results from the script. For this reason, the technique requires cooperation from the remote host. The loaded script must take some action, such as setting a global variable that has an effect on the local environment. Services that publish scripts that are executable in this way will also provide an API with which to interact with the remote script.
Another option is to use the <iframe> HTML tag to load remote data. This element allows any URL to be used as the source for its data fetching, even if it does not match the host page's server. The data can be loaded and easily displayed on the current page. Manipulating the data, however, typically requires the same cooperation needed for the <script> tag approach; scripts inside the <iframe> need to explicitly provide the data to objects in the parent document.
Using JSONP for remote data
The idea of using <script> tags to fetch JavaScript files from a remote source can be adapted to pull in JSON files from another server as well. To do this, we need to slightly modify the JSON file on the server, however. There are several mechanisms for doing this, one of which is directly supported by jQuery: JSON with Padding, or JSONP.
The JSONP file format consists of a standard JSON file that has been wrapped in parentheses and prepended with an arbitrary text string. This string, the "padding", is determined by the client requesting the data. Because of the parentheses, the client can either cause a function to be called or a variable to be set depending on what is sent as the padding string. A PHP implementation of the JSONP technique is quite simple:
<?php
print($_GET['callback'] . '(' . $data . ')');
?>
Here, $data is a variable containing a string representation of a JSON file. When this script is called, the callback query string parameter is prepended to the resulting file that gets returned to the client.
To demonstrate this technique, we need only slightly modify our earlier JSON example to call this remote data source instead. The $.getJSON() function makes use of a special placeholder character, ?, to achieve this.
$(document).ready(function() {
  var url = 'http://examples.learningjquery.com/jsonp/g.php';
  $('#letter-g a').click(function() {
    $.getJSON(url + '?callback=?', function(data) {
      $('#dictionary').empty();
      $.each(data, function(entryIndex, entry) {
        var html = '<div class="entry">';
        html += '<h3 class="term">' + entry['term'] + '</h3>';
        html += '<div class="part">' + entry['part'] + '</div>';
        html += '<div class="definition">';
        html += entry['definition'];
        if (entry['quote']) {
          html += '<div class="quote">';
          $.each(entry['quote'], function(lineIndex, line) {
            html += '<div class="quote-line">' + line + '</div>';
          });
          if (entry['author']) {
            html += '<div class="quote-author">' + entry['author'] + '</div>';
          }
          html += '</div>';
        }
        html += '</div>';
        html += '</div>';
        $('#dictionary').append(html);
      });
    });
    return false;
  });
});
We normally would not be allowed to fetch JSON from a remote server (examples.learningjquery.com in this case).
However, since this file is set up to provide its data in the JSONP format, we can obtain the data by appending a query string to our URL, using ? as a placeholder for the value of the callback argument. When the request is made, jQuery replaces the ? for us, parses the result, and passes it to the success function as data just as if this were a local JSON request. Note that the same security cautions hold here as before; whatever the server decides to return to the browser will execute on the user's computer. The JSONP technique should only be used with data coming from a trusted source.
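To make what jQuery is automating here a bit more concrete, the following sketch performs the same JSONP request by hand; the global function name handleGlossary is arbitrary and chosen only for illustration.
// Sketch of JSONP without $.getJSON(): we choose the padding ourselves.
function handleGlossary(data) {
  // data is the JSON array that the remote script was wrapped around
  alert(data.length + ' entries received');
}

var script = document.createElement('script');
script.src = 'http://examples.learningjquery.com/jsonp/g.php?callback=handleGlossary';
document.getElementsByTagName('head')[0].appendChild(script);
When the response arrives, the browser executes handleGlossary(...) with the data, which is essentially what jQuery arranges on our behalf when it replaces the ? placeholder.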

Categories and Attributes in Magento: Part 2

Packt
22 Oct 2009
7 min read
Time for action: Creating Attributes
In this section we will create an Attribute Set for our store. First, we will create the Attributes. Then, we will create the set.
Before you begin
Because Attributes are the main tool for describing your Products, it is important to make the best use of them. Plan which Attributes you want to use. What aspects or characteristics of your Products will a customer want to search for? Make those Attributes. What aspects of your Products will a customer want to choose? Make these Attributes, too.
Attributes are organized into Attribute Sets. Each set is a collection of Attributes. You should create different sets to describe the different types of Products that you want to sell. In our coffee store, we will create two Attribute Sets: one for Single Origin coffees and one for Blends. They will differ in only one way. For Single Origin coffees, we will have an Attribute showing the country or region where the coffee is grown. We will not have this Attribute for blends, because the coffees used in a blend can come from all over the world. Our sets will look like the following:
- Single Origin Attribute set: Name, Description, Image, Grind, Roast, Origin, SKU, Price, Size
- Blended Attribute set: Name, Description, Image, Grind, Roast, SKU, Price, Size
Now, let's create the Attributes and put them into sets. The result of the following directions will be several new Attributes and two new Attribute Sets:
1. If you haven't already, log in to your site's backend, which we call the Administrative Panel.
2. Select Catalog | Attributes | Manage Attributes. A list of all the Attributes is displayed. These attributes have been created for you. Some of these Attributes (such as color, cost, and description) are visible to your customers. Other Attributes affect the display of a Product, but your customers will never see them. For example, custom_design can be used to specify the name of a custom layout, which will be applied to a Product's page. Your customers will never see the name of the custom layout. We will add our own attributes to this list.
3. Click the Add New Attribute button. The New Product Attribute page displays.
There are two tabs on this page: Properties and Manage Label / Options. You are in the Properties tab. The Attribute Properties section contains settings that only the Magento administrator (you) will see. These settings are values that you will use when working with the Attribute. The Frontend Properties section contains settings that affect how this Attribute will be presented to your shoppers. We will cover each setting on this page.
Attribute Code is the name of the Attribute. Your customers will never see this value. You will use it when managing the Attribute. Refer back to the list of Attributes that appeared in Step 2; the Attribute identifier appears in the first column, labelled Attribute Code. The Attribute Code must contain only lowercase letters, numbers, and the underscore character, and it must begin with a letter.
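As an aside, those Attribute Code rules are easy to check mechanically. The little helper below is purely illustrative and not part of Magento, which performs its own validation; the function name is made up.
// Illustrative only: mirrors the Attribute Code rules described above,
// namely lowercase letters, numbers, and underscores, starting with a letter.
function isValidAttributeCode(code) {
  return /^[a-z][a-z0-9_]*$/.test(code);
}

isValidAttributeCode('roast');      // true
isValidAttributeCode('Roast');      // false (contains an uppercase letter)
isValidAttributeCode('2nd_origin'); // false (does not begin with a letter)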
The Scope of this Attribute can be set as Store View, Website, or Global. For now, you can leave it set to the default, Store View. The other values become useful when you use one Magento installation to create multiple stores or multiple websites. That is beyond the scope of this quick-start guide.
After you assign an Attribute set to a Product, you will fill in values for the Attributes. For example, suppose you assign a set that contains the attributes color, description, price, and image. You will then need to enter the color, description, price, and image for that Product. Notice that each of the Attributes in that set is a different kind of data. For color, you would probably want to use a drop-down list to make selecting the right color quick and easy. This would also avoid using different terms for the same color, such as "Red" and "Magenta". For description, you would probably want to use a freeform text field. For price, you would probably want to use a field that accepts only numbers, and that requires you to use two decimal places. And for image, you would want a field that enables you to upload a picture. The field Catalog Input Type for Store Owner enables you to select the kind of data that this Attribute will hold.
In our example we are creating an Attribute called roast. When we assign this value to a Product, we want to select a single value for this field from a list of choices. So, we will select Dropdown. If you select Dropdown or Multiple Select for this field, then under the Manage Label/Options tab, you will need to enter the list of choices (the list of values) for this field.
If you select Yes for Unique Value, then no two products can have the same value for this Attribute. For example, if I made roast a unique Attribute, that means only one kind of coffee in my store could be a Light roast, only one kind of coffee could be a French roast, only one could be Espresso, and so on. For an Attribute such as roast, this wouldn't make much sense. However, if this Attribute were the SKU of the Product, then I might want to make it unique. That would prevent me from entering the same SKU number for two different Products.
If you select Yes for Values Required, then you must select or enter a value for this Attribute. You will not be able to save a Product with this Attribute if you leave it blank. In the case of roast, it makes sense to require a value. Our customers would not buy a coffee without knowing what kind of roast the coffee has.
Input Validation for Store Owner causes Magento to check the value entered for an Attribute, and confirm that it is the right kind of data. When entering a value for this Attribute, if you do not enter the kind of data selected, then Magento gives you a warning message.
The Apply To field determines which Product Types can have this Attribute applied to them. Remember that the three Product Types in Magento are Simple, Grouped, and Configurable. Recall that in our coffee store, if a type of coffee comes in only one roast, then it would be a Simple Product. And, if the customer gets to choose the roast, it would be a Configurable Product. So we want to select at least Simple Product and Configurable Product for the Apply To field.
But what about Grouped Product? We might sell several different types of coffee in one package, which would make it a Grouped Product. For example, we might sell a Grouped Product that consists of a pound of Hawaiian Kona and a pound of Jamaican Blue Mountain. We could call this group something like "Island Coffees". If we applied the Attribute roast to this Grouped Product, then both types of coffee would be required to have the same roast. However, if Kona is better with a lighter roast and Blue Mountain is better with a darker roast, then we don't want them to have the same roast. So in our coffee store, we will not apply the Attribute roast to Grouped Products. When we sell coffees in special groupings, we will select the roast for each coffee.
You will need to decide which Product Types each Attribute can be applied to. If you are the store owner and the only one using your site, you will know which Attributes should be applied to which Products. So, you can safely choose All Product Types for this setting.