
How-To Tutorials - Web Development


SilverStripe 2.4: Adding Some Spice with Widgets and Short Codes

Packt
02 May 2011
10 min read
SilverStripe 2.4 Module Extension, Themes, and Widgets: Beginner's Guide: create smashing SilverStripe applications by extending modules, creating themes, and adding widgets.

Why can't we simply use templates and $Content to accomplish the task? Widgets and short codes generally don't display their information directly like a placeholder does. They can be used to fetch external information for you; we'll use Google and Facebook services in our examples. Additionally, they can aggregate internal information, for example displaying a tag cloud based on the keywords you've added to pages or articles.

Widget or short code?

Both can add more dynamic and/or complex content to the page than the regular fields. What's the difference?

Widgets are specialized content areas that can be dynamically dragged and dropped into a predefined area on a page in the CMS. You can't insert a widget into a rich-text editor field; it needs to be inserted elsewhere in a template. Additionally, widgets can be customised from within the CMS.

Short codes are self-defined tags in square brackets that are entered anywhere in a content or rich-text area. Configuration is done through parameters, which are much like attributes in HTML.

So the main difference is where you want to use the advanced content.

Creating our own widget

Let's create our first widget to see how it works. The result of this section should look like this:

Time for action – embracing Facebook

Facebook is probably the most important communication and publicity medium in the world at the moment. Our website is no exception, and we want to publish the latest news on both our site and Facebook, but we definitely don't want to do that manually. You can either transmit information from your website to Facebook, or you can grab information off Facebook and put it into your website.
We'll use the latter approach, so let's hack away:

In the Page class, add a relation to the WidgetArea class and make it available in the CMS:

public static $has_one = array(
  'SideBar' => 'WidgetArea',
);

public function getCMSFields() {
  $fields = parent::getCMSFields();
  $fields->addFieldToTab(
    'Root.Content.Widgets',
    new WidgetAreaEditor('SideBar')
  );
  return $fields;
}

Add $SideBar to templates/Layout/Page.ss in the theme directory, wrapping it inside another element for styling later on (the first and third lines are already in the template; they are simply there for context):

$Form
<aside id="sidebar">$SideBar</aside>
</section>

<aside> is one of the new HTML5 tags. It's intended for content that is only "tangentially" related to the page's main content. For a detailed description see the official documentation at http://www.w3.org/TR/html-markup/aside.html.

Create the widget folder in the base directory. We'll simply call it widget_facebookfeed/. Inside that folder, create an empty _config.php file. Additionally, create the folders code/ and templates/.

Add the following PHP class; you'll know the filename and where to store it by now. The Controller's comments haven't been stripped this time, but are included to encourage best practice and provide a meaningful example:

<?php
class FacebookFeedWidget extends Widget {

  public static $db = array(
    'Identifier' => 'Varchar(64)',
    'Limit' => 'Int',
  );

  public static $defaults = array(
    'Limit' => 1,
  );

  public static $cmsTitle = 'Facebook Messages';
  public static $description = 'A list of the most recent Facebook messages';

  public function getCMSFields() {
    return new FieldSet(
      new TextField(
        'Identifier',
        'Identifier of the Facebook account to display'
      ),
      new NumericField(
        'Limit',
        'Maximum number of messages to display'
      )
    );
  }

  public function Feeds() {
    /**
     * URL for fetching the information,
     * convert the returned JSON into an array.
     */
    $url = 'http://graph.facebook.com/' . $this->Identifier .
      '/feed?limit=' . ($this->Limit + 5);
    $facebook = json_decode(file_get_contents($url), true);

    /**
     * Make sure we received some content,
     * create a warning in case of an error.
     */
    if(empty($facebook) || !isset($facebook['data'])) {
      user_error(
        'Facebook message error or API changed',
        E_USER_WARNING
      );
      return;
    }

    /**
     * Iterate over all messages and only fetch as many as needed.
     */
    $feeds = new DataObjectSet();
    $count = 0;
    foreach($facebook['data'] as $post) {
      if($count >= $this->Limit) {
        break;
      }

      /**
       * If no such message exists, log a warning and exit.
       */
      if(!isset($post['from']['id']) ||
         !isset($post['id']) ||
         !isset($post['message'])) {
        user_error(
          'Facebook detail error or API changed',
          E_USER_WARNING
        );
        return;
      }

      /**
       * If the post is from the user itself and not someone
       * else, add the message and date to our feeds array.
       */
      if(strpos($post['id'], $post['from']['id']) === 0) {
        $posted = date_parse($post['created_time']);
        $feeds->push(new ArrayData(array(
          'Message' => DBField::create(
            'HTMLText',
            nl2br($post['message'])
          ),
          'Posted' => DBField::create(
            'SS_Datetime',
            $posted['year'] . '-' . $posted['month'] . '-' .
            $posted['day'] . ' ' . $posted['hour'] . ':' .
            $posted['minute'] . ':' . $posted['second']
          ),
        )));
        $count++;
      }
    }
    return $feeds;
  }
}

Define the template; use the same filename as for the previous file, but make sure that you use the correct extension. The file widget_facebookfeed/templates/FacebookFeedWidget.ss should look like this:

<% if Limit == 0 %>
<% else %>
<div id="facebookfeed" class="rounded">
  <h2>Latest Facebook Update<% if Limit == 1 %><% else %>s<% end_if %></h2>
  <% control Feeds %>
  <p>
    $Message
    <small>$Posted.Nice</small>
  </p>
  <% if Last %><% else %><hr/><% end_if %>
  <% end_control %>
</div>
<% end_if %>

Also create a file widget_facebookfeed/templates/WidgetHolder.ss with just this single line of content:

$Content

We won't cover the CSS as it's not relevant to our goal. You can either copy it from the final code provided or simply roll your own.
Rebuild the database with /dev/build?flush=all. Log into /admin. On each page you should now have a Widgets tab that looks similar to the next screenshot. In this example, the widget has already been activated by clicking next to the title in the left-hand menu.

If you have more than one widget installed, you can simply add and reorder all of them on each page by drag-and-drop. So even novice content editors can add useful and interesting features to the pages very easily.

Enter the Facebook ID and change the number of messages to display, if you want to. Save and Publish the page. Reload the page in the frontend and you should see something similar to the screenshot at the beginning of this section.

allow_url_fopen must be enabled for this to work; otherwise you're not allowed to open remote objects such as URLs as if they were local files. Due to security concerns it may be disabled, and you'll get error messages if there's a problem with this setting. For more details see http://www.php.net/manual/en/filesystem.configuration.php#ini.allow-url-fopen.

What just happened?

Quite a lot happened, so let's break it down into digestible pieces.

Widgets in general

Every widget is actually a module, albeit a small one limited in scope. The basic structure is the same: residing in the root folder, having a _config.php file (even if it's empty), and containing folders for code, templates, and possibly also JavaScript or images. Nevertheless, a widget is limited to the sidebar, so it's probably best described as an add-on. We'll take a good look at its bigger brother, the module, a little later.

You're not required to name the folder widget_*, but it's a common practice and you should have a good reason for not sticking to it.

Common use cases for widgets include tag clouds, Twitter integration, showing a countdown, and so forth. If you want to see what others have been doing with widgets or you need some of that functionality, visit http://www.silverstripe.org/widgets/.
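The filtering inside the widget's Feeds() method (over-fetch a few extra posts, skip malformed entries, keep only posts whose ID starts with the account's own ID, stop at the limit) is easy to reason about in isolation. Here is a hedged JavaScript sketch of that same logic operating on a 2011-era Graph-API-shaped object; the field names mirror that old API, the sample data is invented, and the modern Graph API requires authentication and has a different shape:

```javascript
// Keep at most `limit` messages authored by the account itself, mirroring
// the strpos($post['id'], $post['from']['id']) === 0 check in the PHP widget.
// Historically, post IDs took the form "<accountid>_<postid>", so posts
// written on the wall by visitors fail the prefix check.
function ownPosts(feed, limit) {
  const out = [];
  for (const post of feed.data || []) {
    if (out.length >= limit) break;
    // Skip entries missing the fields the widget relies on.
    if (!post.id || !post.from || !post.from.id || !post.message) continue;
    if (post.id.indexOf(post.from.id) === 0) {
      out.push(post.message);
    }
  }
  return out;
}

// Invented sample data in the old feed shape:
const feed = { data: [
  { id: "44641219945_1", from: { id: "44641219945" }, message: "Release!" },
  { id: "44641219945_2", from: { id: "99" }, message: "A visitor's wall post" },
  { id: "44641219945_3", from: { id: "44641219945" }, message: "More news" },
]};
```

With a limit of 2, only the two posts written by the account itself survive, which is exactly why the PHP code requests `Limit + 5` items up front: some of the fetched posts will be filtered out.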
Keeping widgets simple

In general, widgets should work with default settings, and if there are additional settings they should be both simple and few in number. While we'll be able to stick to the second part, we can't provide meaningful default settings for a Facebook account. Still, keep this idea in mind and try to adhere to it where possible.

Facebook Graph API

We won't go into details of the Facebook Graph API, but it's a powerful tool; we've just scratched the surface with our example. Looking at the URL http://graph.facebook.com/<username>/feed?limit=5 you only need to know that it fetches the last five items from the user's feed, which consists of the wall posts (both by the user himself and others). <username> must obviously be replaced by the unique Facebook ID: either a number or an alias name the user selected. If you go to the user's profile, you should be able to see it in the URL. For example, SilverStripe Inc's Facebook profile is located at https://www.facebook.com/pages/silverstripe/44641219945?ref=ts&v=wall, so the ID is 44641219945. That's also what we've used for the example in the previous screenshot. For more details on the Graph API see http://developers.facebook.com/docs/api.

Connecting pages and widgets

First we need to connect our pages and widgets in general. You'll need to do this step whenever you want to use widgets. Two things make this connection: first, reference the WidgetArea class in the base page's Model and make it available in the CMS through getCMSFields(); secondly, place the widget placeholder in our page template:

$SideBar

You're not required to call the widget placeholder $SideBar, but it's a convention, as widgets are normally displayed in a website's sidebar. If you don't have a good reason to do otherwise, stick to it.

You're not limited to a single sidebar

As we define the widget ourselves, we can also create more than one for some or all pages.
Simply add another $SideBar renamed to something else in both the View and Model and you're good to go. You can use multiple sidebars in the same region or totally different ones, for example creating header widgets and footer widgets. Also, take the name "sidebar" with a grain of salt; it can really have any shape you want.

What about the intro page?

Right. We've only added $SideBar to the standard templates/Layout/Page.ss. Shouldn't we proceed and put the PHP code into ContentPage.php? We could, but if we wanted to add the widget to another page type, which we'll create later, we'd have to copy the code. Not DRY, so let's keep it in the general Page.php.

The intro page is a bit confusing right now: while you can add widgets in the backend, they can't be displayed, as the placeholder is missing in the template. To clean this up, let's simply remove the Widgets tab from the intro page. It's not strictly required, but it prevents content authors from having a field in the CMS that does nothing visible on the website. To do this, simply extend getCMSFields() in the IntroPage.php file, like this:

function getCMSFields() {
  $fields = parent::getCMSFields();
  $fields->removeFieldFromTab('Root.Content.Main', 'Content');
  $fields->removeFieldFromTab('Root.Content', 'Widgets');
  return $fields;
}
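Earlier we contrasted widgets with short codes: self-defined tags in square brackets, configured through HTML-attribute-like parameters. As a language-neutral illustration of that idea (this is not SilverStripe's actual short code parser; the tag name and parameters are invented), a minimal JavaScript sketch of recognizing such a tag and extracting its parameters:

```javascript
// Hypothetical illustration: pull a short code like [feed id="12345" limit="3"]
// out of a block of content text and read its parameters, much as a CMS
// would before substituting the rendered output.
function parseShortCode(text) {
  const match = text.match(/\[(\w+)((?:\s+\w+="[^"]*")*)\s*\]/);
  if (!match) return null;
  const params = {};
  // Collect each name="value" pair, much like HTML attributes.
  const attrRe = /(\w+)="([^"]*)"/g;
  let attr;
  while ((attr = attrRe.exec(match[2])) !== null) {
    params[attr[1]] = attr[2];
  }
  return { tag: match[1], params };
}

const result = parseShortCode('Latest news: [feed id="12345" limit="3"] enjoy!');
```

The key property this demonstrates is the one the article relies on: because short codes live inside ordinary content text, an editor can drop them into any rich-text area, and configuration travels with the tag itself.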


Meteor.js JavaScript Framework: Why Meteor Rocks!

Packt
08 Feb 2013
18 min read
Modern web applications

Our world is changing. With continual advancements in display, computing, and storage capacities, what wasn't possible just a few years ago is now not only possible, but critical to the success of a good application. The Web in particular has undergone significant change.

The origin of the web app (client/server)

From the beginning, web servers and clients have mimicked the dumb terminal approach to computing, where a server with significantly more processing power than a client performs operations on data (writing records to a database, math calculations, text searches, and so on), transforms the data into a readable format (turning a database record into HTML, and so on), and then serves the result to the client, where it's displayed for the user. In other words, the server does all the work, and the client acts as more of a display, or dumb terminal. The design pattern for this is called...wait for it...the client/server design pattern.

This design pattern, borrowed from the dumb terminals and mainframes of the 60s and 70s, was the beginning of the Web as we know it, and has continued to be the design pattern we think of when we think of the Internet.

The rise of the machines (MVC)

Before the Web (and ever since), desktops were able to run a program such as a spreadsheet or a word processor without needing to talk to a server. This type of application could do everything it needed to, right there on the big and beefy desktop machine. During the early 90s, desktop computers got faster and better. Even more and more beefy. At the same time, the Web was coming alive. People started having the idea that a hybrid between the beefy desktop application (a fat app) and the connected client/server application (a thin app) would produce the best of both worlds. This kind of hybrid app, quite the opposite of a dumb terminal, was called a smart app.
There were many business-oriented smart apps created, but the easiest examples are found in computer games. Massively Multiplayer Online games (MMOs), first-person shooters, and real-time strategies are smart apps where information (the data model) is passed between machines through a server. The client in this case does a lot more than just display the information. It performs most of the processing (or controls) and transforms the data into something to be displayed (the view).

This design pattern is simple, but very effective. It's called the Model View Controller (MVC) pattern. The model is all the data. In the context of a smart app, the model is provided by a server. The client makes requests for the model from the server. Once the client gets the model, it performs actions/logic on this data, and then prepares it to be displayed on the screen. This part of the application (talk to the server, modify the data model, and prep data for display) is called the controller. The controller sends commands to the view, which displays the information, and reports back to the controller when something happens on the screen (a button click, for example). The controller receives that feedback, performs logic, and updates the model. Lather, rinse, repeat.

Because web browsers were built to be "dumb clients", the idea of using a browser as a smart app was out of the question. Instead, smart apps were built on frameworks such as Microsoft .NET, Java, or Macromedia (now Adobe) Flash. As long as you had the framework installed, you could visit a web page to download/run a smart app. Sometimes you could run the app inside the browser, sometimes you could download it first, but either way, you were running a new type of web app, where the application could talk to the server and share the processing workload.

The browser grows up (MVVM)

Beginning in the early 2000s, a new twist on the MVC pattern started to emerge.
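The controller loop just described (the view reports an event, the controller performs logic and updates the model, the model feeds the view) fits in a few lines of plain JavaScript. This is a toy illustration, not code from any framework; all names are invented:

```javascript
// A toy MVC wiring: the view renders the model, and forwards user events
// to the controller, which performs logic and updates the model.
function makeApp() {
  const model = { clicks: 0 };

  const view = {
    rendered: "",
    render(m) { this.rendered = `Clicked ${m.clicks} times`; },
  };

  const controller = {
    // "a button click, for example" reported back from the view:
    handleClick() {
      model.clicks += 1;   // perform logic, update the model
      view.render(model);  // send the refreshed model to the view
    },
  };

  view.render(model); // initial display
  return { model, view, controller };
}

const app = makeApp();
app.controller.handleClick();
app.controller.handleClick();
```

Lather, rinse, repeat: every event flows through the same controller, which is what keeps the display logic and the data mutation from tangling together.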
Developers started to realize that, for connected/enterprise "smart apps", there was actually a nested MVC pattern. The server (controller) was performing business logic on the database information (model) through the use of business objects, and then passing that information on to a client application (a "view"). The client was receiving this information from the server, and treating it as its own personal "model". The client would then act as a proper controller, perform logic, and send the information to the view to be displayed on the screen. So, the "view" for the server MVC was the "model" for the second MVC.

Then came the thought, "why stop at two?" There was no reason an application couldn't have multiple nested MVCs, with each view becoming the model for the next MVC. In fact, on the client side, there's actually a good reason to do so. Separating actual display logic (such as "this submit button goes here" and "the text area changed value") from the client-side object logic (such as "user can submit this record" and "the phone # has changed") allows a large majority of the code to be reused. The object logic can be ported to another application, and all you have to do is change out the display logic to extend the same model and controller code to a different application or device.

From 2004-2005, this idea was refined and modified for smart apps by Martin Fowler (who called it the presentation model) and Microsoft (who called it the Model View View-Model). While not strictly the same thing as a nested MVC, the MVVM design pattern applied the concept of a nested MVC to the frontend application.

As browser technologies (HTML and JavaScript) matured, it became possible to create smart apps that use the MVVM design pattern directly inside an HTML web page. This pattern makes it possible to run a full-sized application directly from a browser. No more downloading multiple frameworks or separate apps.
You can now get the same functionality from visiting a URL as you previously could from buying a packaged product.

A giant Meteor appears!

Meteor takes the MVVM pattern to the next level. By applying templating through handlebars.js (or other template libraries) and using instant updates, it truly enables a web application to act and perform like a complete, robust smart application. Let's walk through some concepts of how Meteor does this, and then we'll begin to apply this to our Lending Library application.

Cached and synchronized data (the model)

Meteor supports a cached-and-synchronized data model that is the same on the client and the server. When the client notices a change to the data model, it first caches the change locally, and then tries to sync with the server. At the same time, it is listening to changes coming from the server. This allows the client to have a local copy of the data model, so it can send the results of any changes to the screen quickly, without having to wait for the server to respond.

In addition, you'll notice that this is the beginning of the MVVM design pattern, within a nested MVC. In other words, the server publishes data changes, and treats those data changes as the "view" in its own MVC pattern. The client subscribes to those changes, and treats the changes as the "model" in its MVVM pattern.

A code example of this is very simple inside of Meteor (although you can make it more complex and therefore more controlled if you'd like):

var lists = new Meteor.Collection("lists");

What this one line does is declare that there is a lists data model. Both the client and server will have a version of it, but they treat their versions differently. The client will subscribe to changes announced by the server, and update its model accordingly. The server will publish changes, listen to change requests from the client, and update its model (its master copy) based on those change requests. Wow. One line of code that does all that!
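Meteor's actual sync machinery is internal to the framework, but the cache-then-sync behaviour described above can be simulated in a few lines of plain JavaScript. This is a toy model for intuition only, not Meteor's API or implementation:

```javascript
// Toy simulation of a client-side cached collection: local writes apply to
// the cache immediately (so the screen can update now) and are queued for
// the server; changes the server publishes merge into the same cache.
class CachedCollection {
  constructor() {
    this.docs = [];     // local cache, what the view renders
    this.pending = [];  // local changes waiting to sync to the server
  }
  insert(doc) {
    this.docs.push(doc);     // optimistic local update
    this.pending.push(doc);  // ...while the sync happens in the background
  }
  receiveFromServer(doc) {
    // A change published by the server; merge it into the local cache.
    if (!this.docs.some(d => d.id === doc.id)) this.docs.push(doc);
  }
}

const lists = new CachedCollection();
lists.insert({ id: 1, Category: "Games" });           // client-side write
lists.receiveFromServer({ id: 2, Category: "DVDs" }); // server-announced change
```

After both operations, the local cache holds two documents while one local write still awaits acknowledgment, which is precisely why the client never has to wait on the server to update the screen.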
Of course there is more to it, but that's beyond the scope of this article, so we'll move on. To better understand Meteor data synchronization, see the Publish and subscribe section of the Meteor documentation at http://docs.meteor.com/#publishandsubscribe.

Templated HTML (the view)

The Meteor client renders HTML through the use of templates. Templates in HTML are also called view data bindings. Without getting too deep, a view data binding is a shared piece of data that will be displayed differently if the data changes. The HTML code has a placeholder. In that placeholder different HTML code will be placed, depending on the value of a variable. If the value of that variable changes, the code in the placeholder will change with it, creating a different view.

Let's look at a very simple data binding (one that you don't technically need Meteor for) to illustrate the point. In LendLib.html, you will see an HTML (Handlebars) template expression:

<div id="categories-container">
  {{> categories}}
</div>

That expression is a placeholder for an HTML template, found just below it:

<template name="categories">
  <h2 class="title">my stuff</h2>...

So, {{> categories}} is basically saying "put whatever is in the template categories right here." And the HTML template with the matching name is providing that.

If you want to see how data changes will change the display, change the h2 tag to an h4 tag, and save the change:

<template name="categories">
  <h4 class="title">my stuff</h4>...

You'll see the effect in your browser ("my stuff" becomes itsy bitsy). That's a template, or view data binding, at work! Change the h4 back to an h2 and save the change. Unless you like the change. No judgment here...okay, maybe a little bit of judgment. It's ugly, and tiny, and hard to read. Seriously, you should change it back before someone sees it and makes fun of you!!

Alright, now that we know what a view data binding is, let's see how Meteor uses them.
Inside the categories template in LendLib.html, you'll find even more Handlebars templates:

<template name="categories">
  <h4 class="title">my stuff</h4>
  <div id="categories" class="btn-group">
    {{#each lists}}
    <div class="category btn btn-inverse">
      {{Category}}
    </div>
    {{/each}}
  </div>
</template>

The first Handlebars expression is part of a pair, and is a for-each statement. {{#each lists}} tells the interpreter to perform the action below it (in this case, make a new div) for each item in the lists collection. lists is the piece of data. {{#each lists}} is the placeholder.

Now, inside the #each lists expression, there is one more Handlebars expression:

{{Category}}

Because this is found inside the #each expression, Category is an implied property of lists. That is to say that {{Category}} is the same as saying this.Category, where this is the current item in the for-each loop. So the placeholder is saying "Add the value of this.Category here."

Now, if we look in LendLib.js, we will see the values behind the templates:

Template.categories.lists = function () {
  return lists.find(...

Here, Meteor is declaring a template variable named lists, found inside a template called categories. That variable happens to be a function. That function is returning all the data in the lists collection, which we defined previously. Remember this line?

var lists = new Meteor.Collection("lists");

That lists collection is returned by the declared Template.categories.lists, so that when there's a change to the lists collection, the variable gets updated, and the template's placeholder is changed as well.

Let's see this in action. On your web page pointing to http://localhost:3000, open the browser console and enter the following line:

> lists.insert({Category:"Games"});

This will update the lists data collection (the model). The template will see this change, and update the HTML code/placeholder.
The for-each loop will run one additional time, for the new entry in lists, and you'll see the following screen:

In regards to the MVVM pattern, the HTML template code is part of the client's view. Any changes to the data are reflected in the browser automatically.

Meteor's client code (the View-Model)

As discussed in the preceding section, LendLib.js contains the template variables, linking the client's model to the HTML page, which is the client's view. Any logic that happens inside of LendLib.js as a reaction to changes from either the view or the model is part of the View-Model.

The View-Model is responsible for tracking changes to the model and presenting those changes in such a way that the view will pick up the changes. It's also responsible for listening to changes coming from the view. By changes, we don't mean a button click or text being entered. Instead, we mean a change to a template value. A declared template is the View-Model, or the model for the view.

That means that the client controller has its model (the data from the server) and it knows what to do with that model, and the view has its model (a template) and it knows how to display that model.

Let's create some templates

We'll now see a real-life example of the MVVM design pattern, and work on our Lending Library at the same time. Adding categories through the console has been a fun exercise, but it's not a long-term solution. Let's make it so we can do that on the page instead.

Open LendLib.html and add a new button just before the {{#each lists}} expression:

<div id="categories" class="btn-group">
  <div class="category btn btn-inverse" id="btnNewCat">+</div>
  {{#each lists}}

This will add a plus button to the page. Now, we'll want to change out that button for a text field if we click on it. So let's build that functionality using the MVVM pattern, and make it based on the value of a variable in the template.
Add the following lines of code:

<div id="categories" class="btn-group">
  {{#if new_cat}}
  {{else}}
  <div class="category btn btn-inverse" id="btnNewCat">+</div>
  {{/if}}
  {{#each lists}}

The first line {{#if new_cat}} checks to see if new_cat is true or false. If it's false, the {{else}} section triggers, and it means we haven't yet indicated we want to add a new category, so we should be displaying the button with the plus sign. In this case, since we haven't defined it yet, new_cat will be false, and so the display won't change.

Now let's add the HTML code to display if we want to add a new category:

<div id="categories" class="btn-group">
  {{#if new_cat}}
  <div class="category">
    <input type="text" id="add-category" value="" />
  </div>
  {{else}}
  <div class="category btn btn-inverse" id="btnNewCat">+</div>
  {{/if}}
  {{#each lists}}

Here we've added an input field, which will show up when new_cat is true. The input field won't show up unless it is, so for now it's hidden. So how do we make new_cat equal true?

Save your changes if you haven't already, and open LendingLib.js. First, we'll declare a Session variable, just below our lists template declaration:

Template.categories.lists = function () {
  return lists.find({}, {sort: {Category: 1}});
};
// We are declaring the 'adding_category' flag
Session.set('adding_category', false);

Now, we declare the new template variable new_cat, which will be a function returning the value of adding_category:

// We are declaring the 'adding_category' flag
Session.set('adding_category', false);
// This returns true if adding_category has been assigned
// a value of true
Template.categories.new_cat = function () {
  return Session.equals('adding_category', true);
};

Save these changes, and you'll see that nothing has changed. Ta-daaa! In reality, this is exactly as it should be, because we haven't done anything to change the value of adding_category yet. Let's do that now.
First, we'll declare our click event, which will change the value in our Session variable:

Template.categories.new_cat = function () {
  return Session.equals('adding_category', true);
};
Template.categories.events({
  'click #btnNewCat': function (e, t) {
    Session.set('adding_category', true);
    Meteor.flush();
    focusText(t.find("#add-category"));
  }
});

Let's take a look at the following line:

Template.categories.events({

This line is declaring that there will be events found in the categories template. Now let's take a look at the next line:

'click #btnNewCat': function (e, t) {

This line tells us that we're looking for a click event on the HTML element with an id="btnNewCat" (which we already created in LendingLib.html).

Session.set('adding_category', true);
Meteor.flush();
focusText(t.find("#add-category"));

We set the Session variable adding_category = true, we flush the DOM (clear up anything wonky), and then we set the focus onto the input box with id="add-category".

One last thing to do, and that is to quickly add the helper function focusText(). Just before the closing tag for the if (Meteor.isClient) function, add the following code:

/////Generic Helper Functions/////
//this function puts our cursor where it needs to be.
function focusText(i) {
  i.focus();
  i.select();
};
} //------closing bracket for if(Meteor.isClient){}

Now when you save the changes, and click on the plus button, you'll see the following input box:

Fancy! It's still not useful, but we want to pause for a second and reflect on what just happened. We created a conditional template in the HTML page that will either show an input box or a plus button, depending on the value of a variable. That variable belongs to the View-Model. That is to say that if we change the value of the variable (like we do with the click event), then the view automatically updates. We've just completed an MVVM pattern inside a Meteor application!
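The pattern just completed (a template re-evaluates whenever a Session value changes) can be mimicked outside Meteor with a small observer sketch. The Session object and re-render hook below are hand-rolled stand-ins for illustration, not Meteor's implementation:

```javascript
// A hand-rolled reactive "Session": setting a key re-runs every render
// function that depends on it, which is the essence of the View-Model link.
const Session = {
  vals: {},
  watchers: [],
  set(key, value) {
    this.vals[key] = value;
    this.watchers.forEach(fn => fn()); // re-render all dependents
  },
  equals(key, value) { return this.vals[key] === value; },
};

let html = "";
function render() {
  // Same conditional as the {{#if new_cat}} template block:
  html = Session.equals("adding_category", true)
    ? '<input id="add-category" />'
    : '<div id="btnNewCat">+</div>';
}

Session.watchers.push(render);
render();                             // initial view: the plus button
Session.set("adding_category", true); // the click handler flips the flag
```

No code ever touches `html` directly when the flag flips; the view updates purely as a consequence of the variable changing, which is exactly the behaviour the click event produces in the real application.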
To really bring this home, let's add a change to the lists collection (also part of the View-Model, remember?) and figure out a way to hide the input field when we're done.

First, we need to add a listener for the keyup event. Or to put it another way, we want to listen for when the user types something in the box and hits Enter. When that happens, we want to have a category added, based on what the user typed. First, let's declare the event handler. Just after the click event for #btnNewCat, let's add another event handler:

focusText(t.find("#add-category"));
},
'keyup #add-category': function (e, t) {
  if (e.which === 13) {
    var catVal = String(e.target.value || "");
    if (catVal) {
      lists.insert({Category: catVal});
      Session.set('adding_category', false);
    }
  }
}
});

We add a "," at the end of the click function, and then add the keyup event handler.

if (e.which === 13)

This line checks to see if we hit the Enter/return key.

var catVal = String(e.target.value || "");
if (catVal)

This checks to see if the input field has any value in it.

lists.insert({Category: catVal});

If it does, we want to add an entry to the lists collection.

Session.set('adding_category', false);

Then we want to hide the input box, which we can do by simply modifying the value of adding_category.

One more thing to add, and we're all done. If we click away from the input box, we want to hide it, and bring back the plus button. We already know how to do that inside the MVVM pattern by now, so let's add a quick function that changes the value of adding_category. Add one more comma after the keyup event handler, and insert the following event handler:

Session.set('adding_category', false);
    }
  }
},
'focusout #add-category': function (e, t) {
  Session.set('adding_category', false);
}
});

Save your changes, and let's see this in action! In your web browser, on http://localhost:3000, click on the plus sign, add the word Clothes, and hit Enter.
Your screen should now resemble the following:

Feel free to add more categories if you want. Also, experiment with clicking on the plus button, typing something in, and then clicking away from the input field.

Summary

In this article you've learned about the history of web applications, and seen how we've moved from a traditional client/server model to a full-fledged MVVM design pattern. You've seen how Meteor uses templates and synchronized data to make things very easy to manage, providing a clean separation between our view, our view logic, and our data. Lastly, you've added more to the Lending Library, making a button to add categories, and you've done it all using changes to the View-Model, rather than directly editing the HTML.

Further resources on this subject:
How to Build a RSS Reader for Windows Phone 7
Applying Special Effects in 3D Game Development with Microsoft Silverlight 3: Part 2
Top features of KnockoutJS
Packt
21 Jul 2011
15 min read

Play Framework: Data Validation Using Controllers

Play Framework Cookbook — Over 60 incredibly effective recipes to take you under the hood and leverage advanced concepts of the Play framework

Introduction

This article will help you to keep your controllers as clean as possible, with a well-defined boundary to your model classes. Always remember that controllers are really only a thin layer to ensure that your data from the outside world is valid before handing it over to your models, or something needs to be specifically adapted to HTTP.

URL routing using annotation-based configuration

If you do not like the routes file, you can also describe your routes programmatically by adding annotations to your controllers. This has the advantage of not having any additional config file, but also poses the problem of your URLs being dispersed in your code. You can find the source code of this example in the examples/chapter2/annotationcontroller directory.

How to do it...

Go to your project and install the router module via conf/dependencies.yml:

```
require:
    - play
    - play -> router head
```

Then run play deps and the router module should be installed in the modules/ directory of your application. Change your controller like this:

```java
@StaticRoutes({
    @ServeStatic(value="/public/", directory="public")
})
public class Application extends Controller {

    @Any(value="/", priority=100)
    public static void index() {
        forbidden("Reserved for administrator");
    }

    @Put(value="/", priority=2, accept="application/json")
    public static void hiddenIndex() {
        renderText("Secret news here");
    }

    @Post("/ticket")
    public static void getTicket(String username, String password) {
        String uuid = UUID.randomUUID().toString();
        renderJSON(uuid);
    }
}
```

How it works...

Installing and enabling the module should not leave any open questions for you at this point. As you can see in the controller, it is now filled with annotations that resemble the entries in the routes.conf file, which you could possibly have deleted by now for this example.
However, then your application will not start, so you have to have at least an empty file.

The @ServeStatic annotation replaces the static command in the routes file. The @StaticRoutes annotation is just used for grouping several @ServeStatic annotations and could be left out in this example. Each controller call now has to have an annotation in order to be reachable. The name of the annotation is the HTTP method, or @Any if it should match all HTTP methods. Its only mandatory parameter is the value, which resembles the URI — the second field in routes.conf. All other parameters are optional. Especially interesting is the priority parameter, which can be used to give certain methods precedence. This allows a lower-prioritized catch-all controller like in the preceding example, while special handling applies if the URI is called with the PUT method.

You can easily check the correct behavior by using curl, a very practical command-line HTTP client:

```
curl -v localhost:9000/
```

This command should give you a result similar to this:

```
> GET / HTTP/1.1
> User-Agent: curl/7.21.0 (i686-pc-linux-gnu) libcurl/7.21.0 OpenSSL/0.9.8o zlib/1.2.3.4 libidn/1.18
> Host: localhost:9000
> Accept: */*
>
< HTTP/1.1 403 Forbidden
< Server: Play! Framework;1.1;dev
< Content-Type: text/html; charset=utf-8
< Set-Cookie: PLAY_FLASH=;Path=/
< Set-Cookie: PLAY_ERRORS=;Path=/
< Set-Cookie: PLAY_SESSION=0c7df945a5375480993f51914804284a3bbca726-%00___ID%3A70963572-b0fc-4c8c-b8d5-871cb842c5a2%00;Path=/
< Cache-Control: no-cache
< Content-Length: 32
<
<h1>Reserved for administrator</h1>
```

You can see the HTTP error message and the content returned. You can trigger a PUT request in a similar fashion:

```
curl -X PUT -v localhost:9000/

> PUT / HTTP/1.1
> User-Agent: curl/7.21.0 (i686-pc-linux-gnu) libcurl/7.21.0 OpenSSL/0.9.8o zlib/1.2.3.4 libidn/1.18
> Host: localhost:9000
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: Play! Framework;1.1;dev
< Content-Type: text/plain; charset=utf-8
< Set-Cookie: PLAY_FLASH=;Path=/
< Set-Cookie: PLAY_ERRORS=;Path=/
< Set-Cookie: PLAY_SESSION=f0cb6762afa7c860dde3fe1907e88473476e2564-%00___ID%3A6cc88736-20bb-43c1-9d43-42af47728132%00;Path=/
< Cache-Control: no-cache
< Content-Length: 16
<
Secret news here
```

As you can see, thanks to its higher priority for the PUT method, the hiddenIndex() controller method was chosen and its response returned.

There's more...

The router module is a small but handy module, which is perfectly suited to take a first look at modules and to understand how the routing mechanism of the Play framework works at its core. You should take a look at the source if you need to implement custom mechanisms of URL routing.

Mixing the configuration file and annotations is possible

You can use the router module and the routes file together — this is needed when using modules, as they cannot be specified in annotations. However, keep in mind that this can be pretty confusing. You can find more information about the router module at http://www.playframework.org/modules/router.

Basics of caching

Caching is quite a complex and multi-faceted technique when implemented correctly. However, implementing caching in your application should not be complex; rather, the mindwork beforehand, where you think about what and when to cache, should be. There are many different aspects, layers, and types (and their combinations) of caching in any web application. This recipe will give a short overview of the different types of caching and how to use them. You can find the source code of this example in the chapter2/caching-general directory.

Getting ready

First, it is important that you understand where caching can happen — inside and outside of your Play application. So let's start by looking at the caching possibilities of the HTTP protocol. HTTP sometimes looks like a simple protocol, but is tricky in the details.
However, it is one of the most proven protocols on the Internet, and thus it is always useful to rely on its functionality. HTTP allows the caching of contents by setting specific headers in the response. There are several headers which can be set:

- Cache-Control: This is a header which must be parsed and used by the client and also all the proxies in between.
- Last-Modified: This adds a timestamp explaining when the requested resource was changed the last time. On the next request the client may send an If-Modified-Since header with this date. Now the server may just return an HTTP 304 code without sending any data back.
- ETag: An ETag is basically the same as a Last-Modified header, except it has a semantic meaning. It is actually a calculated hash value resembling the resource behind the requested URL instead of a timestamp. This means the server can decide when a resource has changed and when it has not. This could also be used for some type of optimistic locking.

So, this is a type of caching on which the requesting client has some influence. There are also other forms of caching which are purely on the server side. In most other Java web frameworks, the HttpSession object is a classic example belonging to this case. Play has a cache mechanism on the server side. It should be used to store big session data, in this case any data exceeding the 4KB maximum cookie size. Be aware that there is a semantic difference between a cache and a session. You should not rely on the data being in the cache, and thus need to handle cache misses. You can use the Cache class in your controller and model code. The great thing about it is that it is an abstraction of a concrete cache implementation. If you only use one node for your application, you can use the built-in ehCache for caching. As soon as your application needs more than one node, you can configure a memcached in your application.conf and there is no need to change any of your code.
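The cache-miss handling mentioned above can be sketched as follows. This is a self-contained illustration of the pattern, not Play's actual Cache class — the toy in-memory Cache, the key names, and the "expensive lookup" placeholder are all made up for the example:

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for a server-side cache (a real one, like Play's Cache
// backed by ehCache or memcached, may evict entries at any time).
// The point is the pattern: always be prepared for get() to return null.
class Cache {
    private static final Map<String, Object> store = new HashMap<String, Object>();
    static Object get(String key) { return store.get(key); }
    static void set(String key, Object value) { store.put(key, value); }
}

class ProductService {
    static int lookups = 0; // counts how often the expensive path ran

    static String findProduct(long id) {
        String product = (String) Cache.get("product-" + id);
        if (product == null) {            // cache miss -- recompute
            lookups++;
            product = "product#" + id;    // stands in for a costly DB query
            Cache.set("product-" + id, product);
        }
        return product;
    }
}
```

The second call for the same key is served from the cache, so the expensive path runs only once — but the code still works correctly if the entry disappears in between.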
Furthermore, you can also cache snippets of your templates. For example, there is no need to reload the portal page of a user on every request when you can cache it for 10 minutes. This also leads to a very simple truth: caching gives you a lot of speed and might even lower your database load in some cases, but it is not free. Caching means you need RAM — lots of RAM in most cases. So make sure the system you are caching on never needs to swap, otherwise you could read the data from disk anyway. This can be a special problem in cloud deployments, as there are often limitations on available RAM.

The following examples show how to utilize the different caching techniques. We will show four different use cases of caching in the accompanying test:

```java
public class CachingTest extends FunctionalTest {

    @Test
    public void testThatCachingPagePartsWork() {
        Response response = GET("/");
        String cachedTime = getCachedTime(response);
        assertEquals(getUncachedTime(response), cachedTime);
        response = GET("/");
        String newCachedTime = getCachedTime(response);
        assertNotSame(getUncachedTime(response), newCachedTime);
        assertEquals(cachedTime, newCachedTime);
    }

    @Test
    public void testThatCachingWholePageWorks() throws Exception {
        Response response = GET("/cacheFor");
        String content = getContent(response);
        response = GET("/cacheFor");
        assertEquals(content, getContent(response));
        Thread.sleep(6000);
        response = GET("/cacheFor");
        assertNotSame(content, getContent(response));
    }

    @Test
    public void testThatCachingHeadersAreSet() {
        Response response = GET("/proxyCache");
        assertIsOk(response);
        assertHeaderEquals("Cache-Control", "max-age=3600", response);
    }

    @Test
    public void testThatEtagCachingWorks() {
        Response response = GET("/etagCache/123");
        assertIsOk(response);
        assertContentEquals("Learn to use etags, dumbass!", response);

        Request request = newRequest();
        String etag = String.valueOf("123".hashCode());
        Header noneMatchHeader = new Header("if-none-match", etag);
        request.headers.put("if-none-match", noneMatchHeader);
        DateTime ago = new DateTime().minusHours(12);
        String agoStr = Utils.getHttpDateFormatter().format(ago.toDate());
        Header modifiedHeader = new Header("if-modified-since", agoStr);
        request.headers.put("if-modified-since", modifiedHeader);
        response = GET(request, "/etagCache/123");
        assertStatus(304, response);
    }

    private String getUncachedTime(Response response) {
        return getTime(response, 0);
    }

    private String getCachedTime(Response response) {
        return getTime(response, 1);
    }

    private String getTime(Response response, int pos) {
        assertIsOk(response);
        String content = getContent(response);
        return content.split("\n")[pos];
    }
}
```

The first test checks a very nice feature. Since Play 1.1, you can cache parts of a page — more exactly, parts of a template. This test opens a URL and the page returns the current date and the date of such a cached template part, which is cached for about 10 seconds. In the first request, when the cache is empty, both dates are equal. If you repeat the request, the first date is current while the second date is the cached one.

The second test puts the whole response in the cache for 5 seconds. In order to ensure that expiration works as well, this test waits for six seconds and retries the request.

The third test ensures that the correct headers for proxy-based caching are set.

The fourth test uses an HTTP ETag for caching. If the If-Modified-Since and If-None-Match headers are not supplied, it returns a string. On adding these headers with the correct ETag (in this case the hashCode of the string 123) and a date from 12 hours before, a 304 Not Modified response should be returned.

How to do it...
Add four simple routes to the configuration as shown in the following code:

```
GET     /                   Application.index
GET     /cacheFor           Application.indexCacheFor
GET     /proxyCache         Application.proxyCache
GET     /etagCache/{name}   Application.etagCache
```

The application class features the following controllers:

```java
public class Application extends Controller {

    public static void index() {
        Date date = new Date();
        render(date);
    }

    @CacheFor("5s")
    public static void indexCacheFor() {
        Date date = new Date();
        renderText("Current time is: " + date);
    }

    public static void proxyCache() {
        response.cacheFor("1h");
        renderText("Foo");
    }

    @Inject
    private static EtagCacheCalculator calculator;

    public static void etagCache(String name) {
        Date lastModified = new DateTime().minusDays(1).toDate();
        String etag = calculator.calculate(name);
        if (!request.isModified(etag, lastModified.getTime())) {
            throw new NotModified();
        }
        response.cacheFor(etag, "3h", lastModified.getTime());
        renderText("Learn to use etags, dumbass!");
    }
}
```

As you can see in the controller, the class to calculate ETags is injected into the controller. This is done on startup with a small job as shown in the following code:

```java
@OnApplicationStart
public class InjectionJob extends Job implements BeanSource {

    private Map<Class, Object> clazzMap = new HashMap<Class, Object>();

    public void doJob() {
        clazzMap.put(EtagCacheCalculator.class, new EtagCacheCalculator());
        Injector.inject(this);
    }

    public <T> T getBeanOfType(Class<T> clazz) {
        return (T) clazzMap.get(clazz);
    }
}
```

The calculator itself is as simple as possible:

```java
public class EtagCacheCalculator implements ControllerSupport {

    public String calculate(String str) {
        return String.valueOf(str.hashCode());
    }
}
```

The last piece needed is the template of the index() controller, which looks like this:

```
Current time is: ${date}
#{cache 'mainPage', for:'5s'}
Current time is: ${date}
#{/cache}
```

How it works...

Let's check the functionality per controller call.
The index() controller has no special treatment inside the controller. The current date is put into the template and that's it. However, the caching logic is in the template here, because not the whole but only a part of the returned data should be cached, and for that a #{cache} tag is used. The tag requires two arguments to be passed. The for parameter allows you to set the expiry out of the cache, while the first parameter defines the key used inside the cache. This allows pretty interesting things. Whenever you are in a page where something is exclusively rendered for a user (like his portal entry page), you could cache it with a key which includes the user name or the session ID, like this:

```
#{cache 'home-' + connectedUser.email, for:'15min'}
${user.name}
#{/cache}
```

This kind of caching is completely transparent to the user, as it happens exclusively on the server side.

The same applies for the indexCacheFor() controller. Here, the whole page gets cached instead of parts inside the template. This is a pretty good fit for non-personalized, high-performance delivery of pages, which often are only a very small portion of your application. However, you already have to think about caching beforehand. If you do a time-consuming JPA calculation and then reuse the cached result in the template, you have still wasted CPU cycles and just saved some rendering time.

The third controller call, proxyCache(), is actually the simplest of all. It just sets the proxy expiry header called Cache-Control. It is optional to set this in your code, because Play is configured to set it as well when the http.cacheControl parameter in your application.conf is set. Be aware that this works only in production, and not in development mode.

The most complex controller is the last one. The first action is to find out the last modified date of the data you want to return. In this case it is 24 hours ago. Then the ETag needs to be created somehow.
In this case, the calculator gets a String passed. In a real-world application you would more likely pass the entity, and the service would extract some of its properties, which are used to calculate the ETag using a pretty much collision-safe hash algorithm. After both values have been calculated, you can check in the request whether the client needs to get new data or may use the old data. This is what happens in the request.isModified() method. If the client either did not send all required headers or an older timestamp was used, real data is returned — in this case, a simple string advising you to use an ETag the next time. Furthermore, the calculated ETag and a maximum expiry time are also added to the response via response.cacheFor().

A last specialty in the etagCache() controller is the use of the EtagCacheCalculator. The implementation does not matter in this case, except that it must implement the ControllerSupport interface. However, the initialization of the injected class is still worth a mention. If you take a look at the InjectionJob class, you will see the creation of the class in the doJob() method on startup, where it is put into a local map. The Injector.inject() call does the magic of injecting the EtagCacheCalculator instance into the controllers. As a result of implementing the BeanSource interface, the getBeanOfType() method tries to get the corresponding class out of the map. The map actually should ensure that only one instance of this class exists.

There's more...

Caching is deeply integrated into the Play framework, as it is built with the HTTP protocol in mind. If you want to find out more about it, you will have to examine core classes of the framework.

More information in the ActionInvoker

If you want to know more details about how the @CacheFor annotation works in Play, you should take a look at the ActionInvoker class inside of it.
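The decision request.isModified() makes can be sketched as follows. This is a simplified, self-contained approximation of the conditional-request logic described above — not Play's actual implementation, which also parses the If-Modified-Since header from its HTTP date format:

```java
// Simplified conditional-request check: returns true when the full
// response body must be sent, false when a 304 Not Modified suffices.
class EtagCheck {
    static boolean isModified(String resourceEtag, long lastModified,
                              String ifNoneMatch, Long ifModifiedSince) {
        if (ifNoneMatch == null || ifModifiedSince == null) {
            return true;  // client sent no validators -> send full response
        }
        if (!ifNoneMatch.equals(resourceEtag)) {
            return true;  // content changed since the client cached it
        }
        // an If-Modified-Since older than the resource's change date
        // means the client's copy is stale
        return ifModifiedSince < lastModified;
    }
}
```

With an up-to-date ETag and a client timestamp newer than the last modification (as in the fourth test above, 12 hours ago versus 24 hours ago), this returns false and a 304 can be sent.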
Be thoughtful with ETag calculation

ETag calculation is costly, especially if you are calculating more than the last-modified stamp. You should think about performance here. Perhaps it would be useful to calculate the ETag after saving the entity and to store it directly at the entity in the database. It is useful to run some tests if you are using the ETag to ensure high performance. In case you want to know more about ETag functionality, you should read RFC 2616. You can also disable the creation of ETags totally, if you set http.useETag=false in your application.conf.

Use a plugin instead of a job

The job that implements the BeanSource interface is not a very clean solution to the problem of calling Injector.inject() on startup of an application. It would be better to use a plugin in this case.
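The advice above — calculating the ETag once at save time and storing it with the entity — could be sketched roughly like this. The Article entity and its fields are hypothetical, purely for illustration; in a real Play application you would hook this into the model's save logic:

```java
import java.security.MessageDigest;

// Hypothetical entity (not from the recipe): the ETag is computed once
// when the entity is saved and persisted alongside it, so answering a
// conditional GET later needs only a cheap string comparison instead of
// hashing the content on every request.
class Article {
    String title;
    String body;
    String etag; // would be persisted with the entity

    // Call this from your save/update logic.
    void onSave() {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            md.update((title + "|" + body).getBytes("UTF-8"));
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest()) {
                sb.append(String.format("%02x", b)); // hex-encode the digest
            }
            etag = sb.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

The same input always yields the same ETag, and any change to title or body yields a different one, which is exactly the validator semantics the client relies on.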
Packt
19 Feb 2013
17 min read

Apache Solr Configuration

During the writing of this article, I used Solr Version 4.0 and Jetty Version 8.1.5. If another version of Solr is mandatory for a feature to run, then it will be mentioned. If you don't have any experience with Apache Solr, please refer to the Apache Solr tutorial, which can be found at http://lucene.apache.org/solr/tutorial.html.

Running Solr on Jetty

The simplest way to run Apache Solr on a Jetty servlet container is to run the provided example configuration based on embedded Jetty. But that's not the case here. In this recipe, I would like to show you how to configure and run Solr on a standalone Jetty container.

Getting ready

First of all you need to download the Jetty servlet container for your platform. You can get your download package from an automatic installer (such as apt-get), or you can download it yourself from http://jetty.codehaus.org/jetty/

How to do it...

The first thing is to install the Jetty servlet container, which is beyond the scope of this article, so we will assume that you have Jetty installed in the /usr/share/jetty directory or you copied the Jetty files to that directory.

Let's start by copying the solr.war file to the webapps directory of the Jetty installation (so the whole path would be /usr/share/jetty/webapps). In addition to that we need to create a temporary directory in the Jetty installation, so let's create the temp directory in the Jetty installation directory.

Next we need to copy and adjust the solr.xml file from the context directory of the Solr example distribution to the context directory of the Jetty installation. The final file contents should look like the following code:

```xml
<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">
<Configure class="org.eclipse.jetty.webapp.WebAppContext">
  <Set name="contextPath">/solr</Set>
  <Set name="war"><SystemProperty name="jetty.home"/>/webapps/solr.war</Set>
  <Set name="defaultsDescriptor"><SystemProperty name="jetty.home"/>/etc/webdefault.xml</Set>
  <Set name="tempDirectory"><Property name="jetty.home" default="."/>/temp</Set>
</Configure>
```

Downloading the example code: You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Now we need to copy the jetty.xml, webdefault.xml, and logging.properties files from the etc directory of the Solr distribution to the configuration directory of Jetty, so in our case to the /usr/share/jetty/etc directory.

The next step is to copy the Solr configuration files to the appropriate directory. I'm talking about files such as schema.xml, solrconfig.xml, solr.xml, and so on. Those files should be in the directory specified by the solr.solr.home system variable (in my case this was the /usr/share/solr directory). Please remember to preserve the directory structure you'll see in the example deployment, so for example, the /usr/share/solr directory should contain the solr.xml (and in addition zoo.cfg in case you want to use SolrCloud) file with contents like so:

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<solr persistent="true">
  <cores adminPath="/admin/cores" defaultCoreName="collection1">
    <core name="collection1" instanceDir="collection1" />
  </cores>
</solr>
```

All the other configuration files should go to the /usr/share/solr/collection1/conf directory (place the schema.xml and solrconfig.xml files there along with any additional configuration files your deployment needs). Your cores may have other names than the default collection1, so please be aware of that.

The last thing about the configuration is to update the /etc/default/jetty file and add -Dsolr.solr.home=/usr/share/solr to the JAVA_OPTIONS variable of that file.
The whole line with that variable could look like the following:

```
JAVA_OPTIONS="-Xmx256m -Djava.awt.headless=true -Dsolr.solr.home=/usr/share/solr/"
```

If you didn't install Jetty with apt-get or similar software, you may not have the /etc/default/jetty file. In that case, add the -Dsolr.solr.home=/usr/share/solr parameter to the Jetty startup.

We can now run Jetty to see if everything is OK. To start Jetty that was installed, for example, using the apt-get command, use the following command:

```
/etc/init.d/jetty start
```

You can also run Jetty with a java command. Run the following command in the Jetty installation directory:

```
java -Dsolr.solr.home=/usr/share/solr -jar start.jar
```

If there were no exceptions during startup, we have a running Jetty with Solr deployed and configured. To check if Solr is running, try going to the following address with your web browser: http://localhost:8983/solr/. You should see the Solr front page with cores, or a single core, mentioned. Congratulations! You just successfully installed, configured, and ran the Jetty servlet container with Solr deployed.

How it works...

For the purpose of this recipe, I assumed that we needed a single-core installation with only the schema.xml and solrconfig.xml configuration files. Multicore installation is very similar — it differs only in terms of the Solr configuration files.

The first thing we did was copy the solr.war file and create the temp directory. The WAR file is the actual Solr web application. The temp directory will be used by Jetty to unpack the WAR file.

The solr.xml file we placed in the context directory enables Jetty to define the context for the Solr web application.
As you can see in its contents, we set the context to be /solr, so our Solr application will be available under http://localhost:8983/solr/. We also specified where Jetty should look for the WAR file (the war property), where the web application descriptor file (the defaultsDescriptor property) is, and finally where the temporary directory will be located (the tempDirectory property).

The next step is to provide configuration files for the Solr web application. Those files should be in the directory specified by the solr.solr.home system variable. I decided to use the /usr/share/solr directory to ensure that I'll be able to update Jetty without the need to override or delete the Solr configuration files. When copying the Solr configuration files, you should remember to include all the files and the exact directory structure that Solr needs. So in the directory specified by the solr.solr.home variable, the solr.xml file should be available — the one that describes the cores of your system.

The solr.xml file is pretty simple — there should be a root element called solr. Inside it there should be a cores tag (with the adminPath attribute set to the address where Solr's cores administration API is available and the defaultCoreName attribute that says which is the default core). The cores tag is a parent for core definitions — each core should have its own core tag with a name attribute specifying the core name and an instanceDir attribute specifying the directory where the core-specific files will be available (such as the conf directory).

If you installed Jetty with the apt-get command or similar, you will need to update the /etc/default/jetty file to include the solr.solr.home variable for Solr to be able to see its configuration directory. After all those steps we are ready to launch Jetty. If you installed Jetty with apt-get or similar software, you can run Jetty with the first command shown in the example.
Otherwise you can run Jetty with a java command from the Jetty installation directory. After opening the example URL in your web browser you should see the Solr front page with a single core. Congratulations! You just successfully configured and ran the Jetty servlet container with Solr deployed.

There's more...

There are a few tasks you can do to counter some problems when running Solr within the Jetty servlet container. Here are the most common ones that I encountered during my work.

I want Jetty to run on a different port

Sometimes it's necessary to run Jetty on a port other than the default one. We have two ways to achieve that:

Adding an additional startup parameter, jetty.port. The startup command would look like the following:

```
java -Djetty.port=9999 -jar start.jar
```

Changing the jetty.xml file — to do that you need to change the following line:

```xml
<Set name="port"><SystemProperty name="jetty.port" default="8983"/></Set>
```

to:

```xml
<Set name="port"><SystemProperty name="jetty.port" default="9999"/></Set>
```

Buffer size is too small

Buffer overflow is a common problem when our queries are getting too long and too complex — for example, when we use many logical operators or long phrases. When the standard head buffer is not enough, you can resize it to meet your needs. To do that, you add the following line to the Jetty connector in the jetty.xml file.
Of course the value shown in the example can be changed to the one that you need:

```xml
<Set name="headerBufferSize">32768</Set>
```

After adding the value, the connector definition should look more or less like the following snippet:

```xml
<Call name="addConnector">
  <Arg>
    <New class="org.mortbay.jetty.bio.SocketConnector">
      <Set name="port"><SystemProperty name="jetty.port" default="8080"/></Set>
      <Set name="maxIdleTime">50000</Set>
      <Set name="lowResourceMaxIdleTime">1500</Set>
      <Set name="headerBufferSize">32768</Set>
    </New>
  </Arg>
</Call>
```

Running Solr on Apache Tomcat

Sometimes you need to choose a servlet container other than Jetty. Maybe your client has other applications running on another servlet container, or maybe you just don't like Jetty. Whatever your requirements are that put Jetty out of the scope of your interest, the first thing that comes to mind is a popular and powerful servlet container — Apache Tomcat. This recipe will give you an idea of how to properly set up and run Solr in the Apache Tomcat environment.

Getting ready

First of all we need an Apache Tomcat servlet container. It can be found at the Apache Tomcat website — http://tomcat.apache.org. I concentrated on Tomcat Version 7.x because at the time of writing of this book it was mature and stable. The version that I used during the writing of this recipe was Apache Tomcat 7.0.29, which was the newest one at the time.

How to do it...

To run Solr on Apache Tomcat we need to follow these simple steps:

Firstly, you need to install Apache Tomcat. The Tomcat installation is beyond the scope of this book, so we will assume that you have already installed this servlet container in the directory specified by the $TOMCAT_HOME system variable.

The second step is preparing the Apache Tomcat configuration files.
To do that we need to add the following attribute to the connector definition in the server.xml configuration file:

```
URIEncoding="UTF-8"
```

The portion of the modified server.xml file should look like the following code snippet:

```xml
<Connector port="8080" protocol="HTTP/1.1"
   connectionTimeout="20000"
   redirectPort="8443"
   URIEncoding="UTF-8" />
```

The third step is to create a proper context file. To do that, create a solr.xml file in the $TOMCAT_HOME/conf/Catalina/localhost directory. The contents of the file should look like the following code:

```xml
<Context path="/solr" docBase="/usr/share/tomcat/webapps/solr.war" debug="0" crossContext="true">
  <Environment name="solr/home" type="java.lang.String" value="/usr/share/solr/" override="true"/>
</Context>
```

The next thing is the Solr deployment. To do that we need the apache-solr-4.0.0.war file that contains the necessary files and libraries to run Solr; it is to be copied to the Tomcat webapps directory and renamed solr.war.

The one last thing we need to do is add the Solr configuration files. The files that you need to copy are files such as schema.xml, solrconfig.xml, and so on. Those files should be placed in the directory specified by the solr/home variable (in our case /usr/share/solr/). Please don't forget that you need to ensure the proper directory structure. If you are not familiar with the Solr directory structure, please take a look at the example deployment that is provided with the standard Solr package.
Please remember to preserve the directory structure you'll see in the example deployment, so for example, the /usr/share/solr directory should contain the solr.xml (and in addition zoo.cfg in case you want to use SolrCloud) file with contents like so:

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<solr persistent="true">
  <cores adminPath="/admin/cores" defaultCoreName="collection1">
    <core name="collection1" instanceDir="collection1" />
  </cores>
</solr>
```

All the other configuration files should go to the /usr/share/solr/collection1/conf directory (place the schema.xml and solrconfig.xml files there along with any additional configuration files your deployment needs). Your cores may have other names than the default collection1, so please be aware of that.

Now we can start the servlet container by running the following command:

```
bin/catalina.sh start
```

In the log file you should see a message like this:

```
Info: Server startup in 3097 ms
```

To ensure that Solr is running properly, you can run a browser and point it to an address where Solr should be visible, like the following: http://localhost:8080/solr/

If you see the page with links to administration pages of each of the cores defined, that means that your Solr is up and running.

How it works...

Let's start from the second step, as the installation part is beyond the scope of this book. As you probably know, Solr uses UTF-8 file encoding. That means that we need to ensure that Apache Tomcat will be informed that all requests and responses made should use that encoding. To do that, we modified the server.xml file in the way shown in the example.

The Catalina context file (called solr.xml in our example) says that our Solr application will be available under the /solr context (the path attribute). We also specified the WAR file location (the docBase attribute). We also said that we are not using debug (the debug attribute), and we allowed Solr to access other context manipulation methods.
The last thing is to specify the directory where Solr should look for the configuration files. We do that by adding the solr/home environment variable, with the value attribute set to the path of the directory where we have put the configuration files.

The solr.xml file is pretty simple: there should be a root element called solr. Inside it there should be the cores tag (with the adminPath attribute set to the address where the Solr cores administration API is available, and the defaultCoreName attribute describing which core is the default). The cores tag is a parent for the core definitions: each core should have its own core tag, with a name attribute specifying the core name and an instanceDir attribute specifying the directory where the core-specific files live (such as the conf directory).

The shell command that is shown starts Apache Tomcat. There are some other options of the catalina.sh (or catalina.bat) script; these options are as follows:

stop: This stops Apache Tomcat
restart: This restarts Apache Tomcat
debug: This starts Apache Tomcat in debug mode
run: This runs Apache Tomcat in the current window, so you can see the output on the console from which you ran Tomcat

After opening the example address in a web browser, you should see the Solr front page with a core (or cores, if you have a multicore deployment). Congratulations! You have just successfully configured and run the Apache Tomcat servlet container with Solr deployed.

There's more...

There are some other tasks that are common problems when running Solr on Apache Tomcat.

Changing the port on which we see Solr running on Tomcat

Sometimes it is necessary to run Apache Tomcat on a port other than 8080, which is the default one. To do that, you need to modify the port attribute of the connector definition in the server.xml file located in the $TOMCAT_HOME/conf directory.
If you would like your Tomcat to run on port 9999, the definition should look like the following code snippet:

<Connector port="9999" protocol="HTTP/1.1"
 connectionTimeout="20000"
 redirectPort="8443"
 URIEncoding="UTF-8" />

The original definition looks like the following snippet:

<Connector port="8080" protocol="HTTP/1.1"
 connectionTimeout="20000"
 redirectPort="8443"
 URIEncoding="UTF-8" />

Installing a standalone ZooKeeper

You may know that in order to run SolrCloud (the distributed Solr installation) you need to have Apache ZooKeeper installed. ZooKeeper is a centralized service for maintaining configuration, naming, and providing distributed synchronization. SolrCloud uses ZooKeeper to synchronize configuration and cluster state (such as elected shard leaders), and that's why it is crucial to have a highly available and fault-tolerant ZooKeeper installation. If you have a single ZooKeeper instance and it fails, then your SolrCloud cluster will fail too. So, this recipe will show you how to install ZooKeeper so that it's not a single point of failure in your cluster configuration.

Getting ready

The installation instructions in this recipe cover ZooKeeper version 3.4.3, but they should be usable for any minor release of Apache ZooKeeper. To download ZooKeeper, please go to http://zookeeper.apache.org/releases.html. This recipe shows how to install ZooKeeper in a Linux-based environment. You also need Java installed.

How to do it...

Let's assume that we have decided to install ZooKeeper in the /usr/share/zookeeper directory of each server, and that we want three servers (with IP addresses 192.168.1.1, 192.168.1.2, and 192.168.1.3) to host the distributed ZooKeeper installation. After downloading the ZooKeeper archive, we create the necessary directory:

sudo mkdir /usr/share/zookeeper

Then we unpack the downloaded archive into the newly created directory. We do that on all three servers.
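The port change above is a one-line substitution, which can be sketched as follows. The /tmp/server-demo.xml path is a stand-in so the sketch is self-contained; on a real installation you would edit $TOMCAT_HOME/conf/server.xml (and back it up first).

```shell
#!/bin/sh
# Sketch: switch the Tomcat connector from port 8080 to 9999.
# /tmp/server-demo.xml is a demo copy; use $TOMCAT_HOME/conf/server.xml for real.
SERVER_XML="/tmp/server-demo.xml"

# Create a demo connector definition if none exists, so the sketch is runnable.
[ -f "$SERVER_XML" ] || cat > "$SERVER_XML" <<'EOF'
<Connector port="8080" protocol="HTTP/1.1"
 connectionTimeout="20000"
 redirectPort="8443"
 URIEncoding="UTF-8" />
EOF

# Replace the connector port in place (GNU sed syntax).
sed -i 's/port="8080"/port="9999"/' "$SERVER_XML"
grep 'Connector port' "$SERVER_XML"
```

Restart Tomcat after the change so the connector is rebound to the new port.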
Next we need to change our ZooKeeper configuration file and specify the servers that will form the ZooKeeper quorum. So we edit the /usr/share/zookeeper/conf/zoo.cfg file and add the following entries:

clientPort=2181
dataDir=/usr/share/zookeeper/data
tickTime=2000
initLimit=10
syncLimit=5
server.1=192.168.1.1:2888:3888
server.2=192.168.1.2:2888:3888
server.3=192.168.1.3:2888:3888

Now we can start the ZooKeeper servers with the following command:

/usr/share/zookeeper/bin/zkServer.sh start

If everything went well, you should see something like the following:

JMX enabled by default
Using config: /usr/share/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

And that's all. Of course, you can also add the ZooKeeper service to start automatically during operating system startup, but that's beyond the scope of this recipe and of the book itself.

How it works...

Let's skip the first part, because creating the directory and unpacking the ZooKeeper archive there is quite simple. What I would like to concentrate on are the configuration values of the ZooKeeper server. The clientPort property specifies the port on which our SolrCloud servers should connect to ZooKeeper. The dataDir property specifies the directory where ZooKeeper will hold its data. So far, so good, right? Now for the more advanced properties: the tickTime property, specified in milliseconds, is the basic time unit for ZooKeeper. The initLimit property specifies how many ticks the initial synchronization phase can take. Finally, the syncLimit property specifies how many ticks can pass between sending a request and receiving an acknowledgement. There are also three additional properties: server.1, server.2, and server.3. These define the addresses of the ZooKeeper instances that will form the quorum. Each of these values has three parts separated by colon characters.
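The configuration step above can be sketched as a small script run on each server. One detail worth noting: a replicated ZooKeeper setup also expects a myid file inside dataDir containing that server's number (1, 2, or 3, matching the server.N entries). The /tmp/zookeeper-demo path is a demo stand-in for the recipe's /usr/share/zookeeper, and MYID=1 is an example value to change per server.

```shell
#!/bin/sh
# Sketch: write zoo.cfg and the per-server myid file for a three-node quorum.
# /tmp/zookeeper-demo is a demo path; the recipe uses /usr/share/zookeeper.
ZK_HOME="/tmp/zookeeper-demo"
MYID="1"   # set to 1, 2, or 3 on each respective server

mkdir -p "$ZK_HOME/conf" "$ZK_HOME/data"

cat > "$ZK_HOME/conf/zoo.cfg" <<'EOF'
clientPort=2181
dataDir=/usr/share/zookeeper/data
tickTime=2000
initLimit=10
syncLimit=5
server.1=192.168.1.1:2888:3888
server.2=192.168.1.2:2888:3888
server.3=192.168.1.3:2888:3888
EOF

# Each instance identifies itself by the number stored in dataDir/myid
# (here written under the demo path for illustration).
echo "$MYID" > "$ZK_HOME/data/myid"

echo "server $MYID configured in $ZK_HOME"
```

With zoo.cfg and myid in place on all three servers, zkServer.sh start brings each node into the quorum.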
The first part is the IP address of the ZooKeeper server, and the second and third parts are the ports used by ZooKeeper instances to communicate with each other.

Packt
21 Oct 2009
6 min read

Configuring OpenCms Search

A Quick Overview of Lucene

Included with OpenCms is a distribution of the Lucene search engine. Lucene is an open source, high-performance text search engine that is both easy to use and full-featured. Lucene is not a product; it is a Java library providing data indexing, search, and retrieval support. OpenCms integrates with Lucene to provide these features for its VFS content. Though Lucene is simple to use, it is highly flexible and has many options. We will not go into the full details of all the options here, but will provide a basic overview, which will help us in developing our search code. A full understanding of Lucene is not required for completing this article, but interested readers can find more information at the Lucene website: http://jakarta.apache.org/lucene. There are also several excellent books available, which can easily be found with a web search.

Search Indexes

For any data to be searched, it must first be indexed. Lucene supports both disk and memory based indexes, but OpenCms uses the more suitable disk based indexes. There are three basic concepts to understand regarding Lucene search indexes: Documents, Analyzers, and Fields.

Document: A document is a collection of Lucene fields. A search index is made up of documents. Although each document is built from some actual source content, there is no need for the document to exactly resemble it. The fields stored in the document are indexed and stored, and are used to locate the document.

Analyzer: An analyzer is responsible for breaking down source content into words (or terms) for indexing. An analyzer may take a very simple approach of only parsing content at whitespace breaks, or a more complex approach of removing common words, identifying email and web addresses, and understanding abbreviations or other languages. Though Lucene provides many optional analyzers, the default one used by OpenCms is usually the best choice.
For more advanced search applications, the other analyzers should be looked at in more depth.

Field: A field consists of data that can be stored, indexed, or queried. Field values are searched when a query is made to the index. There are two characteristics of a field that determine how it gets treated when indexed:

Field Storage: The storage characteristic of a field determines whether or not the field's data value gets stored in the index. It is not necessary to store field data if the value is unimportant and is used only to help locate a document. On the other hand, field data should be stored if the value needs to be returned with the search result.

Field Indexing: This characteristic determines whether a field will get indexed, and if so, how. There is no need to index fields that will not be used as search terms; in that case the value should not be indexed. This is useful if we need to return a field value but will never search for the document using that field in a search term. For fields that are searchable, the field may be indexed in either a tokenized or an un-tokenized fashion. If a field is tokenized, it will first be run through an analyzer, and each term generated by the analyzer will be indexed for the field. If it is un-tokenized, the field's value is indexed verbatim. In this case, the term must be searched for using an exact match of its value, including the case.

The two characteristics may be combined to form four combinations. While choosing a field type, consideration should thus be given to how the item will need to be located, as well as what data will need to be returned from the index. Lucene also provides the ability to define a boost value for a field. This affects the relevance of the field when it is used in a search. A value other than the default of 1.0 may be used to raise or lower the relevance. These are the important concepts to understand while creating a Lucene search index.
After an index has been created, documents may be searched through queries.

Search Queries

Querying Lucene search indexes is supported through a Java API and a search query language. Search queries are made up of terms and operators. A term can be a simple word such as "hello" or a phrase such as "hello world". Operators are used to form logical expressions with terms, such as AND or NOT. With the Java API, terms can be built and aggregated together along with operators to form a query. When using the query language, a Java class is provided to parse the query and convert it into a format suitable for passing to the engine. In addition to these search features, there are more advanced operations that may be performed, such as fuzzy searches, range searches, and proximity searches. All these options and all this flexibility allow Lucene to be used in an application in many ways. OpenCms does a good job of using these options to provide search capabilities for a wide range of content types. Next, we will look at how OpenCms interfaces with Lucene to provide this support.

Configuring OpenCms Search

OpenCms maintains search settings in the opencms-search.xml configuration file located in the WEB-INF/config directory. Prior to OpenCms 7, most of the settings in this configuration file needed to be made by hand. With OpenCms 7, the Search Management tool in the Administration View has been improved to cover most of the settings. We will first go over the settings that are controlled through the Search Management view, and will then visit the settings that must still be changed by hand. The first thing we'll do is define our own search index for the blog content. Creating a new search index is simple with the Administration tool. We access it by clicking on the Search Management icon of the Administrative View, and then clicking on the New Index icon. The Name field contains the name of the index file. This name can also be passed to a Java API.
If the content differs between the online and offline areas, we can create an index for each one. For now, we will start with the offline index. We'll name it: Blogs – Offline. The other fields are:

Rebuild mode: This determines whether the index is to be built manually or automatically as content changes. We want automatic updating and will hence choose auto.

Locale: We must select a locale for the content. OpenCms will extract the content for the given locale when it builds our index. If we were supporting more than one locale, it would be a good idea to include the locale in the index name.

Project: This selects content from either the Online or Offline project.

Field configuration: This selects a field configuration to be used for the index. We do not have our own field configuration yet, so for now press OK to save the index.

Next, we will define a field configuration for the blog content.

Packt
28 Oct 2010
5 min read

Manually Translating Your Joomla! Site's Content into Your Desired Language

Joomla! 1.5 Top Extensions Cookbook

Over 80 great recipes for taking control of Joomla! extensions: set up and use the best extensions available for Joomla!, covering extensions for just about every use of Joomla!. Packed with recipes to help you get the most out of Joomla! extensions. Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible.

The reader would benefit from the previous article on Joomla! 1.5 Top Extensions for Using Languages.

Getting ready...

Joom!Fish is the most popular extension for building multilingual Joomla! websites. Download the latest version of Joom!Fish from http://joomlacode.org/gf/download/frsrelease/11315/45280/JoomFish2.0.4.zip, and install it from the Extensions | Install/Uninstall screen. It installs one component, two modules, and several plugins.

How to do it...

After installation, carry out the following steps:

1. From the Joomla! administration panel, click on Components | Joom!Fish | Control Panel. This shows the Joom!Fish :: The multilingual Content Manager for Joomla! screen.
2. Click on Language Configuration. This shows the Joom!Fish Language Manager screen, and lists all the installed languages.
3. In the Active column, enable the checkboxes to activate the required languages. If you don't see an image for a language, type the image's URL in the Image filename field.
4. Click the icon displayed in the Config column. This shows the Joom!Fish Language Manager - Translate Configuration screen. In this screen, you can translate some common phrases, for example Offline Message, Site Name, Global Site Meta Description, Global Site Meta Keywords, a help site URL, mail settings, and so on. Type in the translations and click on the Save button in the toolbar.
5. Now click on Translation, select Bengali in the Languages drop-down list, and select Categories in the Content elements drop-down list. This shows the translatable categories.
6. Click on a category name and you should see the Translate screen, with the original text and a textbox for your translation. Type your translation in the Translation fields, enable the Published checkbox, and then click on the Save button in the toolbar. Follow the same process to translate the other categories.
7. When you have finished translating all categories, select Contents in the Content elements drop-down list on the Translate screen. This shows the list of articles available for translation.
8. Click an article title to translate it. This shows the Translate screen with the original text and textboxes for the translation. Type the translations in the Translation fields, enable the Published checkbox, and click on the Save button in the toolbar.
9. Similarly, change the type in the Content elements drop-down box and translate other content, including Modules, Menus, Contacts, Banners, and so on.
10. When you have finished translating, click on Extensions | Module Manager. This shows the Module Manager screen, listing the installed modules.
11. From the list, click on the Language Selection module. This shows the Module: [Edit] screen.
12. Select Yes in the Published field and select a module position from the Position drop-down list.
13. From the Module Parameters section, in the Appearance of language selector drop-down list, select how you want to display the language selection box. You can choose from Drop down of names, Drop down of names with current language flag, ul-list of names, ul-list of names with flag, ul-list of images, and Raw display of images.
14. Preview the site's frontend and you should see the site in the default language, with the language selection box at the specified position.
15. From the language selection module, click another language, in my case Bangla, to show the site content in that language. Visitors to your site can now switch to any active language through this language selection module.
Note that the URL of the site now appends the language code, for example, http://www.yourjoomlasite.com/index.php?lang=bn, where bn stands for the Bangla language.

There's more...

Note that with Joom!Fish you can translate almost anything: articles, modules, menus, sections, categories, and so on. These translations are done through content elements. You can see the content elements for any component or module by clicking on Components | Joom!Fish | Content Elements. You can download content elements for new extensions from http://extensions.joomla.org/extensions/extension-specific/joomfish-extensions and http://joomlacode.org/gf/project/joomfish/frs/. After downloading content elements, click on the Install button on the Content Elements screen. This shows the Joom!Fish::Content Element Installer screen. Click on the Browse button, select the content element file, and then click on the Upload File & Install button. This installs the content element, and you can then translate the content for that particular component or module.

Summary

This article covered manually translating your site's content into your desired language.

Further resources on this subject:
Adding an Event Calendar to your Joomla! Site using JEvents
Showing your Google calendar on your Joomla! site using GCalendar
Joomla! 1.5 Top Extensions: Adding a Booking System for Events
Joomla! 1.5 Top Extensions for Using Languages
Packt
23 Oct 2013
9 min read

Working with Audio

Planning the audio

In Camtasia Studio, we can stack multiple audio tracks on top of each other. While this is a useful and powerful way to build a soundtrack, it can lead to a cluttered audio output if we do not plan ahead. Audio tracks can be used for a wide range of purposes, so it's best to storyboard the audio to avoid creating a confusing mix. If we consider how each audio track will be used before we begin to overlay the files on the timeline, we can visualize the end result and resist the temptation to layer too many audio effects on top of each other.

The importance of consistency

Producing professional video in Camtasia Studio comes down to consistency and detail. The more consistent we are, and the more we pay attention to detail, the more professional the result will be. By being consistent in our use of audio effects, we can avoid creating unintentional distractions or misleading the viewer. For example, if we choose to use a ping sound to represent a mouse click, we should make sure that all mouse clicks use the same ping sound, so that the viewer understands and associates the sound with the action.

A note on background music

When deciding what audio we want in our video, we should always think about our target audience and the type of message we are trying to deliver. Never use background music unless it adds to the video content. For example, background music can be a useful way of engaging our viewer, but if we are delivering an important health and safety message, or delivering a quiz, a backing track may be distracting. If our audience is the staff in customer-facing departments, we may not want to include audio tracks at all; we wouldn't want the sound from our videos to be audible to a customer.
Types of audio

There are three main types of audio we can add to our video:

Voice-over tracks
Background music
Sound effects

Preparing to record a voice-over

Various factors affect the quality and consistency of voice-over recordings. In Camtasia Studio we can add effects later, but it's best to get the source audio right in the first instance. The factors are as follows:

Voice: We often don't pay attention to the qualities and tones of our own voices, but they can and do change. From day to day, your tone of voice can subtly change; air temperature, illness, or mood can all affect the way your voice sounds in a recording.

Environment: The environment we use to record a voice-over can have a dramatic effect on the end result. Some rooms will give your voice natural reverb; others will sound very dead.

Equipment: The equipment we use will affect the recording. For example, different microphones will produce different results.

When we prepare for a voice-over recording, we must aim to keep our voice, environment, and equipment as stable and consistent as possible. That means we should aim to record the voice-over in one session, so that we can control all of these factors. We may choose a different person to provide the voice-over; again, we should take a consistent approach to how we use their voice. Voice-over recording is always a long process and involves trial, error, and multiple takes. We should allow more time than we feel is strictly necessary, as many recordings inevitably overrun. If any sections of the recording are questionable, we should aim to record all of the alternatives in the same session for a seamless result.

The studio environment

Most Camtasia Studio users do not have access to a professional recording studio. This need not be a problem. We can use practically any quiet room to record our voice-over, although there are some basic pointers that will improve the result.
When choosing a studio location, consider the following:

Ambient noise: Try to record in a quiet environment. If we can use an empty room where there are no passers-by or devices making any noise, this will make our recording clearer. Choose a room away from potential sources of noise (busy corridors, main roads, and so on).

Noise leakage: Ensure that any doors and windows are closed, to minimize noise pollution from outside the room and outside the building.

Equipment noise: Ensure that all unnecessary programs on the PC are closed, to prevent any unwanted sounds or alerts. End any background tasks, such as email checkers or task schedulers, and ensure any instant messaging software is closed or in offline mode.

Positioning: Experiment with placing the microphone in different places around the room. The acoustics of a room can greatly affect the quality of a recording, and taking time to find the best place for the microphone will help. For efficiency, we can test the audio quality quickly by wearing headphones while speaking into the microphone.

Posture: Standing up opens up the diaphragm and improves the sound of our voice when we record. Avoid recording while seated, and hold any notes or papers at eye level to maintain a constant tone.

Using scripts

When it comes to voice-over recording, a well-prepared script is the most important piece of preparation we can do. Working from a script is far simpler than attempting to make up our narration as we go along. It helps to maintain a good pace in the video and greatly reduces the need for multiple takes, making recording far more efficient. Creating a script need not be time-consuming: if we have already planned out and recorded our video track, writing a script will be far simpler.

Writing an effective script

The script you write should support the action in the video and maintain a healthy pace. There are a number of tips we can bear in mind to do this.
These tips are as follows:

Sync audio with video: Plan the script to coincide with any actions we take in the video. This may mean incorporating pauses into the script to allow a certain on-screen action to complete.

Be flexible: We may need to go back and lengthen a section of video to incorporate the voice-over and explanation. It is better to do this than to rush the voice-over and attempt to force it to fit.

Use basic copywriting techniques: We should consider the message in the video and use the appropriate style. For example, if we are describing a process, we would want to use the active voice. In an internal company update, we may want to adopt a more conversational tone.

Be direct and concise: A short and simple statement is far easier to process than a long, drawn-out argument.

We should always test our script prior to the recording session, and be prepared to rewrite and hone the content. Reading a script aloud is a useful way of estimating its length and picking out any awkward phrases that do not flow. We will save time if we perfect the script before we sit down in front of the microphone.

Recording equipment

Most laptop computers have a built-in microphone, as do some desktop computers. While these microphones are perfectly adequate for video or audio chats and other casual uses, we should not use them to create Camtasia Studio recordings. Although the quality may be good, and the audio may be clear, these microphones often pick up a large amount of ambient noise, such as the fans inside the computer. Additionally, the audio captured using built-in microphones often requires processing and amplification, which can degrade its quality. Camtasia Studio has a range of editing tools that can help you to tweak your audio recording. However, processing should always be a last resort: the more we process our voice-over, the more prone the source material is to distortion.
If we have better quality source material, we will not need to rely on these features; this will make the editing process much simpler. When working in Camtasia Studio, it is preferable to invest in a good quality external microphone. Basic microphones are inexpensive and offer considerably better audio recording than built-in microphones.

Choosing a microphone

External microphones are very affordable. Unless you have a specific need for a professional-standard microphone, we recommend a USB microphone. Many of these microphones are sold as podcasting microphones and are perfectly adequate for use in Camtasia Studio. There are two main types of external microphone:

Lapel microphone: Consider a lapel microphone if you plan to operate the computer as you record, or present to the camera while you are speaking. Lapel microphones clip on to your clothing and leave your hands free.

Desk microphone: If you are more comfortable working at a desk, a microphone with a sturdy tripod stand will be a good investment. A good stand will give us a greater degree of flexibility when it comes to microphone placement.

An external microphone with built-in noise cancellation can give us a degree of control at the recording stage, rather than having to edit out noise later.

How to set up an external microphone

We can set up the external microphone before we begin recording by following these steps:

1. Navigate to Tools | Voice Narration. The Voice Narration screen is displayed.
2. Click on Audio setup wizard.... The Audio Setup Wizard screen is displayed.
3. Select the Audio device.

Summary

In this article, we have looked at a range of ways to improve the quality of the audio in our Camtasia Studio projects. We have considered voice-over recording techniques, equipment, editing, sound effects, and background music.
Resources for Article: Further resources on this subject:
Editing attributes [Article]
Basic Editing [Article]
Video Editing in Blender using Video Sequence Editor: Part 1 [Article]

Packt
12 Oct 2011
13 min read

ASP.NET 3.5 CMS: Master Pages, Themes, and Menus

Master Pages

Earlier you were introduced to a feature called Master Pages, but what exactly are they? The idea behind them is one that's been around since the early days of development: the idea that you can inherit the layout of one page for use in another is one that has kept many developers scrambling with includes and user controls. This is where Master Pages come into play. They allow you to lay out a page once and use it over and over. By doing this, you can save yourself countless hours of time, as well as being able to maintain the look and feel of your site from a single place. By implementing a Master Page and using ContentPlaceHolders, your page is able to keep its continuity throughout. You'll see that the Master Page (SimpleCMS.master) looks similar to a standard .aspx page from ASP.NET, but with some slight differences. The <%@ ... %> declaration has had the Page identifier changed to a Master declaration. Here is a standard web page declaration:

<%@ Page Language="VB" MasterPageFile="~/SimpleCMS.master" AutoEventWireup="false" CodeFile="Default.aspx.vb" Inherits="_Default" Title="Untitled Page" %>

Here is the declaration for a Master Page:

<%@ Master Language="VB" CodeFile="SimpleCMS.master.vb" Inherits="SimpleCMS" %>

This tells the underlying ASP.NET framework how to handle this special page. If you look at the code for the page, you will also see that it inherits from System.Web.UI.MasterPage instead of the standard System.Web.UI.Page. They function similarly but, as we will cover in more detail later, they have a few distinct differences. Now, back to the Master Page. Let's take a closer look at the two existing ContentPlaceHolders. The first one you see on the page is the one with the ID of "Head". This is a default item that is added automatically to a new Master Page, and its location is also standard.
The system is setting up your page so that any "child" page later on will be able to put things such as JavaScript and style tags into this location. It's within the HTML <head> tag, and is handled specially by the client's browser. The control's tag contains a minimal number of properties (in reality only four), along with a basic set of events you can tie to. The reason for this is actually pretty straightforward: it doesn't need anything more. The ContentPlaceHolder controls aren't really meant to do much from a programming standpoint. They are meant to be placeholders where other code is injected from the child pages, and this injected code is where all the "real work" is meant to take place. With that in mind, the system acts more as a pass-through, to allow the ContentPlaceHolders to have as little impact on the rest of the site as possible.

Now, back to the existing page: you will see the second preloaded ContentPlaceHolder (ContentPlaceHolder1). Again, this one is automatically added to the new Master Page when it's initially created. Its position is really more of just being "thrown on the page" when you start out. The idea is that you will position this one, as well as any others you add to the page, in such a way as to complement the design of your site. You will typically have one for every zone or region of your layout, to allow you to update the contents within. For simplicity's sake, we'll keep with the one-zone approach to the site, and will only use the two existing preloaded ContentPlaceHolders, for now at least.

The positioning of ContentPlaceHolder1 in the current layout is such that it encapsulates the main "body" of the site. All the child pages will render their content up into this section. With that, you will notice that the areas outside this control are really important to the way the site will not only look but also act. Setting up your site headers (images, menus, and so on) will be of the utmost importance.
Also, things such as footers, borders, and all the other pieces you will interact with on each page are typically laid out on your Master Page. In the existing example, you will also see the LoginStatus1 control placed directly on the Master Page. This is a great way to share that control, and any code/events you may have tied to it, on every page, without having to duplicate your code.

There are a few things to keep in mind when putting things together on your Master Page. The biggest of these is that your child/content page will inherit aspects of your Master Page. Styles, attributes, and layout are just a few of the pieces you need to keep in mind. Think of the end resulting page as a merger of the Master Page and the child/content page. With that in mind, you can begin to understand that when you add something such as a width to the Master Page, which would be consumed by the children, the child page will be bound by that.

For example, when many people set up their Master Page, they will often use a <table> as their defining container. This is a perfectly workable approach and, in fact, is exactly what's done in the example we are working with. Look at the HTML for the Master Page. You will see that the whole page, in essence, is wrapped in a <table> tag and the ContentPlaceHolder is within a <td>. If you were to apply a style attribute to that table and set its width, the children that fill the ContentPlaceHolder are going to be restricted to working within the confines of that predetermined size. This is not necessarily a bad thing. It will make it easier to work with the child pages in that you don't have to worry about defining their sizes (it's already done for you), and at the same time, it lets you handle all the children from this one location. It can also restrict you for those exact same reasons. You may want a more dynamic approach, and hard-setting these attributes on the Master Page may not be what you are after.
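To make the table-as-container idea concrete, here is a minimal sketch of what such a Master Page layout might look like. The 800px width, and the header and footer cells, are illustrative assumptions rather than part of the sample project; the point is that the fixed width on the table binds every child page rendered into the ContentPlaceHolder:

```aspx
<%@ Master Language="VB" CodeFile="SimpleCMS.master.vb" Inherits="SimpleCMS" %>
<html>
<body>
    <form id="form1" runat="server">
        <%-- hypothetical fixed width: every child page is constrained by it --%>
        <table style="width: 800px;">
            <tr>
                <td><%-- site header, menus, and so on --%></td>
            </tr>
            <tr>
                <td>
                    <asp:ContentPlaceHolder ID="ContentPlaceHolder1" runat="server">
                    </asp:ContentPlaceHolder>
                </td>
            </tr>
            <tr>
                <td><%-- footer --%></td>
            </tr>
        </table>
    </form>
</body>
</html>
```

Removing the style attribute from the table is all it takes to let the children size themselves, which is the more dynamic approach mentioned above.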
These are factors you need to think about before you get too far into the designing of your site. Now that you've got a basic understanding of what Master Pages are and how they can function on a simple scale, let's take a look at the way they are used from the child/content page. Look at Default.aspx (HTML view). You will notice that this page looks distinctly different from a standard page (one with no Master Page). Here is what a page looks like when you first add it, with no Master Page:

<%@ Page Language="VB" AutoEventWireup="false" CodeFile="Default2.aspx.vb" Inherits="Default2" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head runat="server">
    <title>Untitled Page</title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
    </div>
    </form>
</body>
</html>

Compare this to a new Web Form when you select a Master Page:

<%@ Page Language="VB" MasterPageFile="~/SimpleCMS.master" AutoEventWireup="false" CodeFile="Default2.aspx.vb" Inherits="Default2" title="Untitled Page" %>

<asp:Content ID="Content1" ContentPlaceHolderID="head" Runat="Server">
</asp:Content>
<asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server">
</asp:Content>

You will see right away that all the common HTML tags are missing from the page with a Master Page selected. That's because all of these common pieces are being handled in, and rendered from, the Master Page. You will also notice that the page with a Master Page has an additional default attribute added to its page declaration. The title attribute is added so that, when merged and rendered with the Master Page, the page will get the proper title displayed. In addition to the declaration differences and the absence of the common HTML tags, the two ContentPlaceHolder tags we defined on the Master Page are automatically referenced through the use of a Content control.
These Content controls tie directly to the ContentPlaceHolder tags on the Master Page through the ContentPlaceHolderID attribute. This tells the system where to put the pieces when rendering. The basic idea is that anything between the opening and closing tags of the Content control will be rendered out to the page when it is called from a browser.

Themes

Themes are an extension of another idea that, like Master Pages, has kept developers working long hours: how do you quickly change the look and feel of your site for different users or usages? This is where Themes come in. Themes can be thought of as a container where you store your style sheets, images, and anything else that you may want to interchange in the visual pieces of your site. Themes are folders where you put all of these pieces to group them together. While one user may be visiting your site and seeing it one way, another user can be viewing the exact same site, but get a completely different experience.

Let's start off by enabling our site to include the use of Themes. To do this, right-click on the project in the Solution Explorer, select Add ASP.NET Folder, and then choose Theme from the submenu:

The folder will default to Theme1 as its name. I'd suggest that you name this something friendlier though. For now, we will call the Theme "SimpleCMSTheme". However, later on you may want to add another Theme, and giving your folders descriptive names will really help you keep your work organized. You will see that a Theme is really nothing more than a folder for organizing all the pieces.

Let's take a look at what options are available to us. Right-click on the SimpleCMSTheme folder we just created, select Add New Item, and you should see a list similar to this one:

Your items may vary depending on your installation, but the key items here are Skin File and Style Sheet. You may already be familiar with stylesheets if you've done any web design work, but let's do a little refresher just in case.
Stylesheets, among other uses, are a way to organize all the attributes for your HTML tags. This is really the key feature of stylesheets. You will often see them referred to as CSS, which stands for Cascading Style Sheets (a term I'll explain in more detail shortly); it's also the file extension used when adding a stylesheet to your application. Let's go ahead and add a Style Sheet to our site just like in the example above. For our example, we'll use the default name StyleSheet.css that the system selects.

The system will preload your new stylesheet with one element: the body {} element. Let's go ahead and add a simple attribute to this element. Put your cursor between the open "{" and close "}" brackets and press Ctrl+Space, and you should get the IntelliSense menu. This is a list of the attributes that the system acknowledges for addition to your element tag. For our testing, let's select the background-color attribute and give it a value of Blue. It should look like this when you are completed:

body {
    background-color: Blue;
}

Go ahead, save your stylesheet, run the site, and see what happens. If you didn't notice any difference, that's because even though we've now created a Theme for the site and added an attribute to the body element, we've never actually told the site to use this new Theme. Open your web.config and find the <pages…> element. It should be located in the <configuration><system.web> section, as shown next:

Go ahead, select the <pages> element, and put your cursor right after the "s". Press the spacebar and the IntelliSense menu should show up like this:

You will see a long list of available items, but the item we are interested in for now is theme. Select this and you will be prompted to enter a value. Put in the name of the Theme we created earlier:

<pages theme="SimpleCMSTheme">

We've now assigned this Theme to our site with one simple line of text. Save your changes and let's run the site again and see what happens.
The body element we added to our stylesheet is now read by the system and applied appropriately. View the source on your page and look at how this code was applied. The following line is now part of your rendered code:

<link href="App_Themes/SimpleCMSTheme/StyleSheet.css" type="text/css" rel="stylesheet" />

Now that we've seen how to apply a Theme and how to use a stylesheet within it, let's look at one of the other key features of the Theme, the Skin file. A Skin file can be thought of as pre-setting a set of parameters for your controls in your site. This will let you configure multiple attributes, in order to give a certain look and feel to a control, so that you can quickly reuse it at any time. Let's jump right in and take a look at how it works, to give you a better understanding. Right-click on the SimpleCMSTheme folder we created and select the Skin File option. Go ahead and use the default name of SkinFile.skin for this example. You should get a file like this:

<%--
Default skin template. The following skins are provided as examples only.

1. Named control skin. The SkinId should be uniquely defined because
duplicate SkinId's per control type are not allowed in the same theme.

<asp:GridView runat="server" SkinId="gridviewSkin" BackColor="White">
    <AlternatingRowStyle BackColor="Blue" />
</asp:GridView>

2. Default skin. The SkinId is not defined. Only one default
control skin per control type is allowed in the same theme.

<asp:Image runat="server" ImageUrl="~/images/image1.jpg" />
--%>

We now have the default Skin file for our site. Microsoft even provided a great sample here for us. What you see in the example could be translated to say that any GridView added to the site, with either no SkinID specified or with a SkinID of gridviewSkin, will use this skin. In doing so, these GridViews will all use a BackColor of White and an AlternatingRowStyle BackColor of Blue.
By putting this in a Skin file as part of our Theme, we could apply these attributes, along with many others, to all like controls at one time. This can really save you a lot of development time. As we go through designing the rest of the CMS site, we will continue to revisit these Theme principles and expand the contents of them, so it is good to keep their functionality in mind as we go along.
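To see how a named skin from the sample gets picked up, a control simply references it through its SkinID attribute. The GridView below is a hypothetical example, not part of the sample project:

```aspx
<%-- on any page in the themed site; BackColor and the alternating
     row style come from the skin, so they are not set here --%>
<asp:GridView ID="GridView1" runat="server" SkinID="gridviewSkin" />
```

A control that sets no SkinID falls back to the default skin for its control type, if the Theme defines one.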
Set up your own Profile in Mahara: Part 1
Packt
12 Mar 2010
For this article, we are going to concentrate on the Profile tab of the main menu. The very first thing that most Mahara users want to do is customize their own profile space, making it unique to them. We will show you how to do that. For the following examples, we will be working with Janet Norman of PI Inc, showing you how she has configured her profile space. Why not set up your own profile in the demonstration site as you work through the examples? Let's start by looking at profile information.

Profile information

Later you will set up your own profile page, showing yourself and your knowledge off to others in an attractively personalized way. However, before you do that, you need to add some profile information. Your profile information is the first example we will see of "stuff" that you can add to Mahara. When we say stuff, we simply mean information, or items, that can then be viewed later or arranged into web pages, as we will see when we look at your profile page. You are now going to set up some profile information as "stuff" that you can select from and use. We will look at three types of profile information: your profile itself, your profile icons, and your resumé, goals, and skills.

Editing your profile

Let's show you how to edit your profile. Any information you enter into your profile is private from everyone except the Mahara Site Administrators. You will get to choose who can view what, later on in the Mahara process.

Time for action – editing your profile

Click the Profile button on the main menu. You will notice that Mahara has opened the Profile submenu. The Edit Profile tab is selected when you first enter your profile space. Let's take a quick look at Janet's profile. You will notice that the About Me tab is selected. Janet has already entered her name. Say something about yourself! Scroll down to the Introduction section of the About Me page and enter some text.
Here is what Janet Norman typed in:

Whenever you make any changes, click the Save Profile button at the bottom of the page. Next, click the Contact Information tab to the right of the About Me tab. You will see that you are expected to fill out some telephone numbers and addresses. The first thing you should notice is that you can have more than one e-mail address in a Mahara site. To add another e-mail address, click the Add email address link. The e-mail address will receive a confirmation e-mail from the Mahara site, and you will have to go to your e-mail account and follow the link to confirm that it is genuine. You can now use radio buttons to toggle which e-mail address you would like to use as the default for your account. This selection is important because it is at this address that you will receive system messages. You will also notice that you can delete an e-mail address by clicking the small, red-colored cross to the right of the e-mail address. Fill in your contact information on this page. Remember, you don't have to complete all the fields if you don't want to.

Click the tab called Messaging. Here Mahara brings together the messaging tools you are likely to use to engage with people in live text, audio, and video conferences. People can display these contact details to each other in their profile page and other web pages. Enter your contact details for the facilities you use on this page. If you are still not using live conferencing tools, perhaps now is the time to start thinking about it.

Finally, click the tab called General. On this page enter your Occupation and Industry (remember to click the Save Profile button when you have finished). Janet Norman typed this:

What just happened?

You have just completed your profile by entering some information about yourself, including your personal information, what messaging/conference tools you use, and your industry background.
Both the Contact Information and Messaging information are private and will only be seen if you add them to a web page. This is because you don't necessarily want anybody in the Mahara site to be able to see your telephone number and address, for security reasons.

Help! If you have found so far that you wish you had a bit more information about what certain options do, then don't worry! Mahara is very well documented software. On most pages, you will see little question mark icons. If ever in doubt, click on these and you will be given very useful and specific help relating to your area of doubt.

Let's now continue to add some more stuff into our profile, with profile icons.

Profile icons

Profile icons bring your profile to life! They are the first thing that people see about you when they are interacting with you in different areas of the site. Mahara allows you to upload up to five different profile icons. This becomes very useful when you are making web pages out of your stuff. You can present yourself to different audiences in different ways, simply by altering your profile icon. For example, you can display a serious passport photo to your professional work colleagues, a more informal photo to your closest work colleagues, perhaps an avatar for public groups where you would like to be a bit more anonymous, and a picture of you having fun at a party for some of your more social interactions.

Time for action – uploading your profile icons

Let's get a few different profile icons online. Click the Profile submenu button called Profile Icons. Click Browse to find the profile icon you want to upload from your computer or USB stick (or wherever). Don't forget to add an Image Title for your profile icon before you click the upload button. You are allowed to upload up to five profile icons and you can delete any icon at any point. You will need to choose one of your icons as your default profile icon, which should probably be a fairly sensible one.
Janet Norman has already uploaded two profile icons:

What just happened?

You have just uploaded a profile icon to represent yourself in your Mahara site. As we saw in the Time for action section, Janet has uploaded two icons. One of these is an avatar of herself and the other is the company logo. She plans to use the company logo in places where she would like to appear more professional, whereas the avatar will be used more generally.

Make yourself an avatar! An avatar is simply a character or cartoon representation of yourself. If you don't want a passport photograph as your profile icon, an avatar is a good alternative. There are many websites that help you create your own. A few of the most fun include the Simpsons Avatar Maker (http://www.simpsonsmovie.com), DoppelMe (http://www.doppelme.com), and Mr Picassohead (http://www.mrpicassohead.com).

Editing your resumé, goals, and skills

No longer will you need to trawl through ancient hard drives trying to find the resumé you last wrote five years ago. Instead, you can keep your resumé information within your Mahara system and update it whenever you make changes. How impressive will you look when you show your resumé to a prospective employer as a web page rather than on a piece of paper! There are three tabs remaining in the Profile submenu that allow us to add stuff to our site. The remaining things we can add are:

Resumé information: You can record your career and educational achievements.
Goals information: Here you can set yourself personal, academic, and career-related targets for your future.
Skills information: You can record what you perceive to be your personal, academic, and work-related skills.
Managing Articles Using the K2 Content Construction Kit
Packt
27 Oct 2010
Joomla! 1.5 Cookbook

The reader would benefit from the previous article on the Installation and Introduction of K2.

Working with items AKA articles

The power of K2 is in the idea of categorizing your data, thus making it easier to manage. This will be especially helpful as your site grows in content. Many sites are fully article-based and it is not uncommon to see a site with thousands of articles on it. In this section, we'll tackle some more category-specific recipes. You may have noticed by now that data does not show up as typical articles do in Joomla!. In other words, if you added an item and set it published and featured, it may not be displayed on your site because you have not set up a menu item to your K2 content. K2 will need to be added to your menu structure to display the items (articles) in K2. The first recipe will take into account a site that has been in operation for a while and has had K2 added to it.

Getting ready

This section assumes you have installed K2 and have content on your site.

How to do it...

Make sure you have a full backup of the database and the files. Log in as the administrator. Open the K2 Dashboard. If you DID NOT import your content (see the first recipe), do so now. If you have ALREADY imported your content using the Import Joomla! Content button, DO NOT import again. You run the risk of duplicating all your content. Should this happen, you can go in and delete the duplicate items. This can be a time-consuming process. Open Article Manager | Content | Article Manager. Select all your articles from the Article Manager and unpublish them. Open Menu Manager and find your Home menu. Now that we have unpublished the content, we'll need to replace the traditional Joomla! content items with K2 content. Opening the Menu Manager and selecting the Home menu item will show this: As you can see, under K2 there are several choices to display content on your site. I will choose Item | Item as my display mode.
This will show my visitors content in article form. You can pick what works best for you. Now returning to the instructions: After choosing the Menu Item Type, click Save. Open the K2 Dashboard. Select Items. Here is a partial screenshot of the items in our sample site. As you can see, it now starts to take on a bit more traditional Joomla! look. I can choose featured articles, publish them or not, set the order they show up in, the category they belong to, and more. When you import content from Joomla!, the articles retain their identity from the Section and Category configuration. For example, the Joomla! Community Portal listed in the preceding screenshot as belonging to the category Latest has a parent category of News. When you imported the content, sections became the new K2 top-level categories. All existing categories became subcategories of the new top-level categories. As we added K2 to a working site with section and category data already in place, I want to make sure they inherit from our master category. In our sample site, we see the following screen when we open the K2 categories from the K2 Dashboard: We instruct the new top-level categories to follow the master category as the model for the rest. The following instructions will show you how. Open the K2 Dashboard. Click Categories. Open your imported top-level categories; for this site they are About Joomla! and News. Each of these has sub-categories. Click About Joomla! (or your equivalent). Change the Inherit parameter options from category to MASTER CATEGORY USE AS INHERIT ONLY. Make sure the Parent category stays set to --None--. Click Save. When done, it will look like this:

Extra fields

Did you notice the Associated "Extra Fields Group" is set to - None -? You can change this parent category group to use an extra fields group and still keep the master category parameters. Each of the subcategories will inherit from the master category.
By doing this, you can still control all the categories' parameters simply by changing the master category.

How it works...

The category system as described here for K2 is a giant access-control system, allowing you the flexibility to structure your site and data as you need. It also offers a means to control the 'look and feel' of the articles from a central place. When you import a Joomla! site into K2, you bring all the sections, content, articles, and other associated parts into it. Sections become new parent categories and the old categories become subcategories. This can be a bit confusing at first. One suggestion is to write out on paper what you want the site to look like, and then lay out your categories. You might find that the structure you had can be made more user-friendly using K2, and you will want to change it. This category system offers you nearly unlimited means to nest articles. In essence, a category can have unlimited categories under it. There is a limit to this in terms of management, but you get the idea.

There's more...

Using tags in K2 will give you the ability to improve your Search Engine Optimization (SEO) on your site. Additionally, the use of tags will allow you to give your users the ability to follow the tags to other articles. In this section we'll review how to use tags in K2. Tags are keywords or terms that are assigned to your content. This enables your visitors to quickly locate what they need by one-word descriptions.

Using Tags in K2

Tags can be created before an article is written or on the fly. I prefer on the fly, as the tags will then match the article. You can think of a tag almost as a dynamic index. Every time a tag is added to an article, it will show up in the K2 Tag Cloud module if you are using it. The more a single tag, such as Joomla!, is used in the content, the larger it appears in the K2 Tag Cloud module. K2 Tag Clouds can benefit your search engine optimization and serve as a navigational element.
Here is an example of our K2 Tag Cloud:

This is an image of our K2 Tag Cloud module. The more often a tag is added to an article, the larger it appears.

Setting up your site for Tag Clouds

K2 installs the K2 Tools module by default. The module has many functions, but for our purposes here, we'll use its Tag Cloud function. Log in to the Administrator Console of Joomla!. Click Extensions | Module Manager. Click New to create a new module. Find this for your new item: Once in there, give it a name and select its module location. On the right, under Parameters, pull down the Select module functionality drop-down list as follows: Select Tag Cloud as shown in the preceding screenshot. Leave all the root categories set to none; this will enable K2 to pull in all the categories. Click Save. This particular module has many functions, and you can set up a new module to use any of the great tools built into it. Next you will want to add some tags to articles. As I said at the beginning of this article, you have two different ways to do this. You may add them to the article or you may add them in the Tag Manager. Let's quickly review the latter method. Open the K2 Dashboard. Click Tags. You may see a list of tags there. If you wish to delete them, simply check the ones you want to remove and click Delete in the upper right-hand corner. Otherwise just leave them. Click New, which will open the Details box. Fill in the tag, make sure it's published, and click Save. This is an example of a filled-out tag box (before save).

Adding Tags on the fly

This model allows you to tag the content as soon as you create it. If there are already tags available, such as those from the previous step, then you can add them. Open the K2 Dashboard. Click Items. Select an item or click New to create an item. The Tags field will be blank; you can start to type in a tag, such as K2 Content Creation Kit (as shown in the preceding screenshot). If it exists, then it will be available for you to click and add.
If there are no tags available, then simply type one in and press Return or add a comma. Here is an example item with tags. Here we have four tags: Security x, PHP x, Joomla x, K2 Content Creation Kit x. Any item (article) that has these tags will be easily found by both users and search bots. Let's see how our Tag Cloud looks now: You probably notice the changes, especially the addition of the new tag K2 Content Creation Kit. Clicking on that tag will yield two articles, and clicking on the Security tag yields three. Search engines can follow these links to better categorize your site. Users can get a sense of what is more important in terms of content on your site, and it helps them navigate. Closing on this, I strongly suggest you spend time picking tags that are important on your site and relevant to its purpose.
What is OpenLayers?
Packt
13 May 2013
(For more resources related to this topic, see here.)

As Christopher Schmidt, one of the main project developers, wrote on the OpenLayers users mailing list: OpenLayers is not designed to be usable out of the box. It is a library designed to help you to build applications, so it's your job as an OpenLayers user to build the box. Don't be scared! Building the box can be very easy and fun! The only two things you actually need to write your code and see it up and running are a text editor and a common web browser. With these tools you can create your Hello World web map, without downloading anything and writing no more than a basic HTML template and a dozen lines of JavaScript code. Going forward, step by step, you will realize that OpenLayers is not only easy to learn but also very powerful. So, whether you want to embed a simple web map in your website or you want to develop an advanced mash-up application by importing spatial data from different sources and in different formats, OpenLayers will probably prove to be a very good choice.

The strengths of OpenLayers are many and reside, first of all, in its compliance with the Open Geospatial Consortium (OGC) standards, making it capable of working together with all major and most common spatial data servers. This means you can connect your client application to web services published as WMS, WFS, or GeoRSS, add data from a bunch of raster and vector file formats such as GeoJSON and GML, and organize them in layers to create your original web mapping applications. From what has been said until now, it is clear that OpenLayers is incredibly flexible in reading spatial data, but another very important characteristic is that it is also very effective in helping you optimize the performance of your web maps, by letting you easily define the strategies with which spatial data are requested and (for vectors) imported on the client side. Fast maps are what users expect, and OpenLayers makes it possible to obtain them!
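As a rough illustration of how little code a first map needs, here is a sketch of such a "Hello World" page. It assumes the classic OpenLayers 2.x API and the project's hosted OpenLayers.js build, so treat the script URL and class names as assumptions to verify against the version you actually use:

```html
<html>
<head>
    <!-- assumption: the hosted OpenLayers 2.x build -->
    <script src="http://openlayers.org/api/OpenLayers.js"></script>
</head>
<body onload="init();">
    <div id="map" style="width: 512px; height: 256px;"></div>
    <script>
        function init() {
            var map = new OpenLayers.Map('map');      // bind the map to the div
            map.addLayer(new OpenLayers.Layer.OSM()); // OpenStreetMap base layer
            map.zoomToMaxExtent();                    // start fully zoomed out
        }
    </script>
</body>
</html>
```

Saving this as an .html file and opening it in a browser is all it takes; no server-side setup or download is required, which is exactly the point made above.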
As we already said at the beginning, web maps created with OpenLayers are interactive, so users can (and want to) do more than simply look at your creation. To build this interactivity, OpenLayers provides you with a variety of controls that you can make available to your users. Tools to pan, zoom, or query the map give users the possibility to actually explore the content of the map and the spatial data displayed on it. We could say that controls bring maps to life, and you will learn how to take advantage of them in a few easy steps. Fast loading and interactivity are important, but in many cases a crucial aspect in the process of developing a web map is to make it instantly readable. What use is it to build web maps if their intended users need to spend too much time before understanding what they are looking at? Fortunately, OpenLayers comes with a wide range of possibilities for styling features in vector layers. You can choose between different vector features and rendering strategies, and customize every aspect of their graphics to make your maps expressive, actually "talking" and, why not, cool! Finally, as you probably remember, OpenLayers is pure JavaScript, and JavaScript is also the language of a lot of fantastic Rich Internet Application (RIA) frameworks. Mixing OpenLayers and one of these frameworks opens a wide range of possibilities to obtain very advanced and attractive web mapping applications.

Resources for Article:

Further resources on this subject: Getting Started with OpenLayers [Article], OpenLayers: Overview of Vector Layer [Article], Getting Started with OpenStreetMap [Article]
Manipulation of DOM Objects using Firebug
Packt
16 Apr 2010
Inspecting DOM

The DOM inspector allows for full, in-place editing of our document structure, not just text nodes. In the DOM inspector, Firebug auto-completes property values when we press the Tab key. The following are the steps to inspect an element under the DOM tab:

Press Ctrl+Shift+C, the shortcut key to open Firebug in inspect mode. Let's move the mouse pointer over the HTML element that we want to inspect and click on that element. The HTML script of that element will be shown in Firebug's HTML tab. Right-clicking on the selected DOM element will open a context menu. Let's select the Inspect in DOM Tab option from the context menu. As soon as we do that, Firebug will take us to its DOM tab.

Filtering properties, functions, and constants

Many times we want to analyze whether a function written by us is associated with an HTML element. Firebug provides us an easy way to figure out whether an event, listener, function, property, or constant is associated with a particular element. The DOM tab is not only a tab but also a drop-down menu. When we click on the down arrow icon on the DOM tab, Firebug will show a drop-down list from which one can select the filtering options and inspect the element thoroughly. The following are the options provided by this menu:

Show User-defined Properties
Show User-defined Functions
Show DOM Properties
Show DOM Functions
Show DOM Constants
Refresh

There are two kinds of objects and functions: those that are part of the standard DOM, and those that are part of our own JavaScript code. Firebug can tell the difference, and shows us our own script-created objects and functions in bold at the top of the list. Text that is bold and green is a user-defined function. Text that is bold and black is a user-defined property. Text that is normal sized and green is a DOM-defined function. Text that is normal sized and black is a DOM-defined property. Upper case (capital) letters denote DOM constants.
We can see the actual colored depiction in Firebug's DOM tab. In the following code, the onkeyup() event is a user-defined function for <input/> and calculateFactorial() is a user-defined function for the current window. To test this code, let's type the code in an HTML file, open it with Firefox, and enable Firebug by pressing the F12 key. Inspect the input element in the DOM.

<html>
<head>
<script>
function calculateFactorial(num, event) {
  // Only calculate when the Enter key (keyCode 13) is released
  if (event.keyCode != 13) {
    return;
  }
  var fact = 1;
  for (var i = 1; i <= num; i++) {
    fact *= i;
  }
  alert("The Factorial of " + num + " is: " + fact);
}
</script>
<title>code_6_1.html</title>
</head>
<body>
<font face="monospace">
Enter a number to calculate its factorial
<input type="text" name="searchBox" onkeyup="calculateFactorial(this.value, event)"/>
</font>
</body>
</html>

Intuitive DOM element summaries

There are many different kinds of DOM and JavaScript objects, and Firebug does its best to visually distinguish each, while providing as much information as possible. When appropriate, objects include brief summaries of their contents so that we can see what's there without having to click. Objects are color coded so that HTML elements, numbers, strings, functions, arrays, objects, and nulls are all easy to distinguish.
Read more
  • 0
  • 0
  • 3508

article-image-advanced-less-coding
Packt
09 Feb 2015
40 min read
Save for later

Advanced Less Coding

Packt
09 Feb 2015
40 min read
In this article by Bass Jobsen, author of the book Less Web Development Cookbook, you will learn:

- Giving your rules importance with the !important statement
- Using mixins with multiple parameters
- Using duplicate mixin names
- Building a switch leveraging argument matching
- Avoiding individual parameters to leverage the @arguments variable
- Using the @rest... variable to use mixins with a variable number of arguments
- Using mixins as functions
- Passing rulesets to mixins
- Using mixin guards (as an alternative for the if…else statements)
- Building loops leveraging mixin guards
- Applying guards to the CSS selectors
- Creating color contrasts with Less
- Changing the background color dynamically
- Aggregating values under a single property

(For more resources related to this topic, see here.)

Giving your rules importance with the !important statement

The !important statement in CSS can be used to get some style rules always applied, no matter where those rules appear in the CSS code. In Less, the !important statement can be applied to mixins and variable declarations too.

Getting ready

You can write the Less code for this recipe with your favorite editor. After that, you can use the command-line lessc compiler to compile the Less code. Finally, you can inspect the compiled CSS code to see where the !important statements appear. To see the real effect of the !important statements, you should compile the Less code client side, with the client-side compiler less.js, and watch the effect in your web browser.
How to do it…

1. Create an important.less file that contains code like the following snippet:

   .mixin() {
     color: red;
     font-size: 2em;
   }
   p {
     &.important {
       .mixin() !important;
     }
     &.unimportant {
       .mixin();
     }
   }

2. After compiling the preceding Less code with the command-line lessc compiler, you will find the following output produced in the console:

   p.important {
     color: red !important;
     font-size: 2em !important;
   }
   p.unimportant {
     color: red;
     font-size: 2em;
   }

3. You can, for instance, use the following snippet of HTML code to see the effect of the !important statements in your browser:

   <p class="important" style="color:green;font-size:4em;">important</p>
   <p class="unimportant" style="color:green;font-size:4em;">unimportant</p>

   Your HTML document should also include the important.less and less.js files, as follows:

   <link rel="stylesheet/less" type="text/css" href="important.less">
   <script src="less.js" type="text/javascript"></script>

Finally, the result will look like that shown in the following screenshot:

How it works…

In Less, you can use the !important statement not only for properties, but also with mixins. When !important is set for a certain mixin, all properties of this mixin will be declared with the !important statement. You can easily see this effect when inspecting the properties of the p.important selector; both the color and font-size properties get the !important statement after compiling the code.

There's more…

You should use the !important statements with care, as the only way to overrule an !important statement is to use another !important statement. The !important statement overrules the normal CSS cascading and specificity rules, and even the inline styles. Any incorrect or unnecessary use of the !important statements in your Less (or CSS) code will make your code messy and difficult to maintain.
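To see how that escalation plays out, here is a small hypothetical sketch (the selectors and colors are illustrative, not taken from this recipe): once one rule carries !important, even a more specific selector cannot override it without an !important of its own.

```less
// Illustrative sketch: a plain rule loses against !important,
// no matter how specific the selector is.
p.warning {
  color: red !important;
}
// Higher specificity, but still overruled by the rule above:
body div p.warning {
  color: orange;
}
// Only another !important statement wins, which starts
// an escalation that is hard to maintain:
body div p.warning {
  color: orange !important;
}
```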
In most cases where you try to overrule a style rule, you should give preference to selectors with a higher specificity and not use the !important statements at all. With Less v2, you can also use the !important statement when declaring your variables. A declaration with the !important statement can look like the following code:

@main-color: darkblue !important;

Using mixins with multiple parameters

In this section, you will learn how to use mixins with more than one parameter.

Getting ready

For this recipe, you will have to create a Less file, for instance, mixins.less. You can compile this mixins.less file with the command-line lessc compiler.

How to do it…

1. Create the mixins.less file and write down the following Less code into it:

   .mixin(@color; @background: black) {
     background-color: @background;
     color: @color;
   }
   div {
     .mixin(red; white);
   }

2. Compile the mixins.less file by running the following command in the console:

   lessc mixins.less

3. Inspect the CSS code output on the console, and you will find that it looks like the following:

   div {
     background-color: #ffffff;
     color: #ff0000;
   }

How it works…

In Less, parameters are either semicolon-separated or comma-separated. Using a semicolon as the separator is preferred because the usage of the comma is ambiguous: the comma is not only used to separate parameters, but also to define a csv list, which can be an argument itself. The mixin in this recipe accepts two arguments. The first parameter sets the @color variable, while the second parameter sets the @background variable and has a default value set to black. In the argument list, default values are defined by writing a colon behind the variable's name, followed by the value. Parameters with a default value are optional when calling the mixins.
So the .mixin mixin in this recipe can also be called with the following line of code:

.mixin(red);

Because the second argument has a default value set to black, the .mixin(red); call also matches the .mixin(@color; @background: black){} mixin, as described in the Building a switch leveraging argument matching recipe. Only variables set as parameters of a mixin are set inside the scope of the mixin. You can see this when compiling the following Less code:

.mixin(@color: blue) {
  color2: @color;
}
@color: red;
div {
  color1: @color;
  .mixin;
}

The preceding Less code compiles into the following CSS code:

div {
  color1: #ff0000;
  color2: #0000ff;
}

As you can see in the preceding example, setting @color inside the mixin to its default value does not influence the value of @color assigned in the main scope. So lazy loading is applied only on variables inside the same scope; nevertheless, you will have to note that variables assigned in a mixin will leak into the caller. The leaking of variables can be used to use mixins as functions, as described in the Using mixins as functions recipe.

There's more…

Consider the mixin definition in the following Less code:

.mixin(@font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;) {
  font-family: @font-family;
}

The semicolon added at the end of the list prevents the fonts after the "Helvetica Neue" font name in the csv list from being read as arguments for this mixin. If the argument list contains any semicolon, the Less compiler will use semicolons as the separator. In the CSS3 specification, among others, the border and background shorthand properties accept csv. Also, note that the Less compiler allows you to use named parameters when calling mixins.
This can be seen in the following Less code that uses the @color variable as a named parameter:

.mixin(@width: 50px; @color: yellow) {
  width: @width;
  color: @color;
}
span {
  .mixin(@color: green);
}

The preceding Less code will compile into the following CSS code:

span {
  width: 50px;
  color: #008000;
}

Note that in the preceding code, #008000 is the hexadecimal representation for the green color. When using named parameters, their order does not matter.

Using duplicate mixin names

When your Less code contains one or more mixins with the same name, the Less compiler compiles them all into the CSS code. If the mixin has parameters (see the Building a switch leveraging argument matching recipe), the number of parameters will also be matched.

Getting ready

Use your favorite text editor to create and edit the Less files used in this recipe.

How to do it…

1. Create a file called mixins.less that contains the following Less code:

   .mixin() {
     height: 50px;
   }
   .mixin(@color) {
     color: @color;
   }
   .mixin(@width) {
     color: green;
     width: @width;
   }
   .mixin(@color; @width) {
     color: @color;
     width: @width;
   }
   .selector-1 {
     .mixin(red);
   }
   .selector-2 {
     .mixin(red; 500px);
   }

2. Compile the Less code from step 1 by running the following command in the console:

   lessc mixins.less

3. After running the command from the previous step, you will find the following CSS code output on the console:

   .selector-1 {
     color: #ff0000;
     color: green;
     width: #ff0000;
   }
   .selector-2 {
     color: #ff0000;
     width: 500px;
   }

How it works…

The .selector-1 selector contains the .mixin(red); call. The .mixin(red); call does not match the .mixin(){} mixin, as the number of arguments does not match. On the other hand, both .mixin(@color){} and .mixin(@width){} match the call. For this reason, both these mixins are compiled into the CSS code.
The .mixin(red; 500px); call inside the .selector-2 selector matches only the .mixin(@color; @width){} mixin, so all other mixins with the same .mixin name will be ignored by the compiler when building the .selector-2 selector. The compiled CSS code for the .selector-1 selector also contains the width: #ff0000; property value, as the .mixin(@width){} mixin matches the call too. Setting the width property to a color value makes no sense in CSS; the Less compiler does not check for this kind of error. In this recipe, you could prevent it by rewriting the .mixin(@width){} mixin with a guard, as follows: .mixin(@width) when (ispixel(@width)){}.

There's more…

Maybe you have noted that the .selector-1 selector contains two color properties. The Less compiler does not remove duplicate properties unless the value is also the same. The CSS code sometimes should contain duplicate properties in order to provide a fallback for older browsers.

Building a switch leveraging argument matching

A Less mixin is compiled into the final CSS code only when the number of arguments of the caller and the mixin match. This feature of Less can be used to build switches. Switches enable you to change the behavior of a mixin conditionally. In this recipe, you will create a mixin, or better yet, three mixins with the same name.

Getting ready

Use the command-line lessc compiler to evaluate the effect of this mixin. The compiler will output the final CSS to the console. You can use your favorite text editor to edit the Less code. This recipe makes use of browser-vendor prefixes, such as the -ms-transform prefix. CSS3 introduced vendor-specific rules, which offer you the possibility to write some additional CSS applicable to only one browser. These rules allow browsers to implement proprietary CSS properties that would otherwise have no working standard (and might never actually become the standard).
To find out which prefixes should be used for a certain property, you can consult the Can I use database (available at http://caniuse.com/).

How to do it…

1. Create a switch.less Less file, and write down the following Less code into it:

   @browserversion: ie9;
   .mixin(ie9; @degrees) {
     transform: rotate(@degrees);
     -ms-transform: rotate(@degrees);
     -webkit-transform: rotate(@degrees);
   }
   .mixin(ie10; @degrees) {
     transform: rotate(@degrees);
     -webkit-transform: rotate(@degrees);
   }
   .mixin(@_; @degrees) {
     transform: rotate(@degrees);
   }
   div {
     .mixin(@browserversion; 70deg);
   }

2. Compile the Less code from step 1 by running the following command in the console:

   lessc switch.less

3. Inspect the compiled CSS code that has been output to the console, and you will find that it looks like the following code:

   div {
     -ms-transform: rotate(70deg);
     -webkit-transform: rotate(70deg);
     transform: rotate(70deg);
   }

4. Finally, run the following command and you will find that the compiled CSS will indeed differ from that of step 3:

   lessc --modify-var="browserversion=ie10" switch.less

   Now the compiled CSS code will look like the following code snippet:

   div {
     -webkit-transform: rotate(70deg);
     transform: rotate(70deg);
   }

How it works…

The switch in this recipe is the @browserversion variable, which can easily be changed just before compiling your code. Instead of changing your code, you can also set the --modify-var option of the compiler. Depending on the value of the @browserversion variable, the mixins that match will be compiled, and the other mixins will be ignored by the compiler. The .mixin(ie10; @degrees){} mixin matches the .mixin(@browserversion; 70deg); call only when the value of the @browserversion variable is equal to ie10. Note that the first ie10 argument of the mixin will be used only for matching (argument = ie10) and does not assign any value. You will note that the .mixin(@_; @degrees){} mixin will match each call no matter what the value of the @browserversion variable is.
The .mixin(ie9; 70deg); call also compiles the .mixin(@_; @degrees){} mixin. Although this should result in the transform: rotate(70deg); property being output twice, you will find it only once. Since the property gets exactly the same value twice, the compiler outputs the property only once.

There's more…

Not only switches, but also mixin guards, as described in the Using mixin guards (as an alternative for the if…else statements) recipe, can be used to set some properties conditionally. Current versions of Less also support JavaScript evaluation; JavaScript code put between backquotes will be evaluated by the compiler, as can be seen in the following Less code example:

@string: "example in lower case";
p {
  &:after {
    content: "`@{string}.toUpperCase()`";
  }
}

The preceding code will be compiled into CSS, as follows:

p:after {
  content: "EXAMPLE IN LOWER CASE";
}

When using client-side compiling, JavaScript evaluation can also be used to get some information from the browser environment, such as the screen width (screen.width), but as mentioned already, you should not use client-side compiling for production environments. Because you can't be sure that future versions of Less will still support JavaScript evaluation, and because alternative compilers not written in JavaScript cannot evaluate the JavaScript code, you should always try to write your Less code without JavaScript.

Avoiding individual parameters to leverage the @arguments variable

In the Less code, the @arguments variable has a special meaning inside mixins. The @arguments variable contains all arguments passed to the mixin. In this recipe, you will use the @arguments variable together with the CSS url() function to set a background image for a selector.

Getting ready

You can inspect the compiled CSS code in this recipe after compiling the Less code with the command-line lessc compiler. Alternatively, you can inspect the results in your browser using the client-side less.js compiler.
When inspecting the result in your browser, you will also need an example image that can be used as a background image. Use your favorite text editor to create and edit the Less files used in this recipe.

How to do it…

1. Create a background.less file that contains the following Less code:

   .background(@color; @image; @repeat: no-repeat; @position: top right) {
     background: @arguments;
   }
   div {
     .background(#000; url("./images/bg.png"));
     width: 300px;
     height: 300px;
   }

2. Finally, inspect the compiled CSS code, and you will find that it looks like the following code snippet:

   div {
     background: #000000 url("./images/bg.png") no-repeat top right;
     width: 300px;
     height: 300px;
   }

How it works…

The four parameters of the .background() mixin are assigned as a space-separated list to the @arguments variable. After that, the @arguments variable can be used to set the background property. Other CSS properties also accept space-separated lists, for example, the margin and padding properties. Note that the @arguments variable does not only contain the parameters that have been set explicitly by the caller, but also the parameters set by their default value. You can easily see this when inspecting the compiled CSS code of this recipe. The .background(#000; url("./images/bg.png")); caller doesn't set the @repeat or @position argument, but you will find their values in the compiled CSS code.

Using the @rest... variable to use mixins with a variable number of arguments

As you can also see in the Using mixins with multiple parameters and Using duplicate mixin names recipes, only matching mixins are compiled into the final CSS code. In some situations, you don't know the number of parameters, or you want to use mixins for some style rules no matter the number of parameters. In these situations, you can use the special ... syntax or the @rest... variable to create mixins that match independent of the number of parameters.
Getting ready

You will have to create a file called rest.less, and this file can be compiled with the command-line lessc compiler. You can edit the Less code with your favorite editor.

How to do it…

1. Create a file called rest.less that contains the following Less code:

   .mixin(@a...) {
     .set(@a) when (iscolor(@a)) {
       color: @a;
     }
     .set(@a) when (length(@a) = 2) {
       margin: @a;
     }
     .set(@a);
   }
   p {
     .mixin(red);
   }
   p {
     .mixin(2px; 4px);
   }

2. Compile the rest.less file from step 1 using the following command in the console:

   lessc rest.less

3. Inspect the CSS code output to the console, which will look like the following code:

   p {
     color: #ff0000;
   }
   p {
     margin: 2px 4px;
   }

How it works…

The special ... syntax (three dots) can be used as an argument for a mixin. Mixins with the ... syntax in their argument list match any number of arguments. When you put a variable name starting with an @ in front of the ... syntax, all parameters are assigned to that variable. You will find a list of examples of mixins that use the special ... syntax as follows:

- .mixin(@a; ...){}: This mixin matches 1-N arguments
- .mixin(...){}: This mixin matches 0-N arguments; note that mixin() without any argument matches only 0 arguments
- .mixin(@a: 1; @rest...){}: This mixin matches 0-N arguments; note that the first argument is assigned to the @a variable, and all other arguments are assigned as a space-separated list to @rest

Because the @rest... variable contains a space-separated list, you can use the Less built-in list functions.

Using mixins as functions

People who are used to functional programming expect a mixin to change or return a value. In this recipe, you will learn to use mixins as a function that returns a value. In this recipe, the value of the width property inside the div.small and div.big selectors will be set to the length of the longest side of a right-angled triangle, based on the length of the two shortest sides of this triangle, using the Pythagoras theorem.
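Returning briefly to the @rest... recipe above: the built-in list functions can read individual items from the @rest... list. The following is a hypothetical sketch; the .shorthand mixin and its property choices are illustrative, assuming the length() and extract() functions available since Less 1.5.

```less
// Illustrative sketch: @rest holds all arguments as a space-separated
// list, so the list functions can inspect and pick them apart.
.shorthand(@rest...) {
  // length() returns the number of list items
  arguments-passed: length(@rest);
  // extract() returns the item at the given (1-based) index
  border-color: extract(@rest, 1);
  border-style: extract(@rest, 2);
}
div {
  .shorthand(red; solid);
}
```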
Getting ready

The best and easiest way to inspect the results of this recipe will be compiling the Less code with the command-line lessc compiler. You can edit the Less code with your favorite editor.

How to do it…

1. Create a file called pythagoras.less that contains the following Less code:

   .longestSide(@a, @b) {
     @length: sqrt(pow(@a, 2) + pow(@b, 2));
   }
   div {
     &.small {
       .longestSide(3, 4);
       width: @length;
     }
     &.big {
       .longestSide(6, 7);
       width: @length;
     }
   }

2. Compile the pythagoras.less file from step 1 using the following command in the console:

   lessc pythagoras.less

3. Inspect the CSS code output on the console after compilation, and you will see that it looks like the following code snippet:

   div.small {
     width: 5;
   }
   div.big {
     width: 9.21954446;
   }

How it works…

Variables set inside a mixin become available inside the scope of the caller. This specific behavior of the Less compiler is used in this recipe to set the @length variable and to make it available in the scope of the div.small and div.big selectors, the callers. As you can see, you can use the mixin in this recipe more than once. With every call, a new scope is created and both selectors get their own value of @length. Also, note that variables set inside the mixin do not overwrite variables with the same name that are set in the caller itself. Take, for instance, the following code:

.mixin() {
  @variable: 1;
}
.selector {
  @variable: 2;
  .mixin;
  property: @variable;
}

The preceding code will compile into the CSS code, as follows:

.selector {
  property: 2;
}

There's more…

Note that variables won't leak from the mixins to the caller in the following two situations:

- Inside the scope of the caller, a variable with the same name has already been defined (lazy loading will be applied)
- The variable has been previously defined by another mixin call (lazy loading will not be applied)

Passing rulesets to mixins

Since version 1.7, Less allows you to pass complete rulesets as an argument for mixins.
Rulesets, including Less code, can be assigned to variables and passed into mixins, which also allows you to wrap blocks of CSS code defined inside mixins. In this recipe, you will learn how to do this.

Getting ready

For this recipe, you will have to create a Less file called keyframes.less, for instance. You can compile this keyframes.less file with the command-line lessc compiler. Finally, inspect the CSS code output to the console.

How to do it…

1. Create the keyframes.less file, and write down the following Less code into it:

   // Keyframes
   .keyframe(@name; @rules) {
     @-webkit-keyframes @name {
       @rules();
     }
     @-o-keyframes @name {
       @rules();
     }
     @keyframes @name {
       @rules();
     }
   }
   .keyframe(progress-bar-stripes; {
     from { background-position: 40px 0; }
     to   { background-position: 0 0; }
   });

2. Compile the keyframes.less file by running the following command in the console:

   lessc keyframes.less

3. Inspect the CSS code output on the console, and you will find that it looks like the following code:

   @-webkit-keyframes progress-bar-stripes {
     from {
       background-position: 40px 0;
     }
     to {
       background-position: 0 0;
     }
   }
   @-o-keyframes progress-bar-stripes {
     from {
       background-position: 40px 0;
     }
     to {
       background-position: 0 0;
     }
   }
   @keyframes progress-bar-stripes {
     from {
       background-position: 40px 0;
     }
     to {
       background-position: 0 0;
     }
   }

How it works…

Rulesets wrapped between curly brackets are passed as an argument to the mixin. A mixin's arguments are assigned to a (local) variable. When you assign the ruleset to the @rules variable, you are enabled to call @rules(); to "mix in" the ruleset. Note that the passed rulesets can contain Less code, such as built-in functions, too.
You can see this by compiling the following Less code:

.mixin(@color; @rules) {
  @othercolor: green;
  @media (print) {
    @rules();
  }
}
p {
  .mixin(red; {
    color: lighten(@othercolor, 20%);
    background-color: darken(@color, 20%);
  });
}

The preceding Less code will compile into the following CSS code:

@media (print) {
  p {
    color: #00e600;
    background-color: #990000;
  }
}

A group of CSS properties, nested rulesets, or media declarations stored in a variable is called a detached ruleset. Less offers support for detached rulesets since version 1.7.

There's more…

As you could see in the last example in the previous section, rulesets passed as an argument can be wrapped in @media declarations too. This enables you to create mixins that, for instance, wrap any passed ruleset into a @media declaration or class. Consider the example Less code shown here:

.smallscreens-and-olderbrowsers(@rules) {
  .lt-ie9 & {
    @rules();
  }
  @media (min-width: 768px) {
    @rules();
  }
}
nav {
  float: left;
  width: 20%;
  .smallscreens-and-olderbrowsers({
    float: none;
    width: 100%;
  });
}

The preceding Less code will compile into the CSS code, as follows:

nav {
  float: left;
  width: 20%;
}
.lt-ie9 nav {
  float: none;
  width: 100%;
}
@media (min-width: 768px) {
  nav {
    float: none;
    width: 100%;
  }
}

The style rules wrapped in the .lt-ie9 class can, for instance, be used with Paul Irish's <html> conditional classes technique or Modernizr. Now you can call the .smallscreens-and-olderbrowsers(){} mixin anywhere in your code and pass any ruleset to it. All passed rulesets get wrapped in the .lt-ie9 class or the @media (min-width: 768px) declaration now. When your requirements change, you possibly have to change only these wrappers once.

Using mixin guards (as an alternative for the if…else statements)

Most programmers are used to and familiar with the if…else statements in their code. Less does not have these if…else statements.
Less tries to follow the declarative nature of CSS when possible and for that reason uses guards for matching expressions. In Less, conditional execution has been implemented with guarded mixins. Guarded mixins use the same logical and comparison operators as the @media feature in CSS does.

Getting ready

You can compile the Less code in this recipe with the command-line lessc compiler. Also, check the compiler options; you can find them by running the lessc command in the console without any argument. In this recipe, you will have to use the --modify-var option.

How to do it…

1. Create a Less file named guards.less, which contains the following Less code:

   @color: white;
   .mixin(@color) when (luma(@color) >= 50%) {
     color: black;
   }
   .mixin(@color) when (luma(@color) < 50%) {
     color: white;
   }
   p {
     .mixin(@color);
   }

2. Compile the Less code in guards.less using the command-line lessc compiler with the following command entered in the console:

   lessc guards.less

3. Inspect the output written on the console, which will look like the following code:

   p {
     color: black;
   }

4. Compile the Less code with different values set for the @color variable and see how the output changes. You can use the command as follows:

   lessc --modify-var="color=green" guards.less

   The preceding command will produce the following CSS code:

   p {
     color: white;
   }

   Now, refer to the following command:

   lessc --modify-var="color=lightgreen" guards.less

   With the color set to light green, it will again produce the following CSS code:

   p {
     color: black;
   }

How it works…

The use of guards to build an if…else construct can easily be compared with the switch expression that can be found in programming languages such as PHP, C#, and pretty much any other object-oriented programming language. Guards are written with the when keyword followed by one or more conditions. When the condition(s) evaluate true, the code will be mixed in.
Also note that the arguments should match, as described in the Building a switch leveraging argument matching recipe, before the mixin gets compiled. The syntax and logic of guards is the same as that of the CSS @media feature. A condition can contain the following comparison operators: >, >=, =, =<, and <. Additionally, the keyword true is the only value that evaluates as true. Two or more conditions can be combined with the and keyword, which is equivalent to the logical and operator, or with a comma as the logical or operator. The following code shows an example of combined conditions:

.mixin(@a; @color) when (@a < 10) and (luma(@color) >= 50%) {
}

The following code contains the not keyword that can be used to negate conditions:

.mixin(@a; @color) when not (luma(@color) >= 50%) {
}

There's more…

Inside the guard conditions, (global) variables can also be compared. The following Less code example shows you how to use variables inside guards:

@a: 10;
.mixin() when (@a >= 10) {}

The preceding code will also enable you to compile different CSS versions from the same code base when using the --modify-var option of the compiler. The effect of the guarded mixin described in the preceding code is very similar to the mixins built in the Building a switch leveraging argument matching recipe. Note that in the preceding example, variables in the mixin's scope overwrite variables from the global scope, as can be seen when compiling the following code:

@a: 10;
.mixin(@a) when (@a < 10) {
  property: @a;
}
selector {
  .mixin(5);
}

The preceding Less code will compile into the following CSS code:

selector {
  property: 5;
}

When you compare guarded mixins with the if…else constructs or switch expressions in other programming languages, you will also need a manner to create a conditional for the default situations.
The built-in Less default() function can be used to create such a default conditional that is functionally equal to the else statement in the if…else constructs or the default statement in the switch expressions. The default() function returns true when no other mixins match (matching also takes the guards into account) and can be evaluated as the guard condition.

Building loops leveraging mixin guards

Mixin guards, as described among others in the Using mixin guards (as an alternative for the if…else statements) recipe, can also be used to dynamically build a set of CSS classes. In this recipe, you will learn how to do this.

Getting ready

You can use your favorite editor to create the Less code in this recipe.

How to do it…

1. Create a shadesofblue.less Less file, and write down the following Less code into it:

   .shadesofblue(@number; @blue: 100%) when (@number > 0) {
     .shadesofblue(@number - 1, @blue - 10%);
     @classname: e(%(".color-%a", @number));
     @{classname} {
       background-color: rgb(0, 0, @blue);
       height: 30px;
     }
   }
   .shadesofblue(10);

2. You can, for instance, use the following snippet of HTML code to see the effect of the compiled Less code from the preceding step:

   <div class="color-1"></div>
   <div class="color-2"></div>
   <div class="color-3"></div>
   <div class="color-4"></div>
   <div class="color-5"></div>
   <div class="color-6"></div>
   <div class="color-7"></div>
   <div class="color-8"></div>
   <div class="color-9"></div>
   <div class="color-10"></div>

3. Your HTML document should also include the shadesofblue.less and less.js files, as follows:

   <link rel="stylesheet/less" type="text/css" href="shadesofblue.less">
   <script src="less.js" type="text/javascript"></script>

Finally, the result will look like that shown in this screenshot:

How it works…

The CSS classes in this recipe are built with recursion. The recursion here is done by the .shadesofblue(){} mixin calling itself with different parameters. The loop starts with the .shadesofblue(10); call.
When the compiler reaches the .shadesofblue(@number - 1, @blue - 10%); line of code, it stops the current code and starts compiling the .shadesofblue(){} mixin again with @number decreased by one and @blue decreased by 10 percent. This process is repeated till @number < 1. Finally, when the @number variable becomes equal to 0, the compiler tries to call the .shadesofblue(0, 0%); mixin, which does not match the when (@number > 0) guard. When no matching mixin is found, the compiler stops, compiles the rest of the code, and writes the first class into the CSS code, as follows:

.color-1 {
  background-color: #00001a;
  height: 30px;
}

Then, the compiler starts again where it stopped before, at the .shadesofblue(2, 20%); call, and writes the next class into the CSS code, as follows:

.color-2 {
  background-color: #000033;
  height: 30px;
}

This is repeated up to the tenth class.

There's more…

When inspecting the compiled CSS code, you will find that the height property has been repeated ten times, too. This kind of code repetition can be prevented using the :extend Less pseudo-class. The following code shows an example of its usage:

.baseheight {
  height: 30px;
}
.mixin(@i: 2) when (@i > 0) {
  .mixin(@i - 1);
  .class@{i} {
    width: 10 * @i;
    &:extend(.baseheight);
  }
}
.mixin();

Alternatively, in this situation, you can create a more generic selector that sets the height property, as follows:

div[class^="color-"] {
  height: 30px;
}

Recursive loops are also useful when iterating over a list of values. Max Mikhailov, one of the members of the Less core team, wrote a wrapper mixin for recursive Less loops, which can be found at https://github.com/seven-phases-max. This wrapper contains the .for and .-each mixins that can be used to build loops.
The following code will show you how to write a nested loop:

@import "for";

#nested-loops {
  .for(3, 1); .-each(@i) {
    .for(0, 2); .-each(@j) {
      x: (10 * @i + @j);
    }
  }
}

The preceding Less code will produce the following CSS code:

#nested-loops {
  x: 30;
  x: 31;
  x: 32;
  x: 20;
  x: 21;
  x: 22;
  x: 10;
  x: 11;
  x: 12;
}

Finally, you can use a list of mixins as your data provider in some situations. The following Less code gives an example of using mixins to avoid recursion:

.data() {
  .-("dark"; black);
  .-("light"; white);
  .-("accent"; pink);
}

div {
  .data();
  .-(@class-name; @color) {
    @class: e(@class-name);
    &.@{class} {
      color: @color;
    }
  }
}

The preceding Less code will compile into the CSS code, as follows:

div.dark {
  color: black;
}
div.light {
  color: white;
}
div.accent {
  color: pink;
}

Applying guards to the CSS selectors

Since Version 1.5 of Less, guards can be applied not only to mixins, but also to the CSS selectors. This recipe will show you how to apply guards to the CSS selectors directly to create conditional rulesets for these selectors.

Getting ready

The easiest way to inspect the effect of the guarded selector in this recipe will be using the command-line lessc compiler.

How to do it…

Create a Less file named darkbutton.less that contains the following code:

@dark: true;
button when (@dark) {
  background-color: black;
  color: white;
}

Compile the darkbutton.less file with the command-line lessc compiler by entering the following command into the console:

lessc darkbutton.less

Inspect the CSS code output on the console, which will look like the following code:

button {
  background-color: black;
  color: white;
}

Now try the following command and you will find that the button selector is not compiled into the CSS code:

lessc --modify-var="dark=false" darkbutton.less

How it works…

The guarded CSS selectors are ignored by the compiler, and so not compiled into the CSS code, when the guard evaluates false.
Guards for the CSS selectors and mixins leverage the same comparison and logical operators. You can read in more detail how to create guards with these operators in the Using mixin guards (as an alternative for the if…else statements) recipe.

There's more…

Note that the true keyword is the only value that evaluates true. So the following command, which sets @dark equal to 1, will not generate the button selector, as the guard evaluates false:

lessc --modify-var="dark=1" darkbutton.less

The following Less code will give you another example of applying a guard to a selector:

@width: 700px;
div when (@width >= 600px) {
  border: 1px solid black;
}

The preceding code will output the following CSS code:

div {
  border: 1px solid black;
}

On the other hand, nothing will be output when setting @width to a value smaller than 600 pixels. You can also rewrite the preceding code with the & feature referencing the selector, as follows:

@width: 700px;
div {
  & when (@width >= 600px) {
    border: 1px solid black;
  }
}

Although the CSS code produced by the latter version does not differ from that of the former, it enables you to add more properties without the need to repeat the selector. You can also add the code in a mixin, as follows:

.conditional-border(@width: 700px) {
  & when (@width >= 600px) {
    border: 1px solid black;
  }
  width: @width;
}

Creating color contrasts with Less

Color contrasts play an important role in the first impression of your website or web application. Color contrasts are also important for web accessibility. Using high contrast between background and text will help people with visual impairments, color blindness, and even dyslexia to read your content more easily. The contrast() function returns a light (white by default) or dark (black by default) color depending on the input color. The contrast() function can help you write dynamic Less code that always outputs CSS styles with enough contrast between the background and text colors.
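Before the recipe itself, a minimal sketch of the built-in contrast() function may help; the mixin and class names here are invented for illustration:

```less
// contrast(@color, @dark, @light, @threshold) returns @dark for
// light input colors and @light for dark input colors, comparing
// luma(@color) against the threshold (43% by default).
.button-colors(@bg) {
  background-color: @bg;
  color: contrast(@bg, black, white);
}

.dark-button  { .button-colors(#336699); } // dark blue: white text
.light-button { .button-colors(#ffcc00); } // light yellow: black text
```

This always gives the highest black-or-white contrast for a given background, but, as the recipe below explains, it does not by itself guarantee any particular contrast ratio.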
Setting your text color to white or black depending on the background color enables you to meet the highest accessibility guidelines for every color. A sample can be found at http://www.msfw.com/accessibility/tools/contrastratiocalculator.aspx, which shows you that either black or white always gives enough color contrast. When you use Less to create a set of buttons, for instance, you don't want some buttons with white text while others have black text. In this recipe, you will solve this situation by adding a stroke (text shadow) to the button text when the contrast ratio between the button background and button text color is too low to meet your requirements.

Getting ready

You can inspect the results of this recipe in your browser using the client-side less.js compiler. You will have to create some HTML and Less code, and you can use your favorite editor to do this. You will have to create the following file structure:

How to do it…

Create a Less file named contraststrokes.less, and write down the following Less code into it:

@safe: green;
@danger: red;
@warning: orange;
@buttonTextColor: white;
@ContrastRatio: 7; // AAA, small texts

.setcontrast(@backgroundcolor)
    when (luma(@backgroundcolor) =< luma(@buttonTextColor)) and
         (((luma(@buttonTextColor) + 5) /
           (luma(@backgroundcolor) + 5)) < @ContrastRatio) {
  color: @buttonTextColor;
  text-shadow: 0 0 2px black;
}
.setcontrast(@backgroundcolor)
    when (luma(@backgroundcolor) =< luma(@buttonTextColor)) and
         (((luma(@buttonTextColor) + 5) /
           (luma(@backgroundcolor) + 5)) >= @ContrastRatio) {
  color: @buttonTextColor;
}
.setcontrast(@backgroundcolor)
    when (luma(@backgroundcolor) >= luma(@buttonTextColor)) and
         (((luma(@backgroundcolor) + 5) /
           (luma(@buttonTextColor) + 5)) < @ContrastRatio) {
  color: @buttonTextColor;
  text-shadow: 0 0 2px white;
}
.setcontrast(@backgroundcolor)
    when (luma(@backgroundcolor) >= luma(@buttonTextColor)) and
         (((luma(@backgroundcolor) + 5) /
           (luma(@buttonTextColor) + 5)) >= @ContrastRatio) {
  color: @buttonTextColor;
}

button {
  padding: 10px;
  border-radius: 10px;
  color: @buttonTextColor;
  width: 200px;
}

.safe {
  .setcontrast(@safe);
  background-color: @safe;
}
.danger {
  .setcontrast(@danger);
  background-color: @danger;
}
.warning {
  .setcontrast(@warning);
  background-color: @warning;
}

Create an HTML file, and save this file as index.html. Write down the following HTML code into this index.html file:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>High contrast buttons</title>
  <link rel="stylesheet/less" type="text/css" href="contraststrokes.less">
  <script src="less.min.js" type="text/javascript"></script>
</head>
<body>
  <button style="background-color:green;">safe</button>
  <button class="safe">safe</button><br>
  <button style="background-color:red;">danger</button>
  <button class="danger">danger</button><br>
  <button style="background-color:orange;">warning</button>
  <button class="warning">warning</button>
</body>
</html>

Now load the index.html file from step 2 in your browser. When all has gone well, you will see something like what's shown in the following screenshot:

On the left-hand side of the preceding screenshot, you will see the original colored buttons, and on the right-hand side, you will find the high-contrast buttons.

How it works…

The main purpose of this recipe is to show you how to write dynamic code based on the color contrast ratio. Web Content Accessibility Guidelines (WCAG) 2.0 covers a wide range of recommendations to make web content more accessible.
They have defined the following three conformance levels: Conformance Level A: In this level, all Level A success criteria are satisfied Conformance Level AA: In this level, all Level A and AA success criteria are satisfied Conformance Level AAA: In this level, all Level A, AA, and AAA success criteria are satisfied If you focus only on the color contrast aspect, you will find the following paragraphs in the WCAG 2.0 guidelines. 1.4.1 Use of Color: Color is not used as the only visual means of conveying information, indicating an action, prompting a response, or distinguishing a visual element. (Level A) 1.4.3 Contrast (Minimum): The visual presentation of text and images of text has a contrast ratio of at least 4.5:1 (Level AA) 1.4.6 Contrast (Enhanced): The visual presentation of text and images of text has a contrast ratio of at least 7:1 (Level AAA) The contrast ratio can be calculated with a formula that can be found at http://www.w3.org/TR/WCAG20/#contrast-ratiodef: (L1 + 0.05) / (L2 + 0.05) In the preceding formula, L1 is the relative luminance of the lighter of the colors, and L2 is the relative luminance of the darker of the colors. In Less, the relative luminance of a color can be found with the built-in luma() function. In the Less code of this recipe are the four guarded .setcontrast(){} mixins. The guard conditions, such as (luma(@backgroundcolor) =< luma(@buttonTextColor)) are used to find which of the @backgroundcolor and @buttonTextColor colors is the lighter one. Then the (((luma({the lighter color})+5)/(luma({the darker color})+5)) < @ContrastRatio) condition can, according to the preceding formula, be used to determine whether the contrast ratio between these colors meets the requirement (@ContrastRatio) or not. 
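The formula translates directly into Less; a small sketch follows (the color values are invented for illustration), where the 0.05 offset becomes 5 because luma() returns a percentage in the 0–100 range:

```less
@lighter: white;   // L1: the lighter of the two colors
@darker:  #336699; // L2: the darker of the two colors

// (L1 + 0.05) / (L2 + 0.05), scaled to luma()'s 0-100 range
@contrast-ratio: ((luma(@lighter) + 5) / (luma(@darker) + 5));

// Expose the computed ratio in the output for inspection
.debug-ratio {
  content: "@{contrast-ratio}";
}
```

The same expression, embedded in guard conditions, is what the .setcontrast(){} mixins above use to pick the right ruleset.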
When the value of the calculated contrast ratio is lower than the value set by the @ContrastRatio variable, the text-shadow: 0 0 2px {color}; ruleset will be mixed in, where {color} will be white or black depending on the relative luminance of the color set by the @buttonTextColor variable.

There's more…

In this recipe, you added a stroke to the web text to improve the accessibility. First, you will have to bear in mind that improving accessibility by adding a stroke to your text is not a proven method. Also, the accessibility of this solution cannot be tested automatically (by calculating the color contrast ratios). Other options to solve this issue are to increase the font size or change the background color itself. You can read how to change the background color dynamically based on color contrast ratios in the Changing the background color dynamically recipe.

When you read the exceptions of the 1.4.6 Contrast (Enhanced) paragraph of the WCAG 2.0 guidelines, you will find that large-scale text requires a color contrast ratio of 4.5 instead of 7.0 to meet the requirements of the AAA Level. Large-scale text is defined as text of at least 18 point, or 14 point bold, or a font size that would yield the equivalent size for Chinese, Japanese, and Korean (CJK) fonts. To try this, you could replace the text-shadow properties in the Less code of step 1 of this recipe with the font-size: 14pt; and font-weight: bold; declarations. After this, you can inspect the results in your browser again. Depending on, among others, the values you have chosen for the @buttonTextColor and @ContrastRatio variables, you will find something like the following screenshot:

On the left-hand side of the preceding screenshot, you will see the original colored buttons, and on the right-hand side, you will find the high-contrast buttons. Note that when you set the @ContrastRatio variable to 7.0, the code does not check whether the larger font indeed meets the 4.5 contrast ratio requirement.
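As a sketch of that substitution, one of the four guarded mixins from step 1 could be adapted as follows (only the low-contrast, light-text case is shown; the variable names follow the recipe):

```less
// Large-text variant: when the ratio is below @ContrastRatio,
// enlarge and embolden the text instead of adding a stroke, so
// the relaxed 4.5:1 AAA requirement for large text applies.
.setcontrast(@backgroundcolor)
    when (luma(@backgroundcolor) =< luma(@buttonTextColor)) and
         (((luma(@buttonTextColor) + 5) /
           (luma(@backgroundcolor) + 5)) < @ContrastRatio) {
  color: @buttonTextColor;
  font-size: 14pt;
  font-weight: bold;
}
```

The remaining three guarded mixins would be adapted in the same way, replacing each text-shadow declaration.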
Changing the background color dynamically

When you define some basic colors to generate, for instance, a set of button elements, you can use the built-in contrast() function to set the font color. The built-in contrast() function provides the highest possible contrast, but does not guarantee that the contrast ratio is also high enough to meet your accessibility requirements. In this recipe, you will learn how to change your basic color automatically to meet the required contrast ratio.

Getting ready

You can inspect the results of this recipe in your browser using the client-side less.js compiler. Use your favorite editor to create the HTML and Less code in this recipe. You will have to create the following file structure:

How to do it…

Create a Less file named backgroundcolors.less, and write down the following Less code into it:

@safe: green;
@danger: red;
@warning: orange;
@ContrastRatio: 7.0; // AAA
@precision: 1%;
@buttonTextColor: black;
@threshold: 43;

.setcontrastcolor(@startcolor) when (luma(@buttonTextColor) < @threshold) {
  .contrastcolor(@startcolor)
      when (luma(@startcolor) < 100) and
           (((luma(@startcolor) + 5) /
             (luma(@buttonTextColor) + 5)) < @ContrastRatio) {
    .contrastcolor(lighten(@startcolor, @precision));
  }
  .contrastcolor(@startcolor)
      when (@startcolor = color("white")),
           (((luma(@startcolor) + 5) /
             (luma(@buttonTextColor) + 5)) >= @ContrastRatio) {
    @contrastcolor: @startcolor;
  }
  .contrastcolor(@startcolor);
}

.setcontrastcolor(@startcolor) when (default()) {
  .contrastcolor(@startcolor)
      when (luma(@startcolor) < 100) and
           (((luma(@buttonTextColor) + 5) /
             (luma(@startcolor) + 5)) < @ContrastRatio) {
    .contrastcolor(darken(@startcolor, @precision));
  }
  .contrastcolor(@startcolor)
      when (luma(@startcolor) = 100),
           (((luma(@buttonTextColor) + 5) /
             (luma(@startcolor) + 5)) >= @ContrastRatio) {
    @contrastcolor: @startcolor;
  }
  .contrastcolor(@startcolor);
}

button {
  padding: 10px;
  border-radius: 10px;
  color: @buttonTextColor;
  width: 200px;
}

.safe {
  .setcontrastcolor(@safe);
  background-color: @contrastcolor;
}
.danger {
  .setcontrastcolor(@danger);
  background-color: @contrastcolor;
}
.warning {
  .setcontrastcolor(@warning);
  background-color: @contrastcolor;
}

Create an HTML file and save this file as index.html. Write down the following HTML code into this index.html file:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>High contrast buttons</title>
  <link rel="stylesheet/less" type="text/css" href="backgroundcolors.less">
  <script src="less.min.js" type="text/javascript"></script>
</head>
<body>
  <button style="background-color:green;">safe</button>
  <button class="safe">safe</button><br>
  <button style="background-color:red;">danger</button>
  <button class="danger">danger</button><br>
  <button style="background-color:orange;">warning</button>
  <button class="warning">warning</button>
</body>
</html>

Now load the index.html file from step 2 in your browser. When all has gone well, you will see something like the following screenshot:

On the left-hand side of the preceding figure, you will see the original colored buttons, and on the right-hand side, you will find the high-contrast buttons.

How it works…

The guarded .setcontrastcolor(){} mixins are used to determine, depending on whether the @buttonTextColor variable holds a dark color or not, in which direction the background colors should be changed. When the color set by @buttonTextColor is a dark color, with a relative luminance below the threshold value set by the @threshold variable, the background colors should be made lighter. For light text colors, the background colors should be made darker. Inside each .setcontrastcolor(){} mixin, a second set of mixins has been defined. These guarded .contrastcolor(){} mixins construct a recursive loop, as described in the Building loops leveraging mixin guards recipe.
In each step of the recursion, the guards test whether the contrast ratio set by the @ContrastRatio variable has been reached. When the contrast ratio does not meet the requirements, the color in the @startcolor variable will be darkened or lightened by the percentage set by the @precision variable, using the built-in darken() and lighten() functions. When the required contrast ratio has been reached, or the color in the @startcolor variable has become white or black, the modified color value of @startcolor will be assigned to the @contrastcolor variable. The guarded .contrastcolor(){} mixins are used as functions, as described in the Using mixins as functions recipe, to assign the @contrastcolor variable that will be used to set the background-color property of the button selectors.

There's more…

A small value of the @precision variable will increase the number of recursion steps needed to find the required colors, as there will be more and smaller steps. With the number of recursions, the compilation time will also increase. When you choose a bigger value for @precision, the contrast color found might differ from the start color more than needed to meet the contrast ratio requirement.

When you choose, for instance, a dark button text color that is not black, all or some of the base background colors will be set to white. The chance of the loop ending at white increases for high values of the @ContrastRatio variable. The recursion will stop when white (or black) has been reached, as you cannot make the white color any lighter. When the recursion stops on reaching white or black, the colors set by the mixins in this recipe don't meet the required color contrast ratios.

Aggregating values under a single property

The merge feature of Less enables you to merge property values into a list under a single property. Each list can be either space-separated or comma-separated.
The merge feature can be useful to define a property that accepts a list as a value. For instance, the background property accepts a comma-separated list of backgrounds.

Getting ready

For this recipe, you will need a text editor and a Less compiler.

How to do it…

Create a file called defaultfonts.less that contains the following Less code:

.default-fonts() {
  font-family+: Helvetica, Arial, sans-serif;
}
p {
  font-family+: "Helvetica Neue";
  .default-fonts();
}

Compile the defaultfonts.less file from step 1 using the following command in the console:

lessc defaultfonts.less

Inspect the CSS code output on the console after compilation and you will see that it looks like the following code:

p {
  font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
}

How it works…

When the compiler finds the plus sign (+) before the assignment sign (:), it merges the values into a comma-separated list under a single property instead of creating a new property in the CSS code.

There's more…

Since Version 1.7 of Less, you can also merge a property's values separated by a space instead of a comma. For space-separated values, you should use the +_ sign instead of the + sign, as can be seen in the following code:

.text-overflow(@text-overflow: ellipsis) {
  text-overflow+_: @text-overflow;
}
p, .text-overflow {
  .text-overflow();
  text-overflow+_: ellipsis;
}

The preceding Less code will compile into the CSS code, as follows:

p, .text-overflow {
  text-overflow: ellipsis ellipsis;
}

Note that the text-overflow property doesn't force an overflow to occur; you will have to explicitly set, for instance, the overflow property to hidden for the element.

Summary

This article walked you through the process of using parameterized mixins and showed you how to use guards. Guards can be used as if…else statements and make it possible to construct iterative loops in Less.

Resources for Article:

Further resources on this subject:

Web Application Testing [article]
LESS CSS Preprocessor [article]
Bootstrap 3 and other applications [article]