
How-To Tutorials - Front-End Web Development

341 Articles

The Internet of Peas? Gardening with JavaScript Part 2

Anna Gerber
23 Nov 2015
6 min read
In this two-part article series, we're building an internet-connected garden bot using JavaScript. In part one, we set up a Particle Core board, created a Johnny-Five project, and ran a Node.js program to read raw values from a soil moisture sensor.

Adding a light sensor

Let's connect another sensor. We'll extend our circuit to add a photo resistor to measure the ambient light levels around our plants. Connect one lead of the photo resistor to ground, and the other to analog pin 4, with a 1K pull-down resistor from A4 to the 3.3V pin. The value of the pull-down resistor determines the raw readings from the sensor. We're using a 1K resistor so that the sensor values don't saturate under tropical sun conditions. For plants kept inside a dark room, or in a less sunny climate, a 10K resistor might be a better choice. Read more about how pull-down resistors work with photo resistors at Adafruit.

Now, in our board's ready callback function, we add another sensor instance, this time on pin A4:

var lightSensor = new five.Sensor({
  pin: "A4",
  freq: 1000
});

lightSensor.on("data", function() {
  console.log("Light reading " + this.value);
});

For this sensor we are logging the sensor value every second, not just when it changes. We can control how often sensor events are emitted by specifying the number of milliseconds in the freq option when creating the sensor. The threshold config option can be used to control when the change callback occurs.

Calibrating the soil sensor

The soil sensor uses the electrical resistance between two probes to provide a measure of the moisture content of the soil. We're using a commercial sensor, but you could make your own simply using two pieces of wire spaced about an inch apart (using galvanized wire to avoid rust). Water is a good conductor of electricity, so a low reading means that the soil is moist, while a high amount of resistance indicates that the soil is dry.

Because these aren't very sophisticated sensors, the readings will vary from sensor to sensor. In order to do anything meaningful with the readings within our application, we'll need to calibrate our sensor. Calibrate by making a note of the sensor values for very dry soil, wet soil, and in between to get a sense of what the optimal range of values should be. For an imprecise sensor like this, it also helps to map the raw readings onto ranges that can be used to display different messages (e.g. very dry, dry, damp, wet) or trigger different actions. The scale method on the Sensor class can come in handy for this. For example, we could convert the raw readings from 0 - 1023 to a 0 - 5 scale:

soilSensor.scale(0, 5).on("change", function() {
  console.log(this.value);
});

However, the raw readings for this sensor range between about 50 (wet) and 500 (fairly dry soil). If we're only interested in when the soil is dry, i.e. when readings are above 300, we could use a conditional statement within our callback function, or use the within method so that the function is only triggered when the values are inside a range of values we care about.

soilSensor.within([ 300, 500 ], function() {
  console.log("Water me!");
});

Our raw soil sensor values will vary depending on the temperature of the soil, so this type of sensor is best for indoor plants that aren't exposed to weather extremes. If you are installing a soil moisture sensor outdoors, consider adding a temperature sensor and then calibrate for values at different temperature ranges.
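Pulling these ideas together, here is a minimal sketch (assuming the soilSensor instance from above; the label boundaries are purely illustrative and should be calibrated against your own sensor's wet and dry readings) that maps the scaled readings onto the kind of status messages mentioned earlier:

// Illustrative only: map the 0-5 scaled soil readings onto status labels.
// Low readings mean moist soil, high readings mean dry soil.
var labels = ["wet", "wet", "damp", "damp", "dry", "very dry"];

soilSensor.scale(0, 5).on("change", function() {
  // this.value is a number between 0 and 5 after scaling;
  // round it to index into the labels array
  console.log("Soil is " + labels[Math.round(this.value)]);
});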
Connecting more sensors

We have seven analog and seven digital IO pins on the Particle Core, so we could attach more sensors, perhaps more of the same type to monitor additional planters, or different types of sensors to monitor additional conditions. There are many kinds of environmental sensors available through online marketplaces like AliExpress and eBay. These include sensors for temperature, humidity, dust, gas, water depth, particulate detection etc. Some of these sensors are straightforward analog or digital devices that can be used directly with the Johnny-Five Sensor class, as we have with our soil and light sensors. The Johnny-Five API also includes subclasses like Temperature, with controllers for some widely used sensor components.

However, some sensors use protocols like SPI, I2C or OneWire, which are not as well supported by Johnny-Five across all platforms. This is always improving; for example, I2C was added to the Particle-IO plugin in October 2015. Keep an eye on I2C component backpacks, which are providing support for additional sensors via secondary microcontrollers.

Automation

If you are gardening at scale, or going away on extended vacation, you might want more than just monitoring. You might want to automate some basic garden maintenance tasks, like turning on grow lights on overcast days, or controlling a pump to water the plants when the soil moisture level gets low. This can be achieved with relays. For example, we can connect a relay with a daylight bulb to a digital pin, and use it to turn lights on in response to the light readings, but only between certain hours:

var five = require("johnny-five");
var Particle = require("particle-io");
var moment = require("moment");

var board = new five.Board({
  io: new Particle({
    token: process.env.PARTICLE_TOKEN,
    deviceId: process.env.PARTICLE_DEVICE_ID
  })
});

board.on("ready", function() {
  var lightSensor = new five.Sensor("A4");
  var lampRelay = new five.Relay(2);

  lightSensor.scale(0, 5).on("change", function() {
    console.log("light reading is " + this.value);
    var now = moment();
    // clone() before endOf/startOf, which would otherwise mutate "now"
    var nightCurfew = now.clone().endOf('day').subtract(4, 'h');
    var morningCurfew = now.clone().startOf('day').add(6, 'h');
    if (this.value > 4) {
      if (!lampRelay.isOn &&
          now.isAfter(morningCurfew) &&
          now.isBefore(nightCurfew)) {
        lampRelay.on();
      }
    } else {
      lampRelay.off();
    }
  });
});

And beyond...

One of the great things about using Node.js with hardware is that we can extend our apps with modules from npm. We could publish an Atom feed of sensor readings over time, push the data to a web UI using socket-io, build an alert system or create a data visualization layer, or we might build an API to control lights or pumps attached via relays to our board. It's never been easier to program your own internet-connected robot helpers and smart devices using JavaScript.

Build more exciting robotics projects with servos and motors – click here to find out how.
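To take one concrete step towards the web UI ideas above, here is a minimal sketch (an illustration, not part of the original project; the port number and JSON shape are arbitrary choices) that exposes the latest reading over HTTP using only Node's built-in http module:

var http = require("http");
var latestReading = null;

// Inside the board's ready callback, keep the most recent value:
// soilSensor.on("change", function() { latestReading = this.value; });

// Serve the latest reading as JSON, e.g. for a browser-based dashboard
http.createServer(function(req, res) {
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ soil: latestReading }));
}).listen(3000);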
About the author

Anna Gerber is a full-stack developer with 15 years’ experience in the university sector, formerly a Technical Project Manager at The University of Queensland ITEE eResearch, specializing in Digital Humanities, and Research Scientist at the Distributed System Technology Centre (DSTC). Anna is a JavaScript robotics enthusiast and maker who enjoys tinkering with soft circuits and 3D printers.


Introduction to a WordPress application's frontend

Packt
12 Nov 2013
7 min read
(For more resources related to this topic, see here.)

Basic file structure of a WordPress theme

As WordPress developers, you should have a fairly good idea about the default file structure of WordPress themes. Let's have a brief introduction to the default files before identifying their usage in web applications. Think about a typical web application layout where we have a common header, footer, and content area. In WordPress, the content area is mainly populated by pages or posts. The design and the content for pages are provided through the page.php template, while the content for posts is provided through one of the following templates:

index.php
archive.php
category.php
single.php

Basically, most of these post-related file types are developed to cater to the typical functionality in blogging systems, and hence can be omitted in the context of web applications. Since custom posts are widely used in application development, we need more focus on templates such as single-{post_type} and archive-{post_type} than category.php, archive.php, and tag.php.

Even though default themes contain a number of files for providing default features, only the style.css and index.php files are enough to implement a WordPress theme. Complex web application themes are possible with the standalone index.php file. In normal circumstances, WordPress sites have a blog built on posts, and all the remaining content of the site is provided through pages. When referring to pages, the first thing that comes to our mind is static content. But WordPress is a fully functional CMS, and hence the page content can be highly dynamic. Therefore, we can provide complex application screens by using various techniques on pages. Let's continue our exploration by understanding the theme file execution hierarchy.

Understanding template execution hierarchy

WordPress has quite an extensive template execution hierarchy compared to general web application frameworks. However, most of these templates will be of minor importance in the context of web applications. Here, we are going to illustrate the important template files in the context of web applications. The complete template execution hierarchy can be found at: http://hub.packtpub.com/wp-content/uploads/2013/11/Template_Hierarchy.png

An example of the template execution hierarchy is as shown in the following diagram: Once the Initial Request is made, WordPress looks for one of the main starting templates as illustrated in the preceding screenshot. It's obvious that most of the starting templates such as front page, comments popup, and index pages are specifically designed for content management systems. In the context of web applications, we need to put more focus into both singular and archive pages, as most of the functionality depends on those templates. Let's identify the functionality of the main template files in the context of web applications:

Archive pages: These are used to provide summarized listings of data as a grid.
Single posts: These are used to provide detailed information about existing data in the system.
Singular pages: These are used for any type of dynamic content associated with the application. Generally, we can use pages for form submissions, dynamic data display, and custom layouts.

Let's dig deeper into the template execution hierarchy on the Singular Page path as illustrated in the following diagram: Singular Page is divided into two paths that contain posts or pages. Static Page is defined as Custom or Default page templates.
In general, we use Default page templates for loading website pages. WordPress looks for a page with the slug or ID before executing the default page.php file. In most scenarios, web application layouts will take the other route of Custom page templates, where we create a unique template file inside the theme for each of the layouts and define it as a page template using code comments. We can create a new custom page template by creating a new PHP file inside the theme folder and using the Template Name definition in code comments, illustrated as follows:

<?php
/*
Template Name: My Custom Template
*/
?>

To the right of the preceding diagram, we have Single Post Page, which is divided into three paths called Blog Post, Custom Post, and Attachment Post. Both Attachment Posts and Blog Posts are designed for blogs and hence will not be used frequently in web applications. However, the Custom Post template will have a major impact on application layouts. As with Static Page, Custom Post looks for specific post type templates before looking for a default single.php file. The execution hierarchy of an Archive Page is similar in nature to posts, as it looks for post-specific archive pages before reverting to the default archive.php file.

Now we have had a brief introduction to the template loading process used by WordPress. In the next section, we are going to look at the template loading process of a typical web development framework to identify the differences.

Template execution process of web application frameworks

Most stable web application frameworks use a flat and straightforward template execution process compared to the extensive process used by WordPress. These frameworks don't come with built-in templates, and hence each and every template will be generated from scratch. Consider the following diagram of a typical template execution process: In this process, Initial Request always comes to the index.php file, which is similar to the process used by WordPress or any other framework. It then looks for custom routes defined within the framework. It's possible to use custom routes within a WordPress context, even though it's not generally used for websites or blogs. Finally, Initial Request looks for the direct template file located in the templates section of the framework. As you can see, the process of a normal framework has very limited depth and specialized templates.

Keep in mind that the index.php referred to in the preceding section is the file used as the main starting point of the application, not the template file. In WordPress, we have a specific template file named index.php located inside the themes folder as well.

Managing templates in a typical application framework is a relatively easy task when compared to the extensive template hierarchy used by WordPress. In web applications, it's ideal to keep the template hierarchy as flat as possible, with specific templates targeted towards each and every screen. In general, WordPress developers tend to add custom functionalities and features by using specific templates within the hierarchy. Having multiple templates for a single screen and identifying the order of execution can be a difficult task in large-scale applications, and hence should be avoided in every possible instance.

Web application layout creation techniques

As we move into developing web applications, the logic and screens will become complex, resulting in the need for custom templates beyond the conventional ones.
There is a wide range of techniques for putting such functionality into the WordPress code. Each of these techniques has its own pros and cons. Choosing the appropriate technique is vital to avoiding potential bottlenecks in large-scale applications. Here is a list of techniques for creating dynamic content within WordPress applications:

Static pages with shortcodes
Page templates
Custom templates with custom routing

Summary

In this article we learned about the basic file structure of a WordPress theme, the template execution hierarchy, and the template execution process. We also learned the different techniques of web application layout creation.

Resources for Article:

Further resources on this subject:
Customizing WordPress Settings for SEO [Article]
Getting Started with WordPress 3 [Article]
Dynamic Menus in WordPress [Article]


Introduction to MapReduce

Packt
25 Jun 2014
10 min read
(For more resources related to this topic, see here.)

The Hadoop platform

Hadoop can be used for a lot of things. However, when you break it down to its core parts, the primary features of Hadoop are the Hadoop Distributed File System (HDFS) and MapReduce. HDFS stores read-only files by splitting them into large blocks and distributing and replicating them across a Hadoop cluster. Two services are involved with the filesystem. The first service, the NameNode, acts as a master and keeps the directory tree of all file blocks that exist in the filesystem and tracks where the file data is kept across the cluster. The actual data of the files is stored in multiple DataNode nodes, the second service.

MapReduce is a programming model for processing large datasets with a parallel, distributed algorithm in a cluster. The most prominent trait of Hadoop is that it brings processing to the data; so, MapReduce executes tasks closest to the data, as opposed to the data travelling to where the processing is performed. Two services are involved in a job execution. A job is submitted to the service JobTracker, which first discovers the location of the data. It then orchestrates the execution of the map and reduce tasks. The actual tasks are executed in multiple TaskTracker nodes. Hadoop handles infrastructure failures such as network issues and node or disk failures automatically. Overall, it provides a framework for distributed storage within its distributed file system and execution of jobs. Moreover, it provides the service ZooKeeper to maintain configuration and distributed synchronization.

Many projects surround Hadoop and complete the ecosystem of available Big Data processing tools, such as utilities to import and export data, NoSQL databases, and event/real-time processing systems. The technologies that move Hadoop beyond batch processing focus on in-memory execution models. Overall, multiple projects, from batch to hybrid and real-time execution, exist.

MapReduce

Massive parallel processing of large datasets is a complex process. MapReduce simplifies this by providing a design pattern that instructs algorithms to be expressed in map and reduce phases. Map can be used to perform simple transformations on data, and reduce is used to group data together and perform aggregations. By chaining together a number of map and reduce phases, sophisticated algorithms can be achieved. The shared-nothing architecture of MapReduce prohibits communication between map tasks of the same phase or reduce tasks of the same phase. Communication that's required happens at the end of each phase.

The simplicity of this model allows Hadoop to translate each phase, depending on the amount of data that needs to be processed, into tens or even hundreds of tasks being executed in parallel, thus achieving scalable performance. Internally, the map and reduce tasks follow a simplistic data representation. Everything is a key or a value. A map task receives key-value pairs and applies basic transformations, emitting new key-value pairs. Data is then partitioned and different partitions are transmitted to different reduce tasks. A reduce task also receives key-value pairs, groups them based on the key, and applies basic transformations to those groups.

A MapReduce example

To illustrate how MapReduce works, let's look at an example of a log file of total size 1 GB with the following format:

INFO MyApp - Entering application.
WARNING com.foo.Bar - Timeout accessing DB - Retrying
ERROR com.foo.Bar - Did it again!
INFO MyApp - Exiting application

Once this file is stored in HDFS, it is split into eight 128 MB blocks and distributed in multiple Hadoop nodes. In order to build a MapReduce job to count the number of INFO, WARNING, and ERROR log lines in the file, we need to think in terms of map and reduce phases. In one map phase, we can read local blocks of the file and map each line to a key and a value. We can use the log level as the key and the number 1 as the value. After it is completed, data is partitioned based on the key and transmitted to the reduce tasks. MapReduce guarantees that the input to every reducer is sorted by key. Shuffle is the process of sorting and copying the output of the map tasks to the reducers to be used as input. By setting the value to 1 in the map phase, we can easily calculate the total in the reduce phase. Reducers receive input sorted by key, aggregate counters, and store results. In the following diagram, every green block represents an INFO message, every yellow block a WARNING message, and every red block an ERROR message.

Implementing the preceding MapReduce algorithm in Java requires the following three classes:

A Map class to map lines into <key,value> pairs; for example, <"INFO",1>
A Reduce class to aggregate counters
A Job configuration class to define input and output types for all <key,value> pairs and the input and output files
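Before turning to higher-level abstractions, it may help to see the key-value flow in miniature. The following toy JavaScript sketch runs the same log-level count in a single process; it is purely illustrative, since a real MapReduce job distributes these phases as parallel tasks across the cluster:

// Toy in-memory illustration of the map -> shuffle -> reduce flow
var lines = [
  "INFO MyApp - Entering application.",
  "WARNING com.foo.Bar - Timeout accessing DB - Retrying",
  "ERROR com.foo.Bar - Did it again!",
  "INFO MyApp - Exiting application"
];

// Map phase: emit a <level, 1> pair for each line
var pairs = lines.map(function(line) {
  return { key: line.split(" ")[0], value: 1 };
});

// Shuffle: group values by key (Hadoop sorts and partitions here)
var groups = {};
pairs.forEach(function(pair) {
  (groups[pair.key] = groups[pair.key] || []).push(pair.value);
});

// Reduce phase: aggregate each group's values into a total
Object.keys(groups).forEach(function(level) {
  var total = groups[level].reduce(function(a, b) { return a + b; }, 0);
  console.log(level + ": " + total); // e.g. INFO: 2
});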
MapReduce abstractions

This simple MapReduce example requires more than 50 lines of Java code (mostly because of infrastructure and boilerplate code). In SQL, a similar implementation would just require the following:

SELECT level, count(*) FROM table GROUP BY level

Hive is a technology originating from Facebook that translates SQL commands, such as the preceding one, into sets of map and reduce phases. SQL offers convenient ubiquity, and it is known by almost everyone. However, SQL is declarative and expresses the logic of a computation without describing its control flow. So, there are use cases that will be unusual to implement in SQL, and some problems are too complex to be expressed in relational algebra. For example, SQL handles joins naturally, but it has no built-in mechanism for splitting data into streams and applying different operations to each substream.

Pig is a technology originating from Yahoo that offers a relational data-flow language. It is procedural, supports splits, and provides useful operators for joining and grouping data. Code can be inserted anywhere in the data flow and is appealing because it is easy to read and learn. However, Pig is a purpose-built language; it excels at simple data flows, but it is inefficient for implementing non-trivial algorithms. In Pig, the same example can be implemented as follows:

LogLine = load 'file.logs' as (level, message);
LevelGroup = group LogLine by level;
Result = foreach LevelGroup generate group, COUNT(LogLine);
store Result into 'Results.txt';

Both Pig and Hive support extra functionality through loadable user-defined functions (UDF) implemented in Java classes.

Cascading is implemented in Java and designed to be expressive and extensible. It is based on the design pattern of pipelines that many other technologies follow. The pipeline is inspired by the original chain of responsibility design pattern and allows ordered lists of actions to be executed. It provides a Java-based API for data-processing flows. Developers with functional programming backgrounds quickly introduced new domain-specific languages that leverage its capabilities. Scalding, Cascalog, and PyCascading are popular implementations on top of Cascading, which are implemented in programming languages such as Scala, Clojure, and Python.

Introducing Cascading

Cascading is an abstraction that empowers us to write efficient MapReduce applications. The API provides a framework for developers who want to think in higher levels and follow Behavior Driven Development (BDD) and Test Driven Development (TDD) to provide more value and quality to the business. Cascading is a mature library that was released as an open source project in early 2008. It is a paradigm shift and introduces new notions that are easier to understand and work with.

In Cascading, we define reusable pipes where operations on data are performed. Pipes connect with other pipes to create a pipeline. At each end of a pipeline, a tap is used. Two types of taps exist: source, where input data comes from, and sink, where the data gets stored. In the preceding image, three pipes are connected to a pipeline, and two input sources and one output sink complete the flow. A complete pipeline is called a flow, and multiple flows bind together to form a cascade. In the following diagram, three flows form a cascade: The Cascading framework translates the pipes, flows, and cascades into sets of map and reduce phases. The flow and cascade planner ensure that no flow or cascade is executed until all its dependencies are satisfied.

The preceding abstraction makes it easy to use a whiteboard to design and discuss data processing logic. We can now work on a productive higher-level abstraction and build complex applications for ad targeting, logfile analysis, bioinformatics, machine learning, predictive analytics, web content mining, and for extract, transform and load (ETL) jobs. By abstracting from the complexity of key-value pairs and map and reduce phases of MapReduce, Cascading provides an API that so many other technologies are built on.

What happens inside a pipe

Inside a pipe, data flows in small containers called tuples. A tuple is like a fixed-size ordered list of elements and is a base element in Cascading. Unlike an array or list, a tuple can hold objects with different types. Tuples stream within pipes. Each specific stream is associated with a schema. The schema evolves over time, as at one point in a pipe, a tuple of size one can receive an operation and transform into a tuple of size three. To illustrate this concept, we will use a JSON transformation job. Each line is originally stored in tuples of size one with a schema: 'jsonLine. An operation transforms these tuples into new tuples of size three: 'time, 'user, and 'action. Finally, we extract the epoch, and then the pipe contains tuples of size four: 'epoch, 'time, 'user, and 'action.

Pipe assemblies

Transformation of tuple streams occurs by applying one of the five types of operations, also called pipe assemblies:

Each: To apply a function or a filter to each tuple
GroupBy: To create a group of tuples by defining which element to use and to merge pipes that contain tuples with similar schemas
Every: To perform aggregations (count, sum) and buffer operations to every group of tuples
CoGroup: To apply SQL type joins, for example, Inner, Outer, Left, or Right joins
SubAssembly: To chain multiple pipe assemblies into a pipe

To implement the pipe for the logfile example with the INFO, WARNING, and ERROR levels, three assemblies are required: the Each assembly generates a tuple with two elements (level/message), the GroupBy assembly is used on the level, and then the Every assembly is applied to perform the count aggregation. We also need a source tap to read from a file and a sink tap to store the results in another file. Implementing this in Cascading requires 20 lines of code; in Scala/Scalding, the boilerplate is reduced to just the following:

TextLine(inputFile)
  .mapTo('line -> ('level, 'message)) { line: String => tokenize(line) }
  .groupBy('level) { _.size }
  .write(Tsv(outputFile))

Cascading is the framework that provides the notions and abstractions of tuple streams and pipe assemblies. Scalding is a domain-specific language (DSL) that specializes in the particular domain of pipeline execution and further minimizes the amount of code that needs to be typed.

Cascading extensions

Cascading offers multiple extensions that can be used as taps to either read from or write data to, such as SQL, NoSQL, and several other distributed technologies that fit nicely with the MapReduce paradigm. A data processing application, for example, can use taps to collect data from a SQL database and some more from the Hadoop file system. Then, process the data, use a NoSQL database, and complete a machine learning stage. Finally, it can store some resulting data into another SQL database and update a mem-cache application.

Summary

This article explained the core technologies used in the distributed model of Hadoop.

Resources for Article:

Further resources on this subject:
Analytics – Drawing a Frequency Distribution with MapReduce (Intermediate) [article]
Understanding MapReduce [article]
Advanced Hadoop MapReduce Administration [article]


User Authentication with Codeigniter 1.7 using Twitter oAuth

Packt
21 May 2010
6 min read
(Read more interesting articles on CodeIgniter 1.7 Professional Development here.)

How oAuth works

Getting used to how Twitter oAuth works takes a little time. When a user comes to your login page, you send a GET request to Twitter for a set of request codes. These request codes are used to verify the user on the Twitter website. The user then goes through to Twitter to either allow or deny your application access to their account. If they allow the application access, they will be taken back to your application. The URL they get sent to will have an oAuth token appended to the end. This is used in the next step. Back at your application, you then send another GET request for some access codes from Twitter. These access codes are used to verify that the user has come directly from Twitter, and has not tried to spoof an oAuth token in their web browser.

Registering a Twitter application

Before we write any code, we need to register an application with Twitter. This will give us the two access codes that we need. The first is a consumer key, and the second is a secret key. Both are used to identify our application, so if someone posts a message to Twitter through our application, our application name will show up alongside the user's tweet. To register a new application with Twitter, you need to go to http://www.twitter.com/apps/new. You'll be asked for a photo for your application and other information, such as website URL, callback URL, and a description, among other things. You must select the checkbox that reads Yes, use Twitter for login or you will not be able to authenticate any accounts with your application keys. Once you've filled out the form, you'll be able to see your consumer key and consumer secret code. You'll need these later. Don't worry though; you'll be able to get to these at any time so there's no need to save them to your hard drive. Here's a screenshot of my application:

Downloading the oAuth library

Before we get to write any of our CodeIgniter wrapper library, we need to download the oAuth PHP library. This allows us to use the oAuth protocol without writing the code from scratch ourselves. You can find the PHP Library on the oAuth website at www.oauth.net/code. Scroll down to PHP and click on the link to download the basic PHP Library; or just visit http://oauth.googlecode.com/svn/code/php/; the file you need is named OAuth.php. Download this file and save it in the folder system/application/libraries/twitter/ (you'll need to create the twitter folder). We're simply going to create a folder for each different protocol so that we can easily distinguish between them.

Once you've done that, we'll create our Library file. Create a new file in the system/application/libraries/ folder, called Twitter_oauth.php. This is the file that will contain functions to obtain both request and access tokens from Twitter, and verify the user credentials. The next section of the article will go through the process of creating this library alongside the Controller implementation; this is because the whole process requires work on both the front-end and the back-end. Bear with me, as it could get a little confusing, especially when trying to implement a brand new type of system such as Twitter oAuth.

Library base class

Let's break things down into small sections. The following code is a version of the base class with all its guts pulled out. It simply loads the oAuth library and sets up a set of variables for us to store certain information in.
Below this, I'll go over what each of the variables is there for.

<?php
require_once(APPPATH . 'libraries/twitter/OAuth.php');

class Twitter_oauth
{
    var $consumer;
    var $token;
    var $method;
    var $http_status;
    var $last_api_call;
}
?>

The first variable you'll see is $consumer; it is used to store the credentials for our application keys and the user tokens as and when we get them. The second variable you see on the list is $token; this is used to store the user credentials. A new instance of the oAuth class OAuthConsumer is created and stored in this variable. Thirdly, you'll see the variable $method; this is used to store the oAuth Signature Method (the way we sign our oAuth calls). Finally, the last two variables, $http_status and $last_api_call, are used to store the last HTTP Status Code and the URL of the last API call, respectively. These two variables are used solely for debugging purposes.

Controller base class

The Controller is the main area where we'll be working, so it is crucial that we design the best way to use it so that we don't have to repeat our code. Therefore, we're going to have our consumer key and consumer secret key in the Controller. Take a look at the base of our class to get a better idea of what I mean.

<?php
session_start();

class Twitter extends Controller
{
    var $data;

    function Twitter()
    {
        parent::Controller();
        $this->data['consumer_key'] = "";
        $this->data['consumer_secret'] = "";
    }
}

The global variable $data will be used to store our consumer key and consumer secret. These must not be left empty and will be provided to you by Twitter when creating your application. We use these when instantiating the Library class, which is why we need it available throughout the Controller instead of just in one function. We also allow for sessions to be used in the Controller, as we want to temporarily store some of the data that we get from Twitter in a session. We could use the CodeIgniter Session Library, but it doesn't offer us as much flexibility as native PHP sessions; this is because with native sessions we don't need to rely on cookies and a database, so we'll stick with the native sessions for this Controller.


Taking Control of Reactivity, Inputs, and Outputs

Packt
23 Oct 2013
7 min read
(For more resources related to this topic, see here.)

Showing and hiding elements of the UI

We'll start easy with a simple function that you are certainly going to need if you build even a moderately complex application. Those of you who have been doing extra credit exercises and/or experimenting with your own applications will probably have already wished for this or, indeed, have already found it. conditionalPanel() allows you to show/hide UI elements based on other selections within the UI. The function takes a condition (in JavaScript, but the form and syntax will be familiar from many languages) and a UI element, and displays the UI only when the condition is true. This is actually used a couple of times in the advanced GA application and indeed in all the applications I've ever written of even moderate complexity. The following is a simpler example (from ui.R, of course, in the first section, within sidebarPanel()), which allows users who request a smoothing line to decide what type they want:

conditionalPanel(condition = "input.smoother == true",
  selectInput("linearModel", "Linear or smoothed",
    list("lm", "loess")))

As you can see, the condition appears very R/Shiny-like, except with the "." operator familiar to JavaScript users in place of "$", and with "true" in lower case. This is a very simple but powerful way of making sure that your UI is not cluttered with irrelevant material.

Giving names to tabPanel elements

In order to further streamline the UI, we're going to hide the hour selector when the monthly graph is displayed and the date selector when the hourly graph is displayed. The difference is illustrated in the following screenshot with side-by-side pictures, hourly figures UI on the left-hand side and monthly figures on the right-hand side: In order to do this, we're going to have to first give the tabs of the tabbed output names. This is done as follows (with the new code in bold):

tabsetPanel(id = "theTabs",
  tabPanel("Summary", textOutput("textDisplay"),
    value = "summary"),
  tabPanel("Monthly figures",
    plotOutput("monthGraph"), value = "monthly"),
  tabPanel("Hourly figures",
    plotOutput("hourGraph"), value = "hourly"))

As you can see, the whole panel is given an ID (theTabs), and then each tabPanel is also given a name (summary, monthly, and hourly). They are referred to in the server.R file very simply as input$theTabs. Let's have a quick look at a chunk of code in server.R that references the tab names; this code makes sure that we subset based on date only when the date selector is actually visible, and by hour only when the hour selector is actually visible. Our function to calculate and pass data now looks like the following (new code again bolded):

passData <- reactive({
  if(input$theTabs != "hourly"){
    analytics <- analytics[analytics$Date %in%
      seq.Date(input$dateRange[1], input$dateRange[2],
        by = "days"),]
  }
  if(input$theTabs != "monthly"){
    analytics <- analytics[analytics$Hour %in%
      as.numeric(input$minimumTime) :
      as.numeric(input$maximumTime),]
  }
  analytics <- analytics[analytics$Domain %in%
    unlist(input$domainShow),]
  analytics
})

As you can see, subsetting by month is carried out only when the date display is visible (that is, when the hourly tab is not shown), and vice versa.
Finally, we can make our changes to ui.R to remove parts of the UI based on tab selection:

conditionalPanel(condition = "input.theTabs != 'hourly'",
  dateRangeInput(inputId = "dateRange",
    label = "Date range",
    start = "2013-04-01",
    max = Sys.Date())),
conditionalPanel(condition = "input.theTabs != 'monthly'",
  sliderInput(inputId = "minimumTime",
    label = "Hours of interest - minimum",
    min = 0, max = 23, value = 0, step = 1),
  sliderInput(inputId = "maximumTime",
    label = "Hours of interest - maximum",
    min = 0, max = 23, value = 23, step = 1))

Note the use in the latter example of two UI elements within the same conditionalPanel() call; it is worth noting that it helps you keep your code clean and easy to debug.

Reactive user interfaces

Another trick you will definitely want up your sleeve at some point is a reactive user interface. This enables you to change your UI (for example, the number or content of radio buttons) based on reactive functions. For example, consider an application that I wrote related to survey responses across a broad range of health services in different areas. The services are related to each other in quite a complex hierarchy, and over time, different areas and services respond (or cease to exist, or merge, or change their name...), which means that for each time period the user might be interested in, there would be a totally different set of areas and services. The only sensible solution to this problem is to have the user tell you which area and date range they are interested in and then give them back the correct list of services that have survey responses within that area and date range.

The example we're going to look at is a little simpler than this, just to keep from getting bogged down in too much detail, but the principle is exactly the same and you should not find this idea too difficult to adapt to your own UI. We are going to imagine that your users are interested in the individual domains from which people are accessing the site, rather than just have them lumped together as the NHS domain and all others. To this end, we will have a combo box with each individual domain listed. This combo box is likely to contain a very high number of domains across the whole time range, so we will let users constrain the data by date and only have the domains that feature in that range return. Not the most realistic example, but it will illustrate the principle for our purposes.

Reactive user interface example – server.R

The big difference is that instead of writing your UI definition in your ui.R file, you place it in server.R, and wrap it in renderUI(). Then all you do is point to it from your ui.R file. Let's have a look at the relevant bit of the server.R file:

output$reacDomains <- renderUI({
  domainList = unique(as.character(passData()$networkDomain))
  selectInput("subDomains", "Choose subdomain", domainList)
})

The first line takes the reactive dataset that contains only the data between the dates selected by the user and gives all the unique values of domains within it. The second line is a widget type we have not used yet which generates a combo box. The usual id and label arguments are given, followed by the values that the combo box can take. This is taken from the variable defined in the first line.
Reactive user interface example – ui.R

The ui.R file merely needs to point to the reactive definition as shown in the following line of code (just add it in to the list of widgets within sidebarPanel()):

uiOutput("reacDomains")

You can now point to the value of the widget in the usual way, as input$subDomains. Note that you do not use the name as defined in the call to renderUI(), that is, reacDomains, but rather the name as defined within it, that is, subDomains.

Summary

It's a relatively small but powerful toolbox with which you can build a vast array of useful and intuitive applications with comparatively little effort. This article looked at fine-tuning the UI using conditionalPanel() and observe(), and changing our UI reactively.

Resources for Article:

Further resources on this subject:
Fine Tune the View layer of your Fusion Web Application [Article]
Building tiny Web-applications in Ruby using Sinatra [Article]
Spring Roo 1.1: Working with Roo-generated Web Applications [Article]


Building a Custom Version of jQuery

Packt
04 Apr 2013
9 min read
(For more resources related to this topic, see here.)

Why Is It Awesome?

While it's fairly common for someone to say that they use jQuery in every site they build (this is usually the case for me), I would expect it to be much rarer for someone to say that they use the exact same jQuery methods in every project, or that they use a very large selection of the available methods and functionality that it offers. The need to reduce file size as aggressively as possible to cater for the mobile space, and the rise of micro-frameworks such as Zepto for example, which delivers a lot of jQuery functionality at a much-reduced size, have pushed jQuery to provide a way of slimming down. As of jQuery 1.8, we can now use the official jQuery build tool to build our own custom version of the library, allowing us to minimize the size of the library by choosing only the functionality we require. For more information on Zepto, see http://zeptojs.com/.

Your Hotshot Objectives

To successfully conclude this project we'll need to complete the following tasks:

Installing Git and Make
Installing Node.js
Installing Grunt.js
Configuring the environment
Building a custom jQuery
Running unit tests with QUnit

Mission Checklist

We'll be using Node.js to run the build tool, so you should download a copy of this now. The Node website (http://nodejs.org/download/) has an installer for both 64 and 32-bit versions of Windows, as well as a Mac OS X installer. It also features binaries for Mac OS X, Linux, and SunOS. Download and install the appropriate version for your operating system.

The official build tool for jQuery (although it can do much more besides build jQuery) is Grunt.js, written by Ben Alman. We don't need to download this as it's installed via the Node Package Manager (NPM). We'll look at this process in detail later in the project. For more information on Grunt.js, visit the official site at http://gruntjs.com.

First of all we need to set up a local working area. We can create a folder in our root project folder called jquery-source. This is where we'll store the jQuery source when we clone the jQuery Github repository, and also where Grunt will build the final version of jQuery.

Installing Git and Make

The first thing we need to install is Git, which we'll need in order to clone the jQuery source from the Github repository to our own computer so that we can work with the source files. We also need something called Make, but we only need to actually install this on Mac platforms because it gets installed automatically on Windows when Git is installed. As the file we'll create will be for our own use only and we don't want to contribute to jQuery by pushing code back to the repository, we don't need to worry about having an account set up on Github.

Prepare for Lift Off

First we'll need to download the relevant installers for both Git and Make. Different applications are required depending on whether you are developing on Mac or Windows platforms.

Mac developers

Mac users can visit http://git-scm.com/download/mac for Git. Next we can install Make. Mac developers can get this by installing XCode. This can be downloaded from https://developer.apple.com/xcode/.

Windows developers

Windows users can install msysgit, which can be obtained by visiting https://code.google.com/p/msysgit/downloads/detail?name=msysGit-fullinstall-1.8.0-preview20121022.exe.

Engage Thrusters

Once the installers have downloaded, run them to install the applications.
The defaults selected by the installers should be fine for the purposes of this mission. First we should install Git (or msysgit on Windows).

Mac developers

Mac developers simply need to run the installer for Git to install it to the system. Once this is complete we can then install XCode. All we need to do is run the installer and Make, along with some other tools, will be installed and ready.

Windows developers

Once the full installer for msysgit has finished, you should be left with a Command Line Interface (CLI) window (entitled MINGW32) indicating that everything is ready for you to hack. However, before we can hack, we need to compile Git. To do this we need to run a file called initialize.sh. In the MINGW32 window, cd into the msysgit directory. If you allowed this to install to the default location, you can use the following command:

cd C:\msysgit\msysgit\share\msysGit

Once we are in the correct directory, we can then run initialize.sh in the CLI. Like the installation, this process can take some time, so be patient and wait for the CLI to return a flashing cursor at the $ character. An Internet connection is required to compile Git in this way.

Windows developers will need to ensure that the Git.exe and MINGW resources can be reached via the system's PATH variable. This can be updated by going to Control Panel | System | Advanced system settings | Environment variables. In the bottom section of the dialog box, double-click on Path and add the following two paths to the git.exe file in the bin folder, which is itself in a directory inside the msysgit folder wherever you chose to install it:

;C:\msysgit\msysgit\bin;
C:\msysgit\msysgit\mingw\bin;

Update the path with caution! You must ensure that the path to Git.exe is separated from the rest of the Path variables with a semicolon. If the path does not end with a semicolon before adding the path to Git.exe, make sure you add one. Incorrectly updating your path variables can result in system instability and/or loss of data. I have shown a semicolon at the start of the previous code sample to illustrate this.

Once the path has been updated, we should then be able to use a regular command prompt to run Git commands.

Post-installation tasks

In a terminal or Windows Command Prompt (I'll refer to both simply as the CLI from this point on for conciseness) window, we should first cd into the jquery-source folder we created at the start of the project. Depending on where your local development folder is, this command will look something like the following:

cd c:\jquery-hotshots\jquery-source

To clone the jQuery repository, enter the following command in the CLI:

git clone git://github.com/jquery/jquery.git

Again, we should see some activity on the CLI before it returns to a flashing cursor to indicate that the process is complete. Depending on the platform you are developing on, you should see something like the following screenshot:

Objective Complete — Mini Debriefing

We installed Git and then used it to clone the jQuery Github repository in to this directory in order to get a fresh version of the jQuery source. If you're used to SVN, cloning a repository is conceptually the same as checking out a repository. Again, the syntax of these commands is very similar on Mac and Windows systems, but notice how we need to escape the backslashes in the path when using Windows. Once this is complete, we should end up with a new directory inside our jquery-source directory called jquery.
If we go into this directory, there are some more directories including:

build: This directory is used by the build tool to build jQuery
speed: This directory contains benchmarking tests
src: This directory contains all of the individual source files that are compiled together to make jQuery
test: This directory contains all of the unit tests for jQuery

It also has a range of various files, including:

Licensing and documentation, including jQuery's authors and a guide to contributing to the project
Git-specific files such as .gitignore and .gitmodules
Grunt-specific files such as Gruntfile.js
JSHint for testing and code-quality purposes

Make is not something we need to use directly, but Grunt will use it when we build the jQuery source, so it needs to be present on our system.

Installing Node.js

Node.js is a platform for running server-side applications built with JavaScript. It is trivial to create a web-server instance, for example, that receives and responds to HTTP requests using callback functions. Server-side JS isn't exactly the same as its more familiar client-side counterpart, but you'll find a lot of similarities in the same comfortable syntax that you know and love. We won't actually be writing any server-side JavaScript in this project – all we need Node for is to run the Grunt.js build tool.

Prepare for Lift Off

To get the appropriate installer for your platform, visit the Node.js website at http://nodejs.org and hit the download button. The correct installer for your platform, if supported, should be auto-detected.

Engage Thrusters

Installing Node is a straightforward procedure on either the Windows or Mac platforms, as there are installers for both. This task will include running the installer, which is obviously simple, and testing the installation using a CLI. On Windows or Mac platforms, run the installer and it will guide you through the installation process. I have found that the default options are fine in most cases. As before, we also need to update the Path variable to include Node and Node's package manager NPM. The paths to these directories will differ between platforms.

Mac

Mac developers should check that the $PATH variable contains a reference to usr/local/bin. I found that this was already in my $PATH, but if you do find that it's not present, you should add it. For more information on updating your $PATH variable, see http://www.tech-recipes.com/rx/2621/os_x_change_path_environment_variable/.

Windows

Windows developers will need to update the Path variable, in the same way as before, with the following paths:

C:\Program Files\nodejs;
C:\Users\Desktop\AppData\Roaming\npm;

Windows developers may find that the Path variable already contains an entry for Node so may just need to add the path to NPM.

Objective Complete — Mini Debriefing

Once Node is installed, we will need to use a CLI to interact with it. To verify Node has installed correctly, type the following command into the CLI:

node -v

The CLI should report the version in use, as follows: We can test NPM in the same way by running the following command:

npm -v
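Since Grunt will drive the build in the remainder of the project, it may help to see the general shape of a Gruntfile first. The following is a hypothetical, minimal Gruntfile.js sketch for illustration only; the task name and message are invented, and jQuery ships its own, much larger Gruntfile:

module.exports = function(grunt) {
  grunt.initConfig({
    // Per-task configuration objects live here, keyed by task name
  });

  // A trivial custom task; run it from the CLI with: grunt hello
  grunt.registerTask("hello", "Prints a greeting", function() {
    grunt.log.writeln("Grunt is installed and working.");
  });

  // Run the "hello" task by default when grunt is called with no arguments
  grunt.registerTask("default", ["hello"]);
};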

The Internet of Peas? Gardening with JavaScript Part 1

Anna Gerber
23 Nov 2015
6 min read
Who wouldn't want an army of robots to help out around the home and garden? It's not science fiction: Robots are devices that sense and respond to the world around us, so with some off-the-shelf hardware, and the power of the Johnny-Five JavaScript Robotics framework, we can build and program simple "robots" to automate everyday tasks. In this two-part article series, we'll build an internet-connected device for monitoring plants.

Bill of materials

You'll need these parts to build this project:

Particle Core (or Photon) - Particle
3xAA Battery holder - e.g. with micro USB connector from DF Robot
Jumper wires - Any electronics supplier e.g. Sparkfun
Solderless breadboard - Any electronics supplier e.g. Sparkfun
Photo resistor - Any electronics supplier e.g. Sparkfun
1K resistor - Any electronics supplier e.g. Sparkfun
Soil moisture sensor - e.g. Sparkfun
Plants

Particle (formerly known as Spark) is a platform for developing devices for the Internet of Things. The Particle Core was their first generation Wifi development board, and has since been superseded by the Photon. Johnny-Five supports both of these boards, as well as Arduino, BeagleBone Black, Raspberry Pi, Edison, Galileo, Electric Imp, Tessel and many other device platforms, so you can use the framework with your device of choice. The Platform Support page lists the features currently supported on each device. Any device with Analog Read support is suitable for this project.

Setting up the Particle board

Make sure you have a recent version of Node.js installed. We're using npm (Node Package Manager) to install the tools and libraries required for this project. Install the Particle command line tools with npm (via the Terminal on Mac, or Command Prompt on Windows):

npm install -g particle-cli

Particle boards need to be registered with the Particle Cloud service, and you must also configure your device to connect to your wireless network. So the first thing you'll need to do is connect it to your computer via USB and run the setup program. See the Particle Setup docs. The LED on the Particle Core should be blinking blue when you plug it in for the first time (if not, press and hold the mode button). Sign up for a Particle Account and then follow the prompts to set up your device via the Particle website, or if you prefer you can run the setup program from the command line. You'll be prompted to sign in and then to enter your Wifi SSID and password:

particle setup

After setup is complete, the Particle Core can be disconnected from your computer and powered by batteries or a separate USB power supply - we will connect to the board wirelessly from now on.

Flashing the board

We also need to flash the board with the Voodoospark firmware. Use the CLI tool to sign in to the Particle Cloud and list your devices to find out the ID of your board:

particle cloud login
particle list

Download the firmware.cpp file and use the flash command to write the Voodoospark firmware to your device:

particle cloud flash <Your Device ID> voodoospark.cpp

See the Voodoospark Getting Started page for more details. You should see the following message:

Flash device OK: Update started

The LED on the board will flash magenta. This will take about a minute, and will change back to green when the board is ready to use.

Creating a Johnny-Five project

We'll be installing a few dependencies from npm, so to help manage these, we'll set up our project as an npm package. Run the init command, filling in the project details at the prompts.
npm init

After init has completed, you'll have a package.json file with the metadata that you entered about your project. Dependencies for the project can also be saved to this file. We'll use the --save command line argument to npm when installing packages to persist dependencies to our package.json file. We'll need the Johnny-Five npm module as well as the Particle-IO IO Plugin for Johnny-Five.

npm install johnny-five --save
npm install particle-io --save

Johnny-Five uses the Firmata protocol to communicate with Arduino-based devices. IO Plugins provide Firmata-compatible interfaces to allow Johnny-Five to communicate with non-Arduino-based devices. The Particle-IO Plugin allows you to run Node.js applications on your computer that communicate with the Particle board over Wifi, so that you can read from sensors or control components that are connected to the board. When you connect to your board, you'll need to specify your Device ID and your Particle API Access Token. You can look up your access token under Settings in the Particle IDE. It's a good idea to copy these to environment variables rather than hardcoding them into your programs. If you are on Mac or Linux, you can create a file called .particlerc then run source .particlerc:

export PARTICLE_TOKEN=<Your Token Here>
export PARTICLE_DEVICE_ID=<Your Device ID Here>

Reading from a sensor

Now we're ready to get our hands dirty! Let's confirm that we can communicate with our Particle board using Johnny-Five, by taking a reading from our soil moisture sensor. Using jumper wires, connect one pin on the soil sensor to pin A0 (analog pin 0) and the other to GND (ground). The probes go into the soil in your plant pot. Create a JavaScript file named sensor.js using your preferred text editor or IDE. We use require statements to include the Johnny-Five module and the Particle-IO plugin. We're creating an instance of the Particle IO plugin (with our token and deviceId read from our environment variables) and providing this as the io config option when creating our Board object.

var five = require("johnny-five");
var Particle = require("particle-io");

var board = new five.Board({
  io: new Particle({
    token: process.env.PARTICLE_TOKEN,
    deviceId: process.env.PARTICLE_DEVICE_ID
  })
});

board.on("ready", function() {
  console.log("CONNECTED");
  var soilSensor = new five.Sensor("A0");
  soilSensor.on("change", function() {
    console.log(this.value);
  });
});

After the board is ready, we create a Sensor object to monitor changes on pin A0, and then print the value from the sensor to the Node.js console whenever it changes. Run the program using Node.js:

node sensor.js

Try pulling the sensor out of the soil or watering your plant to make the sensor reading change. See the Sensor API for more methods that you can use with Sensors. You can hit control-C to end the program.
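As a small optional extension (not part of the original walkthrough), you could log readings for later graphing. The sketch below assumes the soilSensor instance from sensor.js above; the file name and CSV format are arbitrary choices:

var fs = require("fs");

// Append a timestamped reading to a CSV file on every change event
soilSensor.on("change", function() {
  var line = new Date().toISOString() + "," + this.value + "\n";
  fs.appendFile("soil-readings.csv", line, function(err) {
    if (err) console.error("Could not write reading:", err);
  });
});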

Working with Rails – Setting up and connecting to a database

Packt
22 Oct 2009
6 min read
In this article, authors Elliot Smith and Rob Nichols explain the setup of a new Rails application and how to integrate it with other data sources. Specifically, this article focuses on turning the abstract data structure for Intranet into a Rails application. This requires a variety of concepts and tools, namely:

- The structure of a Rails application
- Initializing an application using the rails command
- Associating Rails with a database
- The built-in utility scripts included with each application
- Using migrations to maintain a database
- Building models and validating them
- Using the Rails console to manually test models
- Automated testing of models using Test::Unit
- Hosting a project in a Subversion repository
- Importing data into the application using scripts

In this article, we'll focus on the first three concepts.

The World According to Rails

To understand how Rails applications work, it helps to get under its skin: find out what motivated its development, and the philosophy behind it. The first thing to grasp is that Rails is often referred to as opinionated software (see http://www.oreillynet.com/pub/a/network/2005/08/30/ruby-rails-davidheinemeier-hansson.html). It encapsulates an approach to web application development centered on good practice, emphasizing automation of common tasks and minimization of effort. Rails helps developers make good choices, and even removes the need to make choices where they are just distractions. How is this possible? It boils down to a couple of things:

- Use of a default design for applications. By making it easy to build applications using the Model-View-Controller (MVC) architecture, Rails encourages separation of an application's database layer, its control logic, and the user interface. Rails' implementation of the MVC pattern is the key to understanding the framework as a whole.
- Use of conventions instead of explicit configuration. By encouraging use of a standard directory layout and file naming conventions, Rails reduces the need to configure relationships between the elements of the MVC pattern. Code generators are used to great effect in Rails, making it easy to follow the conventions.

We'll see each of these features in more detail in the next two sections.

Model-View-Controller Architecture

The original aim of the MVC pattern was to provide an architecture to bridge the gap between human and computer models of data. Over time, MVC has evolved into an architecture which decouples the components of an application, so that one component (e.g. the control logic) can be changed with minimal impact on the other components (e.g. the interface). Explaining MVC makes more sense in the context of "traditional" web applications. When using languages such as PHP or ASP, it is tempting to mix application logic with database-access code and HTML generation. (Ruby, itself, can also be used in this way to write CGI scripts.)
To highlight how a traditional web application works, here's a pseudo-code example:

    # define a file to save email addresses into
    email_addresses_file = 'emails.txt'
    # get the email_address variable from the querystring
    email_address = querystring['email_address']
    # CONTROLLER: switch action of the script based on whether
    # email address has been supplied
    if '' == email_address
        # VIEW: generate HTML form to accept user input which
        # posts back to this script
        content = "<form method='post' action='" + self + "'>
        <p>Email address: <input type='text' name='email_address'/></p>
        <p><input type='submit' value='Save'/></p>
        </form>"
    else
        # VIEW: generate HTML to confirm data submission
        content = "<p>Your email address is " + email_address + "</p>"
        # MODEL: persist data
        if not file_exists(email_addresses_file)
            create_file(email_addresses_file)
        end if
        write_to_file(email_addresses_file, email_address)
    end if
    print "<html><head><title>Email manager</title></head>
    <body>" + content + "</body></html>"

The highlighted comments indicate how the code can be mapped to elements of the MVC architecture:

- Model components handle an application's state. Typically, the model does this by putting data into some kind of long-term storage (e.g. database, filesystem). Models also encapsulate business logic, such as data validation rules. Rails uses ActiveRecord as its model layer, enabling data handling in a variety of relational database back-ends. In the example script, the model role is performed by the section of code which saves the email address into a text file.
- View components generate the user interface (e.g. HTML, XML). Rails uses ActionView (part of the ActionPack library) to manage generation of views. The example script has sections of code to create an appropriate view, generating either an HTML form for the user to enter their email address, or a confirmation message acknowledging their input.
- The Controller orchestrates between the user and the model, retrieving data from the user's request and manipulating the model in response (e.g. creating objects, populating them with data, saving them to a database). In the case of Rails, ActionController (another part of the ActionPack library) is used to implement controllers. These controllers handle all requests from the user, talk to the model, and generate appropriate views. In the example script, the code which retrieves the submitted email address is performing the controller role. A conditional statement is used to generate an appropriate response, dependent on whether an email address was supplied or not.

In a traditional web application, the three broad classes of behavior described above are frequently mixed together. In a Rails application, these behaviors are separated out, so that a single layer of the application (the model, view, or controller) can be altered with minimal impact on the other layers. This gives a Rails application the right mix of modularity, flexibility, and power. Next, we'll see another piece of what makes Rails so powerful: the idea of using conventions to create associations between models, views, and controllers. Once you can see how this works, the Rails implementation of MVC makes more sense: we'll return to that topic in the section Rails and MVC.
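Before moving on, it may help to see the same responsibilities pulled apart explicitly. The following rough sketch uses JavaScript (for consistency with the other articles in this collection) rather than Rails code; all names here are hypothetical, and the point is only the separation of roles:

    // MODEL: owns persistence and validation of email addresses
    var fs = require('fs');
    var EmailModel = {
      file: 'emails.txt',
      save: function(address) {
        if (!address) { throw new Error('address is required'); }
        fs.appendFileSync(this.file, address + '\n');
      }
    };

    // VIEW: only knows how to render HTML
    var EmailView = {
      form: function() {
        return "<form method='post'>" +
               "<p>Email address: <input type='text' name='email_address'/></p>" +
               "<p><input type='submit' value='Save'/></p></form>";
      },
      confirmation: function(address) {
        return "<p>Your email address is " + address + "</p>";
      }
    };

    // CONTROLLER: reads the request and coordinates model and view
    function emailController(request) {
      var address = request.query.email_address;
      if (!address) { return EmailView.form(); }
      EmailModel.save(address);
      return EmailView.confirmation(address);
    }

With this split, swapping the text file for a database only touches EmailModel, and restyling the form only touches EmailView, which is exactly the decoupling the MVC pattern is after.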

Understanding Backbone

Packt
02 Sep 2013
12 min read
(For more resources related to this topic, see here.)

Backbone.js is a lightweight JavaScript framework that is based on the Model-View-Controller (MVC) pattern and allows developers to create single-page web applications. With Backbone, it is possible to update a web page quickly using the REST approach, with a minimal amount of data transferred between a client and a server. Backbone.js is becoming more popular day by day and is being used on a large scale for web applications and IT startups; some of them are as follows:

- Groupon Now!: The team decided that their first product would be AJAX-heavy but should still be linkable and shareable. Though they were completely new to Backbone, they found that its learning curve was incredibly quick, so they were able to deliver the working product in just two weeks.
- Foursquare: This used the Backbone.js library to create model classes for the entities in foursquare (for example, venues, check-ins, and users). They found that Backbone's model classes provide a simple and lightweight mechanism to capture an object's data and state, complete with the semantics of classical inheritance.
- LinkedIn mobile: This used Backbone.js to create its next-generation HTML5 mobile web app. Backbone made it easy to keep the app modular, organized, and extensible, so it was possible to program the complexities of LinkedIn's user experience. Moreover, they are using the same code base in their mobile applications for iOS and Android platforms.
- WordPress.com: This is a SaaS version of WordPress. It uses Backbone.js models, collections, and views in its notification system, and is integrating Backbone.js into the Stats tab and into other features throughout the home page.
- Airbnb: This is a community marketplace for users to list, discover, and book unique spaces around the world. Its development team has used Backbone in many of its latest products. Recently, they rebuilt a mobile website with Backbone.js and Node.js tied together with a library named Rendr.

You can visit the following link to get acquainted with other usage examples of Backbone.js: http://backbonejs.org/#examples

Backbone.js was started by Jeremy Ashkenas from DocumentCloud in 2010 and is now being used and improved by lots of developers all over the world using Git, the distributed version control system. In this article, we are going to provide some practical examples of how to use Backbone.js, and we will structure a design for a program named Billing Application by following the MVC and Backbone patterns. Reading this article is especially useful if you are new to developing with Backbone.js.

Designing an application with the MVC pattern

MVC is a design pattern that is widely used in user-facing software, such as web applications. It is intended for splitting data and representing it in a way that makes it convenient for user interaction.
To understand what it does, consider the following:

- Model: This contains data and provides the business logic used to run the application
- View: This presents the model to the user
- Controller: This reacts to user input by updating the model and the view

There could be some differences in the MVC implementation, but in general it conforms to the following scheme:

Worldwide practice shows that the use of the MVC pattern provides various benefits to the developer:

- Following the separation of concerns paradigm, which splits an application into independent parts, it is easier to modify or replace components
- It achieves code reusability by rendering a model in different views, without the need to implement model functionality in each view
- It requires less training and has a quicker startup time for new developers within an organization

To have a better understanding of the MVC pattern, we are going to design a Billing Application. We will refer to this design throughout the book when we are learning specific topics. Our Billing Application will allow users to generate invoices, manage them, and send them to clients. According to worldwide practice, an invoice should contain a reference number, date, information about the buyer and seller, bank account details, a list of provided products or services, and an invoice sum. Let's have a look at the following screenshot to understand how an invoice appears:

How to do it...

Let's follow the ensuing steps to design an MVC structure for the Billing Application:

First, let's write down a list of functional requirements for this application. We assume that the end user may want to be able to do the following:

- Generate an invoice
- E-mail the invoice to the buyer
- Print the invoice
- See a list of existing invoices
- Manage invoices (create, read, update, and delete)
- Update an invoice status (draft, issued, paid, and canceled)
- View a yearly income graph and other reports

To simplify the process of creating multiple invoices, the user may want to manage information about buyers and his personal details in a specific part of the application before he/she creates an invoice. So, our application should provide additional functionalities to the end user, such as the following:

- The ability to see a list of buyers and use it when generating an invoice
- The ability to manage buyers (create, read, update, and delete)
- The ability to see a list of bank accounts and use it when generating an invoice
- The ability to manage his/her own bank accounts (create, read, update, and delete)
- The ability to edit personal details and use them when generating an invoice

Of course, we may want to have more functions, but this is enough for demonstrating how to design an application using the MVC pattern.

Next, we architect the application using the MVC pattern. After we have defined the features of our application, we need to understand what is more related to the model (business logic) and what is more related to the view (presentation). Let's split the functionality into several parts.

Then, we learn how to define models. Models present data and provide data-specific business logic. Models can be related to each other. In our case, they are as follows:

- InvoiceModel
- InvoiceItemModel
- BuyerModel
- SellerModel
- BankAccountModel

Then, we will define collections of models. Our application allows users to operate on a number of models, so they need to be organized into a special iterable object named Collection.
We need the following collections:

- InvoiceCollection
- InvoiceItemCollection
- BuyerCollection
- BankAccountCollection

Next, we define views. Views present a model or a collection to the application user. A single model or collection can be rendered to be used by multiple views. The views that we need in our application are as follows:

- EditInvoiceFormView
- InvoicePageView
- InvoiceListView
- PrintInvoicePageView
- EmailInvoiceFormView
- YearlyIncomeGraphView
- EditBuyerFormView
- BuyerPageView
- BuyerListView
- EditBankAccountFormView
- BankAccountPageView
- BankAccountListView
- EditSellerInfoFormView
- ViewSellerInfoPageView
- ConfirmationDialogView

Finally, we define a controller. A controller allows users to interact with an application. In MVC, each view can have a different controller that is used to do the following:

- Map a URL to a specific view
- Fetch models from a server
- Show and hide views
- Handle user input

Defining business logic with models and collections

Now, it is time to design the business logic for the Billing Application using the MVC and OOP approaches. In this recipe, we are going to define an internal structure for our application with model and collection objects. Although a model represents a single object, a collection is a set of models that can be iterated, filtered, and sorted. Relations between models and collections in the Billing Application conform to the following scheme:

How to do it...

For each model, we are going to create two tables: one for properties and another for methods.

First, we define the BuyerModel properties:

- id: Integer, required, unique
- name: Text, required
- address: Text, required
- phoneNumber: Text, optional

Then, we define the SellerModel properties:

- id: Integer, required, unique
- name: Text, required
- address: Text, required
- phoneNumber: Text, optional
- taxDetails: Text, required

After this, we define the BankAccountModel properties:

- id: Integer, required, unique
- beneficiary: Text, required
- beneficiaryAccount: Text, required
- bank: Text, optional
- SWIFT: Text, required
- specialInstructions: Text, optional

We define the InvoiceItemModel properties:

- id: Integer, required, unique
- deliveryDate: Date, required
- description: Text, required
- price: Decimal, required
- quantity: Decimal, required

Next, we define the InvoiceItemModel methods. We don't need to store the item amount in the model, because it always depends on the price and the quantity, so it can be calculated:

- calculateAmount: takes no arguments, returns a Decimal

Now, we define the InvoiceModel properties:

- id: Integer, required, unique
- referenceNumber: Text, required
- date: Date, required
- bankAccount: Reference, required
- items: Collection, required
- comments: Text, optional
- status: Integer, required

We define the InvoiceModel methods. The invoice amount can easily be calculated as the sum of the invoice item amounts:

- calculateAmount: takes no arguments, returns a Decimal

Finally, we define the collections. In our case, they are InvoiceCollection, InvoiceItemCollection, BuyerCollection, and BankAccountCollection. They are used to store models of an appropriate type and provide some methods to add/remove models to/from the collections.

How it works...

Models in Backbone.js are implemented by extending Backbone.Model, and collections are made by extending Backbone.Collection. To implement relations between models and collections, we can use special Backbone extensions.
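To make this concrete, here is a rough sketch of how a few of these definitions might look in Backbone.js code. The class names come from the design above; the implementation details are illustrative assumptions rather than the final application code:

    var InvoiceItemModel = Backbone.Model.extend({
      defaults: {
        deliveryDate: null,
        description: '',
        price: 0,
        quantity: 0
      },
      // the amount is derived, so it is calculated rather than stored
      calculateAmount: function() {
        return this.get('price') * this.get('quantity');
      }
    });

    var InvoiceItemCollection = Backbone.Collection.extend({
      model: InvoiceItemModel
    });

    var InvoiceModel = Backbone.Model.extend({
      // the invoice amount is the sum of its item amounts; reduce is
      // one of the Underscore methods proxied on Backbone collections
      calculateAmount: function() {
        return this.get('items').reduce(function(sum, item) {
          return sum + item.calculateAmount();
        }, 0);
      }
    });

    // usage sketch
    var items = new InvoiceItemCollection([
      { description: 'Consulting', price: 100, quantity: 2 }
    ]);
    var invoice = new InvoiceModel({ items: items });
    invoice.calculateAmount(); // 200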
To learn more about object properties, methods, and OOP programming in JavaScript, you can refer to the following resource: https://developer.mozilla.org/en-US/docs/JavaScript/Introduction_to_Object-Oriented_JavaScript

Modeling an application's behavior with views and a router

Unlike traditional MVC frameworks, Backbone does not provide any distinct object that implements controller functionality. Instead, the controller is diffused between Backbone.Router and Backbone.View, and the following is done:

- A router handles URL changes and delegates application flow to a view. Typically, the router fetches a model from the storage asynchronously. When the model is fetched, it triggers a view update.
- A view listens to DOM events and either updates a model or navigates the application through a router.

The following diagram shows a typical workflow in a Backbone application:

How to do it...

Let's follow the ensuing steps to understand how to define basic views and a router in our application:

First, we need to create wireframes for the application. Let's draw a couple of wireframes in this recipe:

- The Edit Invoice page allows users to select a buyer, to select the seller's bank account from the lists, to enter the invoice's date and a reference number, and to build a table of shipped products and services.
- The Preview Invoice page shows how the final invoice will be seen by a buyer. This display should render all the information we have entered in the Edit Invoice form. Buyer and seller information can be looked up in the application storage. The user has the option to either go back to the Edit display or save this invoice.

Then, we will define the view objects. According to the previous wireframes, we need to have two main views: EditInvoiceFormView and PreviewInvoicePageView. These views will operate with InvoiceModel; it refers to other objects, such as BankAccountModel and InvoiceItemCollection.

Now, we will split the views into subviews. For each item in the Products or Services table, we may want to recalculate the Amount field depending on what the user enters in the Price and Quantity fields. One way to do this is to re-render the entire view whenever the user changes a value in the table; however, this is not efficient, and it takes a significant amount of computing power. We don't need to re-render the entire view if we only want to update a small part of it. It is better to split the big view into different, independent pieces, such as subviews, that are able to render only a specific part of the big view. In our case, we can have the following views:

As we can see, EditInvoiceItemTableView and PreviewInvoiceItemTableView render InvoiceItemCollection with the help of the additional views EditInvoiceItemView and PreviewInvoiceItemView, which render InvoiceItemModel. Such separation allows us to re-render an item inside a collection when it is changed.

Finally, we will define URL paths that will be associated with a corresponding view. In our case, we can have several URLs to show different views, for example:

- /invoice/add
- /invoice/:id/edit
- /invoice/:id/preview

Here, we assume that the Edit Invoice view can be used for either creating a new invoice or editing an existing one. In the router implementation, we can load this view and show it on specific URLs.

How it works...

The Backbone.View object can be extended to create our own view that will render model data. In a view, we can define handlers for user actions, such as data input and keyboard or mouse events.
In the application, we can have a single Backbone.Router object that allows users to navigate through the application by changing the URL in the address bar of the browser. The router object contains a list of available URLs and callbacks. In a callback function, we can trigger the rendering of a specific view associated with a URL. If we want a user to be able to jump from one view to another, we can have him/her either click on regular HTML links associated with a view or navigate to the application programmatically.
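To make the router's role concrete, here is an illustrative sketch for the invoice URLs above. It assumes the views from this recipe, that each view's render method returns the view itself (the usual Backbone convention), and that InvoiceModel knows its server URL so that fetch can load it; the showView helper and the #app container are hypothetical:

    var AppRouter = Backbone.Router.extend({
      routes: {
        'invoice/add':         'addInvoice',
        'invoice/:id/edit':    'editInvoice',
        'invoice/:id/preview': 'previewInvoice'
      },
      addInvoice: function() {
        // the edit form doubles as the "create" form with a fresh model
        this.showView(new EditInvoiceFormView({ model: new InvoiceModel() }));
      },
      editInvoice: function(id) {
        // fetch the model asynchronously, then trigger a view update
        var invoice = new InvoiceModel({ id: id });
        var router = this;
        invoice.fetch({
          success: function() {
            router.showView(new EditInvoiceFormView({ model: invoice }));
          }
        });
      },
      previewInvoice: function(id) {
        var invoice = new InvoiceModel({ id: id });
        var router = this;
        invoice.fetch({
          success: function() {
            router.showView(new PreviewInvoicePageView({ model: invoice }));
          }
        });
      },
      // hypothetical helper: swap the currently visible view
      showView: function(view) {
        if (this.currentView) { this.currentView.remove(); }
        this.currentView = view;
        $('#app').html(view.render().el);
      }
    });

    new AppRouter();
    Backbone.history.start();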

Creating different font files and using web fonts

Packt
16 Sep 2013
12 min read
(For more resources related to this topic, see here.)

Creating different font files

In this recipe, we will learn how to create or get an original font file, and how to generate the different formats for use in different browsers (Embedded Open Type, Open Type, True Type Font, Web Open Font Format, and SVG font).

Getting ready

To get the original file of the font created during this recipe, in addition to the generated font formats and the full source code of the project FontCreation, please refer to the receipe2 project folder.

How to do it...

The following steps are performed for creating different font files:

Firstly, we will get an original TTF font file. There are two different ways to get fonts:

The first method is by downloading one from specialized websites. Both free and commercial solutions can be found with a wide variety of beautiful fonts. The following are a few sites for downloading free fonts: Google Fonts, Font Squirrel, Dafont, ffonts, Jokal, fontzone, STIX, and Fontex. Here are a few sites for downloading commercial fonts: Typekit, Font Deck, and Font Spring. We will consider the example of Fontex, as shown in the following screenshot. There are a variety of free fonts. You can visit the website at http://www.fontex.org/.

The second method is by creating your own font and then generating a TTF file. There are a lot of font generators on the Web. We can find online generators, or follow the professionals by scanning handwritten typography and importing it into Adobe Illustrator to change it into vector-based letters or symbols. For newbies, I recommend trying Fontstruct (http://fontstruct.com). It is a WYSIWYG flash editor that will help you create your first font file, as shown in the following screenshot:

As you can see, we were trying to create the letter S using a grid and some different forms. After completing the font creation, we can preview it and then download the TTF file. The file is in the receipe2 project folder. The following screenshot is an example of a font we have created on the run:

Now we have to generate the rest of the file formats in order to ensure maximum compatibility with common browsers. We highly recommend the use of the Font Squirrel webfont generator (http://www.fontsquirrel.com/tools/webfont-generator). This online tool helps to create fonts for @font-face by generating different font formats. All we need to do is to upload the original file (optionally adding the same font's variants: bold, italic, or bold-italic), select the output formats, add some optimizations, and finally download the package.
It is shown in the following screenshot:

The following code explains how to use this font:

    <!DOCTYPE html>
    <html>
    <head>
    <title>My first @font-face demo</title>
    <style type="text/css">
    @font-face {
      font-family: 'font_testregular';
      src: url('font_test-webfont.eot');
      src: url('font_test-webfont.eot?#iefix') format('embedded-opentype'),
           url('font_test-webfont.woff') format('woff'),
           url('font_test-webfont.ttf') format('truetype'),
           url('font_test-webfont.svg#font_testregular') format('svg');
      font-weight: normal;
      font-style: normal;
    }

Normal font usage:

    h1, p {
      font-family: 'font_testregular', Helvetica, Arial, sans-serif;
    }
    h1 {
      font-size: 45px;
    }
    p:first-letter {
      font-size: 100px;
      text-decoration: wave;
    }
    p {
      font-size: 18px;
      line-height: 27px;
    }
    </style>

Font usage in canvas:

    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
    <script language="javascript" type="text/javascript">
    var x = 30, y = 60;
    function generate() {
      var canvas = $('canvas')[0],
          ctx = canvas.getContext('2d');
      var t = 'font_testregular';
      var c = 'red';
      var v = ' sample text via canvas';
      ctx.font = '52px "' + t + '"';
      ctx.fillStyle = c;
      ctx.fillText(v, x, y);
    }
    </script>
    </head>
    <body onload="generate();">
    <h1>Header sample</h1>
    <p>Sample text with lettrine effect</p>
    <canvas height="800px" width="500px">
    Your browser does not support the CANVAS element.
    Try the latest Firefox, Google Chrome, Safari or Opera.
    </canvas>
    </body>
    </html>

How it works...

This recipe takes us through getting an original TTF file:

Font download: When downloading a font (either free or commercial), we have to pay close attention to the terms of use. Sometimes you are not allowed to use these fonts on the web, and are only allowed to use them locally.

Font creation: During this process, we have to pay attention to some directives. We have to create glyphs for all the needed alphabets (uppercase and lowercase), numbers, and symbols, to avoid font incompatibility. We have to take care of the spacing between glyphs and, eventually, variations and ligatures. A special creation process is reserved for right-to-left written languages.

Font formats generation: Font Squirrel is a very good online tool for generating the most common formats to handle cross-browser compatibility. It is recommended that we optimize the font ourselves via expert mode. We have the possibility of fixing some issues during font creation, such as missing glyphs, x-height matching, and glyph spacing.

Font usage: We will go through the following kinds of font usage:

Normal font usage: We used the same method as already adopted via font-family; web-safe fonts are also applied:

    h1, p {
      font-family: 'font_testregular', Helvetica, Arial, sans-serif;
    }

Font usage in canvas: The canvas is an HTML5 tag that dynamically renders bitmap images via scripts, creating 2D shapes. In order to generate an image based on fonts, we will create the canvas tag first. An alternative text will be displayed if canvas is not supported by the browser:

    <canvas height="800px" width="500px">
    Your browser does not support the CANVAS element.
    Try the latest Firefox, Google Chrome, Safari or Opera.
    </canvas>

We will now use the jQuery library in order to generate the canvas output. An onload function will be initiated to create the content of this tag:

    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>

In the following function, we create a variable ctx, which is an instance of the canvas 2D context obtained via canvas.getContext('2d').
We also define the font family (using t as a variable), the font size, the text to display (using v as a variable), and the color (using c as a variable). These properties will be used as follows:

    <script language="javascript" type="text/javascript">
    var x = 30, y = 60;
    function generate() {
      var canvas = $('canvas')[0],
          ctx = canvas.getContext('2d');
      var t = 'font_testregular';
      var c = 'red';
      var v = ' sample text via canvas';

This is for the font size and family. Here the font-size is 52px and the font-family is font_testregular:

      ctx.font = '52px "' + t + '"';

This is for the color, set via fillStyle:

      ctx.fillStyle = c;

Here we establish both the text to display and the axis coordinates, where x is the horizontal position and y is the vertical one:

      ctx.fillText(v, x, y);

Using Web fonts

In this recipe, you will learn how to use fonts hosted on distant servers, for reasons such as support services and special loading scripts. A lot of solutions are widely available on the web, such as Typekit, Google Fonts, Ascender, Fonts.com Web Fonts, and Fontdeck. In this task, we will be using Google Fonts and its special JavaScript open source library, the WebFont loader.

Getting ready

Please refer to the project WebFonts to get the full source code.

How to do it...

We will get through four steps. First, let us configure the link tag:

    <link rel="stylesheet" id="linker" type="text/css"
    href="http://fonts.googleapis.com/css?family=Mr+De+Haviland">

Then we will set up the WebFont loader:

    <script type="text/javascript">
    WebFontConfig = {
      google: {
        families: [ 'Tangerine' ]
      }
    };
    (function() {
      var wf = document.createElement('script');
      wf.src = ('https:' == document.location.protocol ? 'https' : 'http') +
        '://ajax.googleapis.com/ajax/libs/webfont/1/webfont.js';
      wf.type = 'text/javascript';
      wf.async = 'true';
      var s = document.getElementsByTagName('script')[0];
      s.parentNode.insertBefore(wf, s);
    })();
    </script>
    <style type="text/css">
    .wf-loading p#firstp {font-family: serif}
    .wf-inactive p#firstp {font-family: serif}
    .wf-active p#firstp {font-family: 'Tangerine', serif}

Next we will write the import command:

    @import url(http://fonts.googleapis.com/css?family=Bigelow+Rules);

Then we will cover font usage:

    h1 {
      font-size: 45px;
      font-family: "Bigelow Rules";
    }
    p {
      font-family: "Mr De Haviland";
      font-size: 40px;
      text-align: justify;
      color: blue;
      padding: 0 5px;
    }
    </style>
    </head>
    <body>
    <div id="container">
    <h1>This H1 tag's font was used via @import command</h1>
    <p>This font was imported via a Stylesheet link</p>
    <p id="firstp">This font was created via WebFont loader
    and managed by wf, a script generated from webfonts.js.<br />
    Loading time will be managed by the CSS properties:
    <i>.wf-loading, .wf-inactive and .wf-active</i></p>
    </div>
    </body>
    </html>

How it works...

In this recipe, for educational purposes, we used the following ways to embed the font in the source code: the link tag, the WebFont loader, and the import command.

The link tag: A simple link tag to a style sheet is used, referring to the address already created:

    <link rel="stylesheet" type="text/css"
    href="http://fonts.googleapis.com/css?family=Mr+De+Haviland">

The WebFont loader: It is a JavaScript library developed by Google and Typekit. It grants advanced control options over the font loading process and exceptions, and it lets you use multiple web font providers. In the following script, we can identify the font we used, Tangerine, and the link to the predefined address of the Google APIs, with the word google:

    WebFontConfig = {
      google: { families: [ 'Tangerine' ] }
    };

We now will create wf, which is an instance of an asynchronous JavaScript element.
This instance is issued from the Ajax Google API:

    var wf = document.createElement('script');
    wf.src = ('https:' == document.location.protocol ? 'https' : 'http') +
      '://ajax.googleapis.com/ajax/libs/webfont/1/webfont.js';
    wf.type = 'text/javascript';
    wf.async = 'true';
    var s = document.getElementsByTagName('script')[0];
    s.parentNode.insertBefore(wf, s);

We can control fonts during and after loading by using specific class names. In this particular case, only the p tag with the ID firstp will be processed during and after font loading.

During loading, we use the class .wf-loading. We can use a safe font (for example, serif) instead of the browser's default until loading is complete, as follows:

    .wf-loading p#firstp {
      font-family: serif;
    }

After loading is complete, we will usually use the font that we were importing earlier. We can also add a safe font for older browsers:

    .wf-active p#firstp {
      font-family: 'Tangerine', serif;
    }

Loading failure: In case we fail to load the font, we can specify a safe font to avoid falling back to the browser's default font:

    .wf-inactive p#firstp {
      font-family: serif;
    }

The import command: It is the easiest way to link to the fonts:

    @import url(http://fonts.googleapis.com/css?family=Bigelow+Rules);

Font usage: We will use the fonts as we did already, via the font-family property:

    h1 {
      font-family: "Bigelow Rules";
    }
    p {
      font-family: "Mr De Haviland";
    }

There's more...

The WebFont loader has the ability to embed fonts from multiple web font providers. It has some predefined providers in the script, such as Google, Typekit, Ascender, Fonts.com Web Fonts, and Fontdeck. For example, the following is the specific source code for Typekit and Ascender:

    WebFontConfig = {
      typekit: {
        id: 'TypekitId'
      }
    };

    WebFontConfig = {
      ascender: {
        key: 'AscenderKey',
        families: ['AscenderSans:bold,bolditalic,italic,regular']
      }
    };

For the font providers that are not listed above, a custom module can handle the loading of the specific style sheet:

    WebFontConfig = {
      custom: {
        families: ['OneFont', 'AnotherFont'],
        urls: ['http://myotherwebfontprovider.com/stylesheet1.css',
               'http://yetanotherwebfontprovider.com/stylesheet2.css']
      }
    };

For more details and options of the WebFont loader script, you can visit the following link: https://developers.google.com/fonts/docs/webfont_loader

To download this API, you may access the following URL: https://github.com/typekit/webfontloader

How to generate the link to the font?

The URL used to import the font in every method (the link tag, the WebFont loader, and the import command) is composed of the Google Fonts API base URL (http://fonts.googleapis.com/css) and the family parameter, including one or more font names, ?family=Tangerine. Multiple fonts are separated with a pipe character (|) as follows:

    ?family=Tangerine|Inconsolata|Droid+Sans

Optionally, we can add subsets or also specify a style for each font:

    Cantarell:italic|Droid+Serif:bold&subset=latin

Browser-dependent output

The Google Fonts API serves a generated style sheet specific to the client, via the browser's request. The response is relative to the browser. For example, the output for Firefox will be:

    @font-face {
      font-family: 'Inconsolata';
      src: local('Inconsolata'),
        url('http://themes.googleusercontent.com/fonts/font?kit=J_eeEGgHN8Gk3Eud0dz8jw')
        format('truetype');
    }

This method lowers the loading time, because the generated style sheet is tailored to the client's browser. No multiformat font files are needed, because the Google API will generate them automatically.
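In addition to the .wf-* CSS classes used above, the WebFont loader accepts JavaScript event callbacks in WebFontConfig. The callback names below come from the webfontloader documentation; the handler bodies are illustrative only:

    WebFontConfig = {
      google: {
        families: [ 'Tangerine' ]
      },
      // fired when all requested fonts have rendered
      active: function() {
        console.log('All web fonts are active');
      },
      // fired if the fonts fail to load in time
      inactive: function() {
        console.log('Falling back to safe fonts');
      },
      // fired once per font as it becomes available
      fontactive: function(familyName, fvd) {
        console.log(familyName + ' (' + fvd + ') is ready');
      }
    };

These callbacks are handy when a page needs to do more than restyle text, for example re-running the canvas rendering from the previous recipe once the custom font is actually available.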
Summary

In this way, we have learned how to create different font formats, such as Embedded Open Type, Open Type, True Type Font, Web Open Font Format, and SVG font, and how to use different web font services, such as Typekit, Google Fonts, Ascender, Fonts.com Web Fonts, and Fontdeck.

Resources for Article:

Further resources on this subject:

- So, what is Markdown? [Article]
- Building HTML5 Pages from Scratch [Article]
- HTML5: Generic Containers [Article]

YUI 2.X: Using Event Component

Packt
14 Dec 2010
7 min read
Yahoo! User Interface Library 2.x Cookbook

Over 70 simple, incredibly effective recipes for taking control of the Yahoo! User Interface Library like a pro:

- Easily develop feature-rich internet applications to interact with the user using various built-in components of the YUI library
- Simple and powerful recipes explaining how to use and implement YUI 2.x components
- Gain a thorough understanding of the YUI tools
- Plenty of example code to help you improve your coding and productivity with the YUI Library
- Hands-on solutions that take a practical approach to recipes

In this article, you will learn how to use YUI to handle JavaScript events, what special events YUI has to improve the functionality of some JavaScript events, and how to write custom events for your own application.

Using YUI to attach JavaScript event listeners

When attaching events in JavaScript, most browsers use the addEventListener function, but the developers of IE use a function called attachEvent. Legacy browsers do not support either function, but instead require developers to attach functions directly to element objects using the 'on' + eventName property (for example, myElement.onclick = function(){...}). Additionally, the execution context of event callback functions varies depending on how the event listener is attached. The Event component normalizes all the cross-browser issues, fixes the execution context of the callback function, and provides additional event improvements. This recipe will show how to attach JavaScript event listeners using YUI.

How to do it...

Attach a click event to an element:

    var myElement = YAHOO.util.Dom.get('myElementId');
    var fnCallback = function(e) {
      alert("myElementId was clicked");
    };
    YAHOO.util.Event.addListener(myElement, 'click', fnCallback);

Attach a click event to an element by its ID:

    var fnCallback = function(e) {
      alert("myElementId was clicked");
    };
    YAHOO.util.Event.addListener('myElementId', 'click', fnCallback);

Attach a click event to several elements at once:

    var ids = ["myElementId1", "myElementId2", "myElementId3"];
    var fnCallback = function(e) {
      var targ = YAHOO.util.Event.getTarget(e);
      alert(targ.id + " was clicked");
    };
    YAHOO.util.Event.addListener(ids, 'click', fnCallback);

When attaching event listeners, you can provide an object as the optional fourth argument, to be passed through as the second argument to the callback function:

    var myElem = YAHOO.util.Dom.get('myElementId');
    var fnCallback = function(e, obj) {
      alert(obj);
    };
    var obj = "I was passed through.";
    YAHOO.util.Event.addListener(myElem, 'click', fnCallback, obj);

When attaching event listeners, you can change the execution context of the callback function to the fourth argument, by passing true as the optional fifth argument:

    var myElement = YAHOO.util.Dom.get('myElementId');
    var fnCallback = function(e) {
      alert('My execution context was changed.');
    };
    var ctx = { /* some object to be the execution context of the callback */ };
    YAHOO.util.Event.addListener(myElement, 'click', fnCallback, ctx, true);

How it works...

The addListener function wraps the native event handling functions, normalizing the cross-browser differences. When attaching events, YUI calls the correct browser-specific function, or defaults to legacy event handlers. Before executing the callback function, the Event component must (in some browsers) find the event object and adjust the execution context of the callback function.
The callback function is normalized by wrapping it in a closure function that executes when the browser event fires, thereby allowing YUI to correct the event before actually executing the callback function. In legacy browsers, which can only have one callback function per event type, YUI attaches a callback function that iterates through the listeners attached by the addListener function.

There's more...

The addListener function returns true if the event listener is attached successfully, and false otherwise. If the element to listen on is not available when the addListener function is called, the function will poll the DOM and wait to attach the listener when the element becomes available. Additionally, the Event component keeps a list of all events that it has attached. This list is maintained to simplify removing event listeners, and so that all event listeners can be removed when the end user leaves the page.

Find all events attached to an element:

    var listeners = YAHOO.util.Event.getListeners('myElementId');
    for (var i = 0, j = listeners.length; i < j; i += 1) {
      var listener = listeners[i];
      alert(listener.type);   // event type
      alert(listener.fn);     // callback function
      alert(listener.obj);    // second argument of callback
      alert(listener.adjust); // execution context
    }

Find all events of a certain type attached to an element:

    // only click listeners
    var listeners = YAHOO.util.Event.getListeners('myElementId', 'click');

The garbage collector in JavaScript does not always do a good job cleaning up event handlers. When removing nodes from the DOM, remember to remove any events you have added as well (a brief removal sketch appears at the end of this article).

More on YAHOO.util.Event.addListener

The addListener function has been aliased by the shorter on function:

    var myElement = YAHOO.util.Dom.get('myElementId');
    var fnCallback = function(e) {
      alert("myElementId was clicked");
    };
    YAHOO.util.Event.on(myElement, 'click', fnCallback);

By passing an object in as the optional fifth argument of addListener, instead of a Boolean, you can change the execution context of the callback to that object, while still passing in another object as the optional fourth argument:

    var myElement = YAHOO.util.Dom.get('myElementId');
    var fnCallback = function(e, obj) {
      // this executes in the context of 'ctx'
      alert(obj);
    };
    var obj = "I was passed through.";
    var ctx = { /* some object to be the execution context of the callback */ };
    YAHOO.util.Event.addListener(myElement, 'click', fnCallback, obj, ctx);

Lastly, there is an optional Boolean value that can be provided as the sixth argument of addListener, which causes the callback to execute in the event capture phase, instead of the event bubbling phase. You probably won't ever need to set this value to true, but if you want to learn more about JavaScript event phases, see: http://www.quirksmode.org/js/events_order.html

Event normalization functions

The event object, provided as the first argument of the callback function, contains a variety of values that you may need to use (such as the target element, character code, and so on). YUI provides a collection of static functions that normalize the cross-browser variations of these values. Before trying to use these properties, you should read this recipe, as it walks you through each of those functions.

How to do it...
Fetch the normalized target element of an event:

    var fnCallback = function(e) {
      var targetElement = YAHOO.util.Event.getTarget(e);
      alert(targetElement.id);
    };
    YAHOO.util.Event.on('myElementId', 'click', fnCallback);

Fetch the character code of a key event (also known as the key code):

    var fnCallback = function(e) {
      var charCode = YAHOO.util.Event.getCharCode(e);
      alert(charCode);
    };
    YAHOO.util.Event.on('myElementId', 'keypress', fnCallback);

Fetch the x and y coordinates of a mouse event:

    var fnCallback = function(e) {
      var x = YAHOO.util.Event.getPageX(e);
      var y = YAHOO.util.Event.getPageY(e);
      alert("x-position=" + x + " and y-position=" + y);
    };
    YAHOO.util.Event.on('myElementId', 'click', fnCallback);

Fetch both the x and y coordinates of a mouse event at once:

    var fnCallback = function(e) {
      var point = YAHOO.util.Event.getXY(e);
      alert("x-position=" + point[0] + " and y-position=" + point[1]);
    };
    YAHOO.util.Event.on('myElementId', 'click', fnCallback);

Fetch the normalized related target element of an event:

    var fnCallback = function(e) {
      var targetElement = YAHOO.util.Event.getRelatedTarget(e);
      alert(targetElement.id);
    };
    YAHOO.util.Event.on('myElementId', 'click', fnCallback);

Fetch the normalized time of an event:

    var fnCallback = function(e) {
      var time = YAHOO.util.Event.getTime(e);
      alert(time);
    };
    YAHOO.util.Event.on('myElementId', 'click', fnCallback);

Stop the default behavior, the propagation (bubbling) of an event, or both:

    var fnCallback = function(e) {
      // prevents the event from bubbling up to ancestors
      YAHOO.util.Event.stopPropagation(e);
      // prevents the event's default behavior
      YAHOO.util.Event.preventDefault(e);
      // prevents the event's default behavior and bubbling
      YAHOO.util.Event.stopEvent(e);
    };
    YAHOO.util.Event.on('myElementId', 'click', fnCallback);

How it works...

All of these functions test whether a value exists on the event for each cross-browser variation of a property. The functions then normalize those values and return them. The stopPropagation and preventDefault functions actually modify the equivalent cross-browser property of the event, and delegate the behavior to the browser.
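As promised earlier, cleaning up listeners is straightforward. Both removeListener and purgeElement are part of the YUI 2 Event API; in this sketch, fnCallback stands for a callback attached in an earlier recipe, and the element ID is the placeholder used throughout this article:

    // detach a specific listener (same signature as addListener)
    YAHOO.util.Event.removeListener('myElementId', 'click', fnCallback);

    // remove all click listeners attached to an element;
    // the second argument controls whether child nodes are purged too
    YAHOO.util.Event.purgeElement('myElementId', false, 'click');

Calling one of these before removing a node from the DOM helps avoid the leaked handlers mentioned above.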

Various subsystem configurations

Packt
25 Jun 2014
8 min read
(For more resources related to this topic, see here.)

In a high-performance environment, every costly resource instantiation needs to be minimized. This can be done effectively using pools. The different subsystems in WildFly often use various pools of resources to minimize the cost of creating new ones. These resources are often threads or various connection objects. Another benefit is that the pools work as a gatekeeper, hindering the underlying system from being overloaded. This is performed by preventing client calls from reaching their target if a limit has been reached. In the upcoming sections of this article, we will provide an overview of the different subsystems and their pools.

The thread pool executor subsystem

The thread pool executor subsystem was introduced in JBoss AS 7. Other subsystems can reference thread pools configured in this one. This makes it possible to normalize and manage the thread pools via native WildFly management mechanisms, and it allows you to share thread pools across subsystems. The following code is an example taken from the WildFly Administration Guide (https://docs.jboss.org/author/display/WFLY8/Admin+Guide) that describes how the Infinispan subsystem may use the subsystem, setting up four different pools:

    <subsystem>
      <thread-factory name="infinispan-factory" priority="1"/>
      <bounded-queue-thread-pool name="infinispan-transport">
        <core-threads count="1"/>
        <queue-length count="100000"/>
        <max-threads count="25"/>
        <thread-factory name="infinispan-factory"/>
      </bounded-queue-thread-pool>
      <bounded-queue-thread-pool name="infinispan-listener">
        <core-threads count="1"/>
        <queue-length count="100000"/>
        <max-threads count="1"/>
        <thread-factory name="infinispan-factory"/>
      </bounded-queue-thread-pool>
      <scheduled-thread-pool name="infinispan-eviction">
        <max-threads count="1"/>
        <thread-factory name="infinispan-factory"/>
      </scheduled-thread-pool>
      <scheduled-thread-pool name="infinispan-repl-queue">
        <max-threads count="1"/>
        <thread-factory name="infinispan-factory"/>
      </scheduled-thread-pool>
    </subsystem>
    ...
    <cache-container name="web" default-cache="repl"
        listener-executor="infinispan-listener"
        eviction-executor="infinispan-eviction"
        replication-queue-executor="infinispan-repl-queue">
      <transport executor="infinispan-transport"/>
      <replicated-cache name="repl" mode="ASYNC" batching="true">
        <locking isolation="REPEATABLE_READ"/>
        <file-store/>
      </replicated-cache>
    </cache-container>

The following thread pools are available:

- unbounded-queue-thread-pool
- bounded-queue-thread-pool
- blocking-bounded-queue-thread-pool
- queueless-thread-pool
- blocking-queueless-thread-pool
- scheduled-thread-pool

The details of these thread pools are described in the following sections.

unbounded-queue-thread-pool

The unbounded-queue-thread-pool thread pool executor has a maximum size and an unlimited queue. If the number of running threads is less than the maximum size when a task is submitted, a new thread will be created. Otherwise, the task is placed in a queue. This queue is allowed to grow infinitely. The configuration properties are as follows:

- max-threads: The maximum number of threads allowed to run simultaneously.
- keepalive-time: The amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
- thread-factory: The thread factory to use to create worker threads.
bounded-queue-thread-pool

The bounded-queue-thread-pool thread pool executor has a core size, a maximum size, and a specified queue length. If the number of running threads is less than the core size when a task is submitted, a new thread will be created; otherwise, the task will be put in the queue. If the queue's maximum size has been reached but the maximum number of threads hasn't, a new thread is also created. If max-threads is hit, the call will be sent to the handoff-executor. If no handoff-executor is configured, the call will be discarded. The configuration properties are as follows:

- core-threads: Optional, and should be less than max-threads.
- queue-length: The maximum size of the queue.
- max-threads: The maximum number of threads allowed to run simultaneously.
- keepalive-time: The amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
- handoff-executor: An executor to which tasks will be delegated in the event that a task cannot be accepted.
- allow-core-timeout: Whether core threads may time out; if false, only threads above the core size will time out.
- thread-factory: The thread factory to use to create worker threads.

blocking-bounded-queue-thread-pool

The blocking-bounded-queue-thread-pool thread pool executor has a core size, a maximum size, and a specified queue length. If the number of running threads is less than the core size when a task is submitted, a new thread will be created; otherwise, the task will be put in the queue. If the queue's maximum size has been reached but max-threads hasn't, a new thread is created; if max-threads has been reached as well, the call is blocked. The configuration properties are as follows:

- core-threads: Optional, and should be less than max-threads.
- queue-length: The maximum size of the queue.
- max-threads: The maximum number of simultaneous threads allowed to run.
- keepalive-time: The amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
- allow-core-timeout: Whether core threads may time out; if false, only threads above the core size will time out.
- thread-factory: The thread factory to use to create worker threads.

queueless-thread-pool

The queueless-thread-pool thread pool is a thread pool executor without any queue. If the number of running threads is less than max-threads when a task is submitted, a new thread will be created; otherwise, the handoff-executor will be called. If no handoff-executor is configured, the call will be discarded. The configuration properties are as follows:

- max-threads: The maximum number of threads allowed to run simultaneously.
- keepalive-time: The amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
- handoff-executor: An executor to delegate tasks to in the event that a task cannot be accepted.
- thread-factory: The thread factory to use to create worker threads.

blocking-queueless-thread-pool

The blocking-queueless-thread-pool thread pool executor has no queue. If the number of running threads is less than max-threads when a task is submitted, a new thread will be created. Otherwise, the caller will be blocked.
The configuration properties are as follows:

- max-threads: The maximum number of threads allowed to run simultaneously.
- keepalive-time: The amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
- thread-factory: The thread factory to use to create worker threads.

scheduled-thread-pool

The scheduled-thread-pool thread pool is used by tasks that are scheduled to trigger at a certain time. The configuration properties are as follows:

- max-threads: The maximum number of threads allowed to run simultaneously.
- keepalive-time: The amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
- thread-factory: The thread factory to use to create worker threads.

Monitoring

All of the pools just mentioned can be administered and monitored using both the CLI and JMX (the Admin Console can be used to administer, but not to see, any live data). The following example and screenshots show access to an unbounded-queue-thread-pool called test.

Using the CLI, run the following command:

    /subsystem=threads/unbounded-queue-thread-pool=test:read-resource(include-runtime=true)

The response to the preceding command is as follows:

    {
        "outcome" => "success",
        "result" => {
            "active-count" => 0,
            "completed-task-count" => 0L,
            "current-thread-count" => 0,
            "keepalive-time" => undefined,
            "largest-thread-count" => 0,
            "max-threads" => 100,
            "name" => "test",
            "queue-size" => 0,
            "rejected-count" => 0,
            "task-count" => 0L,
            "thread-factory" => undefined
        }
    }

Using JMX (query and result in the JConsole UI), run the following code:

    jboss.as:subsystem=threads,unbounded-queue-thread-pool=test

An example thread pool by JMX is shown in the first screenshot; the second screenshot shows the corresponding information in the Admin Console.

The future of the thread subsystem

According to the official JIRA case WFLY-462 (https://issues.jboss.org/browse/WFLY-462), the central thread pool configuration has been targeted for removal in future versions of the application server. It is, however, uncertain whether all subprojects will adhere to this. The actual configuration will then be moved out to the subsystems themselves. This seems to be the way the general architecture of WildFly is moving in terms of pools: away from generic ones, and toward subsystem-specific ones. The different types of pools described here are still valid, though.

Note that, contrary to previous releases, stateless EJBs are no longer pooled by default. More information on this is available in the JIRA case WFLY-1383, at https://issues.jboss.org/browse/WFLY-1383.
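For completeness, a pool such as the test pool queried above can also be created from the CLI. The following sketch uses the attribute names from the tables in this article, but the exact operation syntax can vary between WildFly versions, so treat it as an assumption to verify against your installation:

    /subsystem=threads/unbounded-queue-thread-pool=test:add(max-threads=100)

A bounded pool would take the additional attributes described earlier (the pool name here is a placeholder):

    /subsystem=threads/bounded-queue-thread-pool=my-pool:add(core-threads=5, queue-length=100, max-threads=10)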

Using the WebRTC Data API

Packt
09 May 2014
10 min read
(For more resources related to this topic, see here.)

What is WebRTC?

Web Real-Time Communication (WebRTC) is a new (and still under active development) open framework for the Web that enables browser-to-browser applications for audio/video calling, video chat, and peer-to-peer file sharing, without any additional third-party software or plugins. It was open sourced by Google in 2011 and includes the fundamental building components for high-quality communications on the Web. These components, when implemented in a browser, can be accessed through a JavaScript API, enabling developers to build their own rich media web applications. Google, Mozilla, and Opera support WebRTC and are involved in the development process. The major components of the WebRTC API are as follows:

- getUserMedia: This allows a web browser to access the camera and microphone
- PeerConnection: This sets up audio/video calls
- DataChannels: This allows browsers to share data via a peer-to-peer connection

Benefits of using WebRTC in your business

- Reducing costs: It is a free and open source technology. You don't need to pay for complex proprietary solutions. IT deployment and support costs can be lowered because you don't need to deploy special client software for your customers.
- Plugins? You don't need them. Previously, you had to use Flash, Java applets, or other tricky solutions to build interactive rich media web applications, and customers had to download and install third-party plugins to be able to use your media content. You also had to keep in mind different solutions/plugins for a variety of operating systems and platforms. Now you don't need to care about that.
- Peer-to-peer communication: In most cases, communication is established directly between your customers, with no middle point.
- Easy to use: You don't need to be a professional programmer or have a team of certified developers with specific knowledge. In a basic case, you can easily integrate WebRTC functionality into your web services/sites by using the open JavaScript API, or even a ready-to-go framework.
- Single solution for all platforms: You don't need to develop a special native version of your web service for different platforms (iOS, Android, Windows, or any other). WebRTC is developed to be a cross-platform and universal tool.
- WebRTC is open source and free: The community can discover new bugs and solve them effectively and quickly. Moreover, it is developed and standardized by Mozilla, Google, and Opera, all world-class software companies.

Topics

The article covers the following topics:

- Developing a WebRTC application: You will learn the basics of the technology and build a complete audio/video conference real-life web application. We will also talk about SDP (Session Description Protocol), signaling, client-server interoperation, and configuring STUN and TURN servers.
- Data API: You will learn how to build a peer-to-peer, cross-platform file sharing web service using the WebRTC Data API.
- Media streaming and screen casting: This introduces you to streaming prerecorded media content peer-to-peer and desktop sharing. In this article, you will build a simple application that provides such functionality.
- Security and authentication: Nowadays, this is a very important topic, and you definitely don't want to forget about it while developing your applications. So, in this article, you will learn how to make your WebRTC solutions secure, why authentication might be very important, and how you can implement this functionality in your products.
- Mobile platforms: Nowadays, mobile platforms are literally part of our life, so it's important to make your interactive application work great on mobile devices as well. This article will introduce you to aspects that will help you in developing great WebRTC products with mobile devices in mind.

Session Description Protocol

SDP is an important part of the WebRTC stack. It is used to negotiate session/media options while establishing a peer connection. It is a protocol intended for describing multimedia communication sessions for the purposes of session announcement, session invitation, and parameter negotiation. It does not deliver media data itself, but is used for negotiation between peers of media type, format, and all associated properties/options (resolution, encryption, codecs, and so on). The set of properties and parameters is usually called a session profile. Peers have to exchange SDP data using a signaling channel before they can establish a direct connection.

The following is an example of an SDP offer:

    v=0
    o=alice 2890844526 2890844526 IN IP4 host.atlanta.example.com
    s=
    c=IN IP4 host.atlanta.example.com
    t=0 0
    m=audio 49170 RTP/AVP 0 8 97
    a=rtpmap:0 PCMU/8000
    a=rtpmap:8 PCMA/8000
    a=rtpmap:97 iLBC/8000
    m=video 51372 RTP/AVP 31 32
    a=rtpmap:31 H261/90000
    a=rtpmap:32 MPV/90000

Here we can see that this is a video and audio session, and multiple codecs are offered.

The following is an example of an SDP answer:

    v=0
    o=bob 2808844564 2808844564 IN IP4 host.biloxi.example.com
    s=
    c=IN IP4 host.biloxi.example.com
    t=0 0
    m=audio 49174 RTP/AVP 0
    a=rtpmap:0 PCMU/8000
    m=video 49170 RTP/AVP 32
    a=rtpmap:32 MPV/90000

Here we can see that only one codec is accepted in reply to the offer above. You can find more SDP session examples at https://www.rfc-editor.org/rfc/rfc4317.txt. You can also find in-depth details on SDP in the appropriate RFC at http://tools.ietf.org/html/rfc4566.

Configuring and installing your own STUN server

As you already know, it is important to have access to a STUN/TURN server to work with peers located behind NAT or a firewall. In this article, developing our application, we used public STUN servers (actually, they are public Google servers accessible from other networks). Nevertheless, if you plan to build your own service, you should install your own STUN/TURN server. This way, your application will not depend on a server you can't control. Today we have public STUN servers from Google; tomorrow they could be switched off. So, the right way is to have your own STUN/TURN server. In this section, you will be introduced to installing a STUN server, as the simpler case.

There are several implementations of STUN servers that can be found on the Internet. You can take one from http://www.stunprotocol.org. It is cross-platform and can be used under Windows, Mac OS X, or Linux. To start the STUN server, you should use the following command line:

    stunserver --mode full --primaryinterface x1.x1.x1.x1 --altinterface x2.x2.x2.x2

Please pay attention: you need two IP addresses on your machine to run a STUN server. This is mandatory for the STUN protocol to work correctly. The machine can have only one physical network interface, but it should then have a network alias with an IP address different from that used on the main network interface.
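Once your STUN (or TURN) server is up, the browser side only needs to be told about it when a peer connection is created. A minimal sketch follows; the hostnames and credentials are placeholders, and note that older browser builds may expect the singular url key or a vendor-prefixed constructor such as webkitRTCPeerConnection:

    var configuration = {
      iceServers: [
        // STUN is tried first for NAT/firewall traversal
        { urls: 'stun:stun.myserver.example.com:3478' },
        // TURN is used as a relay if a direct connection fails
        {
          urls: 'turn:turn.myserver.example.com:3478',
          username: 'user',
          credential: 'secret'
        }
      ]
    };
    var pc = new RTCPeerConnection(configuration);

As described later in the NAT traversal section, the browser works through this list automatically, so the application code does not need to decide between STUN and TURN itself.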
Preparing the environment

We can prepare the environment by performing the following steps:

1. Create a folder for the whole application somewhere on your disk. Let's call it my_rtc_project.
2. Make a directory named my_rtc_project/www; here, we will put all the client-side code (JavaScript files or HTML pages).
3. The signaling server's code will be placed under its own separate folder, so create the directory my_rtc_project/apps/rtcserver/src.

Kindly note that we will use Git, a free and open source distributed version control system. On Linux boxes it can be installed using the default package manager. For Windows systems, I recommend installing and using this implementation: https://github.com/msysgit/msysgit. If you're using a Windows box, install msysgit and add the path to its bin folder to your PATH environment variable.

Installing Erlang

The signaling server is developed in the Erlang language. Erlang is a great choice for developing server-side applications due to the following reasons:

- It is very comfortable and easy for prototyping
- Its processes (actors) are very lightweight and cheap
- It supports network operations with no need for any external libraries
- The code is compiled to byte code that runs on a very powerful Erlang Virtual Machine
Some great projects

The following projects are developed using Erlang:

- Yaws and Cowboy: These are web servers
- Riak and CouchDB: These are distributed databases
- Cloudant: This is a database service based on a fork of CouchDB
- Ejabberd: This is an XMPP instant messaging service
- Zotonic: This is a content management system
- RabbitMQ: This is a message bus
- Wings 3D: This is a 3D modeler
- GitHub: This is a web-based hosting service for software development projects that use Git; GitHub uses Erlang for RPC proxies to Ruby processes
- WhatsApp: This is a famous mobile messenger, sold to Facebook
- Call of Duty: This computer game uses Erlang on the server side
- Goldman Sachs: Erlang is used in its high-frequency trading programs

A very brief history of Erlang

- 1982 to 1985: During this period, Ericsson started experimenting with the programming of telecom. Existing languages did not suit the task.
- 1985 to 1986: During this period, Ericsson decided they must develop their own language with the desirable features of Lisp, Prolog, and Parlog. The language should have built-in concurrency and error recovery.
- 1987: In this year, the first experiments with the new language Erlang were conducted.
- 1988: In this year, Erlang was first used by external users outside the lab.
- 1989: In this year, Ericsson worked on a fast implementation of Erlang.
- 1990: In this year, Erlang was presented at ISS'90 and gained new users.
- 1991: In this year, the fast implementation of Erlang was released to users. Erlang was presented at Telecom'91, and gained a compiler and graphic interface.
- 1992: In this year, Erlang gained a lot of new users. Ericsson ported Erlang to new platforms including VxWorks and Macintosh.
- 1993: In this year, Erlang gained distribution, making it possible to run a homogeneous Erlang system on heterogeneous hardware. Ericsson started selling Erlang implementations and Erlang Tools, and a separate organization within Ericsson provided support.

Erlang is supported on many platforms. You can download and install it using the main website: http://www.erlang.org.

Summary

In this article, we have discussed the WebRTC technology and the WebRTC API in detail.

Resources for Article:

Further resources on this subject:
- Applying WebRTC for Education and E-learning [Article]
- Spring Roo 1.1: Working with Roo-generated Web Applications [Article]
- WebSphere MQ Sample Programs [Article]
Everything in a Package with concrete5

Packt
28 Mar 2011
10 min read
concrete5 Beginner's Guide

Create and customize your own website with the concrete5 Beginner's Guide

What's a package?

Before we start creating our package, here are a few words about the functionality and purpose of packages:

- They can hold one or several themes together
- You can include blocks which your theme needs
- You can check requirements during the installation process, in case your package depends on other blocks, configurations, and so on
- A package can be used to hook into events raised by concrete5 to execute custom code during different kinds of actions
- You can create jobs, which run periodically to improve or check things in your website

These are the most important things you can do with a package. Some of this doesn't depend on packages, but is easier to handle if you use them. It's up to you, but putting every extension in a package might even be useful if there's just a single element in it. Why?

- You never have to worry about where to extract the add-on; it always belongs in the packages directory
- An add-on wrapped in a package can be submitted to the concrete5 marketplace, allowing you to earn money or make some people in the community happy by releasing your add-on for free

Package structure

We've already looked at different structures and you are probably already familiar with most of the directories in concrete5. Before we continue, here are a few words about the package structure, as it's essential that you understand its concept before we continue.

A package is basically a complete concrete5 structure within one directory. All the directories are optional, though. There is no need to create all of them, but you can create and use all of them within a single package. The directory concrete is a lot like a package as well; it's just located in its own directory and not within packages.

Package controller

Like the blocks we've created, the package has a controller as well. First of all, it is used to handle the installation process, but it's not limited to that. We can handle events and a few more things in the package controller; there's more about that later in this article. For now, we only need the controller to make sure the dashboard knows the package name and description.

Time for action - creating the package controller

Carry out the following steps:

1. First, create a new directory named c5book in packages.
2. Within that directory, create a file named controller.php and put the following content in it:

<?php
defined('C5_EXECUTE') or die(_("Access Denied."));

class c5bookPackage extends Package {
    protected $pkgHandle = 'c5book';
    protected $appVersionRequired = '5.4.0';
    protected $pkgVersion = '1.0';

    public function getPackageDescription() {
        return t("Theme, Templates and Blocks from concrete5 for Beginner's");
    }

    public function getPackageName() {
        return t("c5book");
    }

    public function install() {
        $pkg = parent::install();
    }
}
?>

3. You can create a file named icon.png, 97 x 97 pixels with 4px rounded transparent corners. This is the official specification that you have to follow if you want to upload your add-on to the concrete5 marketplace.
4. Once you've created the directory and the mandatory controller, you can go to your dashboard and click on Add Functionality. It looks a lot like a block, but when you click on Install, the add-on is going to appear in the packages section.

What just happened?

The controller we created looks and works a lot like a block controller, which you should have seen and created already.
However, let's go through all the elements of the package controller anyway, as it's important that you understand them:

- pkgHandle: A unique handle for your package. You'll need this when you access your package from code.
- appVersionRequired: The minimum concrete5 version required to install the add-on. concrete5 will check this during the installation process.
- pkgVersion: The current version of the package. Make sure that you change the number when you release an update for a package; concrete5 has to know that it is installing an update and not a new version.
- getPackageDescription: Returns the description of your package. Use the t-function to keep it translatable.
- getPackageName: The same as above, just a bit shorter.
- install: You could remove this method in the controller above, since we're only calling its parent method and don't check anything else. It has no influence, but we'll need this method later when we put blocks in our package. It's just a skeleton for the next steps at the moment.

Moving templates into the package

Remember the templates we've created? We placed them in the top-level blocks directory. That worked like a charm, but imagine what happens when you create a theme which also needs some block templates in order to make sure the blocks look like the theme. You'd have to copy files into the blocks directory as well as themes. This is exactly what we're trying to avoid with packages.

It's rather easy with templates; they work almost anywhere. You just have to copy the folder slideshow from blocks to packages/c5book/blocks, as shown in the following screenshot:

This step was even easier than most things we did before. We simply moved our templates into a different directory, nothing else. concrete5 looks for custom templates in different places, such as:

- concrete/blocks/<block-name>/templates
- blocks/<block-name>/templates
- packages/<package-name>/blocks/<block-name>/templates

It doesn't matter where you put your templates; concrete5 will find them.

Moving themes and blocks into the package

Now that we've got our templates in the package, let's move the new blocks we've created into that package as well. The process is similar, but we have to call a method in the installer which installs our block. concrete5 does not automatically install blocks within packages. This means that we have to extend the empty install method shown earlier.

Before we move the blocks into the package, you should remove all blocks first. To do this, go to your dashboard, click on Add Functionality, click on the Edit button next to the block you want to move, and click on the Remove button in the next screen. We'll start with the jqzoom block.

Please note: removing a block will, of course, remove all the blocks you've added to your pages. Content will be lost if you move a block into a package after you've already used it.

Time for action – moving the jQZoom block into the package

Carry out the following steps:

1. As mentioned earlier, remove the jqzoom block from your website by using the Add Functionality section in your dashboard.
2. Move the directory blocks/jqzoom to packages/c5book/blocks.
3. Open the package controller we created a few pages earlier; you can find it at packages/c5book/controller.php. The following snippet shows only a part of the controller, the install method. The only thing you have to do is insert the highlighted line:

public function install() {
    $pkg = parent::install();

    // install blocks
    BlockType::installBlockTypeFromPackage('jqzoom', $pkg);
}

4. Save the file and go to your dashboard again.
5. Select Add Functionality and locate the c5book package; click on Edit and then Uninstall Package, and confirm the process on the next screen.
6. Back on the Add Functionality screen, reinstall the package again, which will automatically install the block.

What just happened?

Besides moving files, we only had to add a single line of code to our existing package controller. This is necessary because blocks within packages aren't installed automatically. When installing a package, only the install method of the controller is called; this is exactly the place where we hook in and install our block.

The installBlockTypeFromPackage method takes two parameters: the block handle and the package object. However, this doesn't mean that packages behave like namespaces. What does this mean? A block is connected to a package. This is necessary in order to be able to uninstall the block when removing the package, along with some other reasons. Even though there's a connection between the two objects, a block handle must be unique across all packages.

You've seen that we had to remove and reinstall the package several times while we only moved a block. At this point, it probably looks a bit weird to do that, especially as you're going to lose some content on your website. However, when you're more familiar with the concrete5 framework, you'll usually know whether you're going to need a package and make that decision before you start creating new blocks. If you're still in doubt, don't worry about it too much and create a package and not just a block. Using a package is usually the safest choice.

Don't forget that all instances of a block will be removed from all pages when you uninstall the block from your website. Make sure your package structure doesn't change before you start adding content to your website.

Time for action – moving the PDF block into the package

Some blocks depend on helpers, files, and libraries which aren't in the block directory. The PDF generator block is such an example: it depends on a file found in the tools directory in the root of your concrete5 website. How do we include such a file in a package?

1. Move the pdf directory from blocks to packages/c5book/blocks, since we also want to include the block in the package.
2. Locate the c5book directory within packages and create a new subdirectory named tools.
3. Move generate_pdf.php from tools to packages/c5book/tools.
4. Create another directory named libraries in packages/c5book.
5. Move the mpdf50 directory from libraries to packages/c5book/libraries.
6. As we've moved two objects, we have to make sure our code looks for them in the right place. Open packages/c5book/tools/generate_pdf.php and look for Loader::library at the beginning of the file. We have to add a second parameter to Loader::library, as shown here:

<?php
defined('C5_EXECUTE') or die(_("Access Denied."));

// The second parameter tells the loader to look in the c5book package
Loader::library('mpdf50/mpdf', 'c5book');

$fh = Loader::helper('file');

$header = <<<EOT
<style type="text/css">
body { font-family: Helvetica, Arial; }
h1 { border-bottom: 1px solid black; }
</style>
EOT;

7. Next, open packages/c5book/blocks/pdf/view.php. We have to add the package handle as the second parameter to make sure the tool file is loaded from the package:
<!--hidden_in_pdf_start-->
<?php
defined('C5_EXECUTE') or die(_('Access Denied.'));

$nh = Loader::helper('navigation');
$url = Loader::helper('concrete/urls');

// Load the tool from the c5book package and pass the current page
$toolsUrl = $url->getToolsURL('generate_pdf', 'c5book');
$toolsUrl .= '?p=' . rawurlencode($nh->getLinkToCollection($this->c, true));

echo "<a href=\"{$toolsUrl}\">PDF</a>";
?>
<!--hidden_in_pdf_end-->

What just happened?

In the preceding example, we had a file in the tools directory and a PDF generator in the libraries directory, both of which we had to move as well. Even at the risk of saying the same thing several times: a package can contain any element of concrete5 (libraries, tools, controllers, images, and so on). By putting all files in a single package directory, we can make sure that all files are installed at once, thus making sure all dependencies are met.

Nothing has changed besides the small changes we've made to the commands which access or load an element. A helper behaves like a helper, no matter where it's located.

Have a go hero – move more add-ons

We've moved two different blocks into our new package, along with the slideshow block templates. These aren't all the blocks we've created so far. Try to move all the add-ons we've created into our new package. If you need more information about this process, have a look at the following page: http://www.concrete5.org/documentation/developers/system/packages/
NetBeans Platform 6.9: Working with Actions

Packt
10 Aug 2010
4 min read
(For more resources on NetBeans, see here.)

In Swing, an Action object provides an ActionListener for Action event handling, together with additional features such as tool tips, icons, and the Action's activated state. One aim of Swing Actions is that they should be reusable, that is, they can be invoked from a menu item as well as from a related toolbar button and keyboard shortcut.

The NetBeans Platform provides an Action framework enabling you to organize Actions declaratively. In many cases, you can simply reuse your existing Actions exactly as they were before you used the NetBeans Platform, once you have declared them. For more complex scenarios, you can make use of specific NetBeans Platform Action classes that offer the advantages of additional features, such as more complex displays in toolbars and support for context-sensitive help.

Preparing to work with global actions

Before you begin working with global Actions, let's make some changes to our application. It should be possible for the TaskEditorTopComponent to open for a specific task. You should therefore be able to pass a task into the TaskEditorTopComponent. Rather than the TaskEditorPanel creating a new task in its constructor, the task needs to be passed into it and made available to the TaskEditorTopComponent. On the other hand, it may make sense for a TaskEditorTopComponent to create a new task, rather than being provided an existing task, which can then be made available for editing. Therefore, the TaskEditorTopComponent should provide two constructors. If a task is passed into the TaskEditorTopComponent, the TaskEditorTopComponent and the TaskEditorPanel are initialized. If no task is passed in, a new task is created and made available for editing.

Furthermore, it is currently only possible to edit a single task at a time. It would make sense to be able to work on several tasks at the same time in different editors. At the same time, you should make sure that a task is only opened once by the same editor. The TaskEditorTopComponent should therefore provide a method for creating new or finding existing editors. In addition, it would be useful if TaskEditorPanels were automatically closed for deleted tasks.

Remove the logic for creating new tasks from the constructor of the TaskEditorPanel, along with the instance variable for storing the TaskManager, which is now redundant:

public TaskEditorPanel() {
    initComponents();
    this.pcs = new PropertyChangeSupport(this);
}

Introduce a new method to update a task:

public void updateTask(Task task) {
    Task oldTask = this.task;
    this.task = task;
    this.pcs.firePropertyChange(PROP_TASK, oldTask, this.task);
    this.updateForm();
}

Let us now turn to the TaskEditorTopComponent, which currently cannot be instantiated with a specific task. You now need to be able to pass in a task for initializing the TaskEditorPanel. The new default constructor creates a new task with the support of a chained constructor, and passes this to the former constructor for the remaining initialization of the editor. In addition, it should now be possible to return several instances of the TaskEditorTopComponent, each responsible for a specific task. Hence, the class should be extended by a static method for creating new or finding existing instances. These instances are stored in a Map<Task, TaskEditorTopComponent>, which is populated by the former constructor with newly created instances.
The method checks whether the map already stores an instance responsible for the given task, and creates a new one if necessary. Additionally, this method registers a listener on the TaskManager to close the relevant editor when a task is deleted. As an instance is now responsible for a particular task, this should be queryable, so we introduce another appropriate method. Consequently, the changes to the TaskEditorTopComponent look as follows:

private static Map<Task, TaskEditorTopComponent> tcByTask =
        new HashMap<Task, TaskEditorTopComponent>();
// Note: a static TaskManager field is implied by the code below
private static TaskManager taskMgr;

public static TaskEditorTopComponent findInstance(Task task) {
    TaskEditorTopComponent tc = tcByTask.get(task);
    if (null == tc) {
        tc = new TaskEditorTopComponent(task);
    }
    if (null == taskMgr) {
        taskMgr = Lookup.getDefault().lookup(TaskManager.class);
        taskMgr.addPropertyChangeListener(new ListenForRemovedNodes());
    }
    return tc;
}

// Declared static so it can be instantiated from the static findInstance method
private static class ListenForRemovedNodes implements PropertyChangeListener {
    public void propertyChange(PropertyChangeEvent arg0) {
        if (TaskManager.PROP_TASKLIST_REMOVE.equals(arg0.getPropertyName())) {
            Task task = (Task) arg0.getNewValue();
            TaskEditorTopComponent tc = tcByTask.get(task);
            if (null != tc) {
                tc.close();
                tcByTask.remove(task);
            }
        }
    }
}

private TaskEditorTopComponent() {
    this(Lookup.getDefault().lookup(TaskManager.class));
}

private TaskEditorTopComponent(TaskManager taskMgr) {
    this((taskMgr != null) ? taskMgr.createTask() : null);
}

private TaskEditorTopComponent(Task task) {
    initComponents();
    // ...
    ((TaskEditorPanel) this.jPanel1).updateTask(task);
    this.ic.add(((TaskEditorPanel) this.jPanel1).task);
    this.associateLookup(new AbstractLookup(this.ic));
    tcByTask.put(task, this);
}

public String getTaskId() {
    Task task = ((TaskEditorPanel) this.jPanel1).task;
    return (null != task) ? task.getId() : "";
}

With that, our preparations are complete, and you can turn to the following discussion on Actions.