
How-To Tutorials - Front-End Web Development

341 Articles

Five common questions for .NET/Java developers learning JavaScript and Node.js

Packt
20 Jun 2016
19 min read
This article is by Harry Cummings, author of the book Learning Node.js for .NET Developers.

For those with web development experience in .NET or Java, perhaps who've written some browser-based JavaScript in the past, it might not be obvious why anyone would want to take JavaScript beyond the browser and treat it as a general-purpose programming language. However, this is exactly what Node.js does. What's more, Node.js has been around for long enough now to have matured as a platform, and has sustained its impressive growth in popularity well beyond any period that could be attributed to initial hype over a new technology. In this introductory article, we'll look at why Node.js is a compelling technology worth learning more about, and address some of the common barriers and sources of confusion that developers encounter when learning Node.js and JavaScript.

Why use Node.js?

The execution model of Node.js follows that of JavaScript in the browser. This might not be an obvious choice for server-side development. In fact, these two use cases do have something important in common. User interface code is naturally event-driven (for example, binding event handlers to button clicks). Node.js makes this a virtue by applying an event-driven approach to server-side programming. Stated formally, Node.js has a single-threaded, non-blocking, event-driven execution model. We'll define each of these terms.

Non-blocking

Put simply, Node.js recognizes that many programs spend most of their time waiting for other things to happen, for example, slow I/O operations such as disk access and network requests. Node.js addresses this by making these operations non-blocking: program execution can continue while they happen. This non-blocking approach is also called asynchronous programming. Of course, other platforms support this too (for example, C#'s async/await keywords and the Task Parallel Library).
However, it is baked into Node.js in a way that makes it simple and natural to use. Asynchronous API methods are all called in the same way: they all take a callback function to be invoked ("called back") when the operation completes. This function is invoked with an optional error parameter and the result of the operation. The consistency of calling non-blocking (asynchronous) API methods in Node.js carries through to its third-party libraries. This consistency makes it easy to build applications that are asynchronous throughout. Other JavaScript libraries, such as bluebird (http://bluebirdjs.com/docs/getting-started.html), allow callback-based APIs to be adapted to other asynchronous patterns. As an alternative to callbacks, you may choose to use Promises (similar to Tasks in .NET or Futures in Java) or coroutines (similar to async methods in C#) within your own codebase. This allows you to streamline your code while retaining the benefits of consistent asynchronous APIs in Node.js and its third-party libraries.

Event-driven

The event-driven nature of Node.js describes how operations are scheduled. In typical procedural programming environments, a program has some entry point that executes a set of commands until completion, or enters a loop to perform work on each iteration. Node.js has a built-in event loop, which isn't directly exposed to the developer. It is the job of the event loop to decide which piece of code to execute next. Typically, this will be some callback function that is ready to run in response to some other event. For example, a filesystem operation may have completed, a timeout may have expired, or a new network request may have arrived. This built-in event loop simplifies asynchronous programming by providing a consistent approach and avoiding the need for applications to manage their own scheduling.

Single-threaded

The single-threaded nature of Node.js simply means that there is only one thread of execution in each process.
Also, each piece of code is guaranteed to run to completion without being interrupted by other operations. This greatly simplifies development and makes programs easier to reason about. It removes the possibility of a range of concurrency issues. For example, it is not necessary to synchronize/lock access to shared in-process state as it is in Java or .NET. A process can't deadlock itself or create race conditions within its own code. Single-threaded programming is only feasible if the thread never gets blocked waiting for long-running work to complete. Thus, this simplified programming model is made possible by the non-blocking nature of Node.js.

Writing web applications

The flagship use case for Node.js is in building websites and web APIs. These are inherently event-driven, as most or all processing takes place in response to HTTP requests. Also, many websites do little computational heavy-lifting of their own. They tend to perform a lot of I/O operations, for example:

- Streaming requests from the client
- Talking to a database locally or over the network
- Pulling in data from remote APIs over the network
- Reading files from disk to send back to the client

These factors make I/O operations a likely bottleneck for web applications. The non-blocking programming model of Node.js allows web applications to make the most of a single thread. As soon as any of these I/O operations starts, the thread is immediately free to pick up and start processing another request. Processing of each request continues via asynchronous callbacks when I/O operations complete. The processing thread is only kicking off and linking together these operations, never waiting for them to complete. This allows Node.js to handle a much higher rate of requests per thread than other runtime environments.

How does Node.js scale?

So, Node.js can handle many requests per thread, but what happens when we reach the limit of what one thread can handle? The answer is, of course, to use more threads!
You can achieve this by starting multiple Node.js processes, typically, one for each web server CPU core. Note that this is still quite different to most Java or .NET web applications. These typically use a pool of threads much larger than the number of cores, because threads are expected to spend much of their time being blocked. The built-in Node.js cluster module makes it straightforward to spawn multiple Node.js processes. Tools such as PM2 (http://pm2.keymetrics.io/) and libraries such as throng (https://github.com/hunterloftis/throng) make it even easier to do so. This approach gives us the best of all worlds:

- Using multiple threads makes the most of our available CPU power
- By having a single thread per core, we also save overheads from the operating system context-switching between threads
- Since the processes are independent and don't share state directly, we retain the benefits of the single-threaded programming model discussed above
- By using long-running application processes (as with .NET or Java), we avoid the overhead of a process-per-request (as in PHP)

Do I really have to use JavaScript?

A lot of web developers new to Node.js will already have some experience of client-side JavaScript. This experience may not have been positive and might put you off using JavaScript elsewhere. You do not have to use JavaScript to work with Node.js. TypeScript (http://www.typescriptlang.org/) and other compile-to-JavaScript languages exist as alternatives. However, I do recommend learning Node.js with JavaScript first. It will give you a clearer understanding of Node.js and simplify your tool chain. Once you have a project or two under your belt, you'll be better placed to understand the pros and cons of other languages. In the meantime, you might be pleasantly surprised by the JavaScript development experience in Node.js. There are three broad categories of prior JavaScript development experience that can lead to people having a negative impression of it.
These are as follows:

- Experience from the late 90s and early 00s, prior to MV* frameworks like Angular/Knockout/Backbone/Ember, maybe even prior to jQuery. This is the pioneer phase of client-side web development.
- More recent experience within the much more mature JavaScript ecosystem, perhaps as a full-stack developer writing server-side and client-side code. The complexity of some frameworks (such as the MV* frameworks listed earlier), or the sheer amount of choice in general, can be overwhelming.
- Limited experience with JavaScript itself, but exposure to some of its most unusual characteristics. This may lead to a jarring sensation as a result of encountering the language in surprising or unintuitive ways.

We'll address groups of people affected by each type of experience in turn. But note that individuals might identify with more than one of these groups. I'm happy to admit that I've been a member of all three in the past.

The web pioneers

These developers have been burned by working with client-side JavaScript in the past. The browser is sometimes described as a hostile environment for code to execute in. A single execution context shared by all code allows for some particularly nasty gotchas. For example, third-party code on the same page can create and modify global objects. Node.js solves some of these issues on a fundamental level, and mitigates others where this isn't possible. It's JavaScript, so it's still the case that everything is mutable. But the Node.js module system narrows the global scope, so libraries are less likely to step on each other's toes. The conventions that Node.js establishes also make third-party libraries much more consistent. This makes the environment less hostile and more predictable. The web pioneers will also have had to cope with the APIs available to JavaScript in the browser. Although these have improved over time as browsers and standards have matured, the earlier days of web development were more like the Wild West.
Quirks and inconsistencies in fundamental APIs caused a lot of hard work and frustration. The rise of jQuery is a testament to the difficulty of working with the Document Object Model of old. The continued popularity of jQuery indicates that people still prefer to avoid working with these APIs directly. Node.js addresses these issues quite thoroughly. First of all, by taking JavaScript out of the browser, the DOM and other browser APIs simply go away, as they are no longer relevant. The new APIs that Node.js introduces are small, focused, and consistent. You no longer need to contend with inconsistencies between browsers: everything you write will execute in the same JavaScript engine (V8).

The overwhelmed full-stack developers

Many of the frontend JavaScript frameworks provide a lot of power, but come with a great deal of complexity. For example, AngularJS has a steep learning curve, is quite opinionated about application structure, and has quite a few gotchas or things you just need to know. JavaScript itself is actually a language with a very small surface area. This provides a blank canvas for Node.js to provide a small number of consistent APIs (as described in the previous section). Although there's still plenty to learn in total, you can focus on just the things you need without getting tripped up by areas you're not yet familiar with. It's still true that there's a lot of choice and that this can be bewildering. For example, there are many competing test frameworks for JavaScript. The trend towards smaller, more composable packages in the Node.js ecosystem (while generally a good thing) can mean more research, more decisions, and fewer batteries-included frameworks that do everything out of the box. On balance though, this makes it easier to move at your own pace and understand everything that you're pulling into your application.
The JavaScript dabblers

It's easy to have a poor impression of JavaScript if you've only worked with it occasionally and never as the primary (or even secondary) language on a project. JavaScript doesn't do itself any favors here, with a few glaring gotchas that most people will encounter, for example, the fundamentally broken == equality operator and other symptoms of type coercion. Although these make a poor first impression, they aren't really indicative of the experience of working with JavaScript more regularly. As mentioned in the previous section, JavaScript itself is actually a very small language. Its simplicity limits the number of gotchas there can be. While there are a few things you "just need to know", it's a short list. This compares well against the languages that offer a constant stream of nasty surprises (for example, PHP's notoriously inconsistent built-in functions). What's more, successive ECMAScript standards have done a lot to clean up the JavaScript language. With Node.js, you get to take advantage of this, as all your code will run on the V8 engine, which implements the latest ES2015 standard. The other big reason that JavaScript can be jarring is more a matter of context than an inherent flaw. It looks superficially similar to other languages with a C-like syntax, like Java and C#. The similarity to Java was intentional when JavaScript was created, but it's unfortunate. JavaScript's programming model is quite different to other object-oriented languages like Java or C#. This can be confusing or frustrating when its syntax suggests that it may work in roughly the same way. This is especially true of object-oriented programming in JavaScript, as we'll discuss shortly. Once you've understood the fundamentals of JavaScript though, it's very easy to work productively with it.

Working with JavaScript

I'm not going to argue that JavaScript is the perfect language.
But I do think many of the factors that lead to people having a bad impression of JavaScript are not down to the language itself. Importantly, many factors simply don't apply when you take JavaScript out of the browser environment. What's more, JavaScript has some really great extrinsic properties. These are things that aren't visible in the code, but have an effect on what it's like to work with the language. For example, JavaScript's interpreted nature makes it easy to set up automated tests to run continuously and provide near-instant feedback on your code changes.

How does inheritance work in JavaScript?

When introducing object-oriented programming, we usually talk about classes and inheritance. Java, C#, and numerous other languages take a very similar approach to these concepts. JavaScript is quite unusual in that it supports object-oriented programming without classes. It does this by applying the concept of inheritance directly to objects. Anything that is not one of JavaScript's built-in primitives (strings, numbers, null, and so on) is an object. Functions are just a special type of object that can be invoked with arguments. Arrays are a special type of object with list-like behavior. All objects (including these two special types) can have properties, which are just names with a value. You can think of JavaScript objects as dictionaries with string keys and object values.

Programming without classes

Let's say you have a chart with a very large number of data points. These points may be represented by objects that have some common behavior. In C# or Java, you might create a Point class.
In JavaScript, you could implement points like this:

function createPoint(x, y) {
    return {
        x: x,
        y: y,
        isAboveDiagonal: function() {
            return this.y > this.x;
        }
    };
}

var myPoint = createPoint(1, 2);
console.log(myPoint.isAboveDiagonal()); // Prints "true"

The createPoint function returns a new object each time it is called (the object is defined using JavaScript's object-literal notation, which is the basis for JSON). One problem with this approach is that the function assigned to the isAboveDiagonal property is redefined for each point on the graph, thus taking up more space in memory. You can address this using prototypal inheritance. Although JavaScript doesn't have classes, objects can inherit from other objects. Each object has a prototype. If the interpreter attempts to access a property on an object and that property doesn't exist, it will look for a property with the same name on the object's prototype instead. If the property doesn't exist there, it will check the prototype's prototype, and so on. The prototype chain will end with the built-in Object.prototype. You can implement point objects using a prototype as follows:

var pointPrototype = {
    isAboveDiagonal: function() {
        return this.y > this.x;
    }
};

function createPoint(x, y) {
    var newPoint = Object.create(pointPrototype);
    newPoint.x = x;
    newPoint.y = y;
    return newPoint;
}

var myPoint = createPoint(1, 2);
console.log(myPoint.isAboveDiagonal()); // Prints "true"

The Object.create method creates a new object with a specified prototype. The isAboveDiagonal method now only exists once in memory, on the pointPrototype object. When the code tries to call isAboveDiagonal on an individual point object, it is not present, but it is found on the prototype instead. Note that the preceding example tells us something important about the behavior of the this keyword in JavaScript.
It actually refers to the object that the current function was called on, rather than the object it was defined on.

Creating objects with the 'new' keyword

You can rewrite the previous code example in a more compact form using the new operator:

function Point(x, y) {
    this.x = x;
    this.y = y;
}

Point.prototype.isAboveDiagonal = function() {
    return this.y > this.x;
};

var myPoint = new Point(1, 2);

By convention, functions have a property named prototype, which defaults to an empty object. Using the new operator with the Point function creates an object that inherits from Point.prototype and applies the Point function to the newly created object.

Programming with classes

Although JavaScript doesn't fundamentally have classes, ES2015 introduces a new class keyword. This makes it possible to implement shared behavior and inheritance in a way that may be more familiar compared to other object-oriented languages. The equivalent of the previous code example would look like the following:

class Point {
    constructor(x, y) {
        this.x = x;
        this.y = y;
    }

    isAboveDiagonal() {
        return this.y > this.x;
    }
}

var myPoint = new Point(1, 2);

Note that this really is equivalent to the previous example. The class keyword is just syntactic sugar for setting up the prototype-based inheritance already discussed. Once you know how to define objects and classes, you can start to structure the rest of your application.

How do I structure Node.js applications?

In C# and Java, the static structure of an application is defined by namespaces or packages (respectively) and static types. An application's run-time structure (that is, the set of objects created in memory) is typically bootstrapped using a dependency injection (DI) container. Examples of DI containers include NInject, Autofac, and Unity in .NET, or Spring, Guice, and Dagger in Java. These frameworks provide features like declarative configuration and autowiring of dependencies.
Since JavaScript is a dynamic, interpreted language, it has no inherent static application structure. Indeed, in the browser, all the scripts loaded into a page run one after the other in the same global context. The Node.js module system allows you to structure your application into files and directories, and provides a mechanism for importing functionality from one file into another. There are DI containers available for JavaScript, but they are less commonly used. It is more common to pass around dependencies explicitly. The Node.js module system and JavaScript's dynamic typing make this approach more natural. You don't need to add a lot of fields and constructors/properties to set up dependencies. You can just wrap modules in an initialization function that takes dependencies as parameters. The following very simple example illustrates the Node.js module system, and shows how to inject dependencies via a factory function.

We add the following code under /src/greeter.js:

module.exports = function(writer) {
    return {
        greet: function() { writer.write('Hello World!'); }
    };
};

We add the following code under /src/main.js:

var consoleWriter = {
    write: function(string) { console.log(string); }
};
var greeter = require('./greeter.js')(consoleWriter);
greeter.greet();

In the Node.js module system, each file establishes a new module with its own scope. Within this scope, Node.js provides the module object for the current module to export its functionality, and the require function for importing other modules. If you run the previous example (using node main.js), the Node.js runtime will load the greeter module as a result of the main module's call to the require function. The greeter module assigns a value to the exports property of the module object. This becomes the return value of the require call back in the main module. In this case, the greeter module exports a single object, which is a factory function that takes a dependency.
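One payoff of injecting dependencies through factory functions like this is easy testing: a hand-rolled test double can stand in for the writer, with no mocking framework needed. In this sketch, createCapturingWriter is a name of my choosing, and the greeter factory from above is inlined so the example runs standalone (normally it would come from require('./greeter.js')):

```javascript
// A hand-rolled test double: it records what it is asked to write
// instead of printing to the console.
function createCapturingWriter() {
  const lines = [];
  return {
    write: function (string) { lines.push(string); },
    lines: lines
  };
}

// The greeter factory from the example above, inlined here so the
// sketch is self-contained.
const greeterFactory = function (writer) {
  return {
    greet: function () { writer.write('Hello World!'); }
  };
};

const writer = createCapturingWriter();
const greeter = greeterFactory(writer);
greeter.greet();

console.log(writer.lines); // -> [ 'Hello World!' ]
```

The test double satisfies the same implicit contract as consoleWriter (a write method), which is all that JavaScript's dynamic typing requires.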
Summary

In this article, we have:

- Understood the Node.js programming model and its use in web applications
- Described how Node.js web applications can scale
- Discussed the suitability of JavaScript as a programming language
- Illustrated how object-oriented programming works in JavaScript
- Seen how dependency injection works with the Node.js module system

Hopefully this article has given you some insight into why Node.js is a compelling technology, and made you better prepared to learn more about writing server-side applications with JavaScript and Node.js.


Build a Universal JavaScript App, Part 2

John Oerter
30 Sep 2016
10 min read
In this post series, we will walk through how to write a universal (or isomorphic) JavaScript app. Part 1 covered what a universal JavaScript application is, why it is such an exciting concept, and the first two steps for creating our app, which are serving post data and adding React. In this second part of the series, we walk through steps 3-6, which are client-side routing with React Router, server rendering, data flow refactoring, and data loading. Let's get started.

Step 3: Client-side routing with React Router

git checkout client-side-routing && npm install

Now that we're pulling and displaying posts, let's add some navigation to individual pages for each post. To do this, we will turn our list of posts from step 2 (see the Part 1 post) into links that are always present on the page. Each post will live at http://localhost:3000/:postId/:postSlug. We can use React Router and a routes.js file to set up this structure:

// components/routes.js
import React from 'react'
import { Route } from 'react-router'
import App from './App'
import Post from './Post'

module.exports = (
  <Route path="/" component={App}>
    <Route path="/:postId/:postName" component={Post} />
  </Route>
)

We've changed the render method in App.js to render links to posts instead of just <li> tags:

// components/App.js
import React from 'react'
import { Link } from 'react-router'

const allPostsUrl = '/api/post'

class App extends React.Component {
  constructor(props) {
    super(props)
    this.state = {
      posts: []
    }
  }
  ...
  render() {
    const posts = this.state.posts.map((post) => {
      const linkTo = `/${post.id}/${post.slug}`;
      return (
        <li key={post.id}>
          <Link to={linkTo}>{post.title}</Link>
        </li>
      )
    })
    return (
      <div>
        <h3>Posts</h3>
        <ul>
          {posts}
        </ul>
        {this.props.children}
      </div>
    )
  }
}

export default App

And, we'll add a Post.js component to render each post's content:

// components/Post.js
import React from 'react'

class Post extends React.Component {
  constructor(props) {
    super(props)
    this.state = {
      title: '',
      content: ''
    }
  }

  fetchPost(id) {
    const request = new XMLHttpRequest()
    request.open('GET', '/api/post/' + id, true)
    request.setRequestHeader('Content-type', 'application/json');
    request.onload = () => {
      if (request.status === 200) {
        const response = JSON.parse(request.response)
        this.setState({ title: response.title, content: response.content });
      }
    }
    request.send();
  }

  componentDidMount() {
    this.fetchPost(this.props.params.postId)
  }

  componentWillReceiveProps(nextProps) {
    this.fetchPost(nextProps.params.postId)
  }

  render() {
    return (
      <div>
        <h3>{this.state.title}</h3>
        <p>{this.state.content}</p>
      </div>
    )
  }
}

export default Post

The componentDidMount() and componentWillReceiveProps() methods are important because they let us know when we should fetch a post from the server. componentDidMount() will handle the first time the Post.js component is rendered, and then componentWillReceiveProps() will take over as React Router handles rerendering the component with different props. Run npm run build:client && node server.js again to build and run the app. You will now be able to go to http://localhost:3000 and navigate around to the different posts. However, if you try to refresh on a single post page, you will get something like Cannot GET /3/debugging-node-apps. That's because our Express server doesn't know how to handle that kind of route. React Router is handling it completely on the front end. Onward to server rendering!
Step 4: Server rendering

git checkout server-rendering && npm install

Okay, now we're finally getting to the good stuff. In this step, we'll use React Router to help our server take application requests and render the appropriate markup. To do that, we need to also build a server bundle, like we build a client bundle, so that the server can understand JSX. Therefore, we've added the below webpack.server.config.js:

// webpack.server.config.js
var fs = require('fs')
var path = require('path')

module.exports = {
  entry: path.resolve(__dirname, 'server.js'),
  output: {
    filename: 'server.bundle.js'
  },
  target: 'node',
  // keep node_module paths out of the bundle
  externals: fs.readdirSync(path.resolve(__dirname, 'node_modules')).concat([
    'react-dom/server',
    'react/addons',
  ]).reduce(function (ext, mod) {
    ext[mod] = 'commonjs ' + mod
    return ext
  }, {}),
  node: {
    __filename: true,
    __dirname: true
  },
  module: {
    loaders: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        loader: 'babel-loader?presets[]=es2015&presets[]=react'
      }
    ]
  }
}

We've also added the following code to server.js:

// server.js
import React from 'react'
import { renderToString } from 'react-dom/server'
import { match, RouterContext } from 'react-router'
import routes from './components/routes'

const app = express()

...

app.get('*', (req, res) => {
  match({ routes: routes, location: req.url }, (err, redirect, props) => {
    if (err) {
      res.status(500).send(err.message)
    } else if (redirect) {
      res.redirect(redirect.pathname + redirect.search)
    } else if (props) {
      const appHtml = renderToString(<RouterContext {...props} />)
      res.send(renderPage(appHtml))
    } else {
      res.status(404).send('Not Found')
    }
  })
})

function renderPage(appHtml) {
  return `
    <!DOCTYPE html>
    <html lang="en">
    <head>
      <meta charset="UTF-8">
      <title>Universal Blog</title>
    </head>
    <body>
      <div id="app">${appHtml}</div>
      <script src="/bundle.js"></script>
    </body>
    </html>
  `
}

...
Using React Router's match function, the server can find the appropriate requested route, renderToString, and send the markup down the wire. Run npm start to build the client and server bundles and start the app. Fantastic, right? We're not done yet. Even though the markup is being generated on the server, we're still fetching all the data client side. Go ahead and click through the posts with your dev tools open, and you'll see the requests. It would be far better to load the data while we're rendering the markup, instead of having to request it separately on the client. Since server rendering and universal apps are still bleeding-edge, there aren't really any established best practices for data loading. If you're using some kind of Flux implementation, there may be some specific guidance. But for this use case, we will simply grab all the posts and feed them through our app. In order to do this, we first need to do some refactoring on our current architecture.

Step 5: Data Flow Refactor

git checkout data-flow-refactor && npm install

It's a little weird how each post page has to make a request to the server for its content, even though the App component already has all the posts in its state. A better solution would be to have App simply pass the appropriate content down to the Post component.

// components/routes.js
import React from 'react'
import { Route } from 'react-router'
import App from './App'
import Post from './Post'

module.exports = (
  <Route path="/" component={App}>
    <Route path="/:postId/:postName" />
  </Route>
)

In our routes.js, we've made the Post route a componentless route. It's still a child of the App route, but now has to completely rely on the App component for rendering. Below are the changes to App.js:

// components/App.js
...
  render() {
    const posts = this.state.posts.map((post) => {
      const linkTo = `/${post.id}/${post.slug}`;
      return (
        <li key={post.id}>
          <Link to={linkTo}>{post.title}</Link>
        </li>
      )
    })
    const { postId, postName } = this.props.params;
    let postTitle, postContent
    if (postId && postName) {
      const post = this.state.posts.find(p => p.id == postId)
      postTitle = post.title
      postContent = post.content
    }
    return (
      <div>
        <h3>Posts</h3>
        <ul>
          {posts}
        </ul>
        {postTitle && postContent ? (
          <Post title={postTitle} content={postContent} />
        ) : (
          <h1>Welcome to the Universal Blog!</h1>
        )}
      </div>
    )
  }
}

export default App

If we are on a post page, then props.params.postId and props.params.postName will both be defined and we can use them to grab the desired post and pass the data on to the Post component to be rendered. If those properties are not defined, then we're on the home page and can simply render a greeting. Now, our Post.js component can be a simple stateless functional component that simply renders its properties.

// components/Post.js
import React from 'react'

const Post = ({title, content}) => (
  <div>
    <h3>{title}</h3>
    <p>{content}</p>
  </div>
)

export default Post

With that refactoring complete, we're ready to implement data loading.

Step 6: Data Loading

git checkout data-loading && npm install

For this final step, we just need to make two small changes in server.js and App.js:

// server.js
...
app.get('*', (req, res) => {
  match({ routes: routes, location: req.url }, (err, redirect, props) => {
    if (err) {
      res.status(500).send(err.message)
    } else if (redirect) {
      res.redirect(redirect.pathname + redirect.search)
    } else if (props) {
      const routerContextWithData = (
        <RouterContext
          {...props}
          createElement={(Component, props) => {
            return <Component posts={posts} {...props} />
          }}
        />
      )
      const appHtml = renderToString(routerContextWithData)
      res.send(renderPage(appHtml))
    } else {
      res.status(404).send('Not Found')
    }
  })
})

...

// components/App.js
import React from 'react'
import Post from './Post'
import { Link, IndexLink } from 'react-router'

const allPostsUrl = '/api/post'

class App extends React.Component {
  constructor(props) {
    super(props)
    this.state = {
      posts: props.posts || []
    }
  }
  ...

In server.js, we're changing how the RouterContext creates elements by overwriting its createElement function and passing in our data as additional props. These props will get passed to any component that is matched by the route, which in this case will be our App component. Then, when the App component is initialized, it sets its posts state property to what it got from props, or to an empty array. That's it! Run npm start one last time, and cruise through your app. You can even disable JavaScript, and the app will automatically degrade to requesting whole pages. Thanks for reading!

About the author

John Oerter is a software engineer from Omaha, Nebraska, USA. He has a passion for continuous improvement and learning in all areas of software development, including Docker, JavaScript, and C#.

Packt
15 Oct 2010
9 min read

Displaying Data with Grids in Ext JS

What is a grid? Ext JS grids are similar to a spreadsheet; there are two main parts to each spreadsheet: Columns Rows Here our columns are Title, Released, Genre, and Price. Each of the rows contains movies such as The Big Lebowski, Super Troopers, and so on. The rows are really our data; each row in the grid represents a record of data held in a data store. GridPanel is databound Like many Ext JS Components, the GridPanel class is bound to a data store which provides it with the data shown in the user interface. So the first step in creating our GridPanel is creating and loading a store. The data store in Ext JS gives us a consistent way of reading different data formats such as XML and JSON, and using this data in a consistent way throughout all of the Ext JS widgets. Regardless of whether this data is originally provided in JSON, XML, an array, or even a custom data type of our own, it's all accessed in the same way thanks to the consistency of the data store and how it uses a separate reader class which interprets the raw data. Instead of using a pre-configured class, we will explicitly define the classes used to define and load a store. The record definition We first need to define the fields which a Record contains. A Record is a single row of data and we need to define the field names, which we want to be read into our store. We define the data type for each field, and, if necessary, we define how to convert from the raw data into the field's desired data type. What we will be creating is a new class. We actually create a constructor which will be used by Ext JS to create records for the store. 
As the 'create' method creates a constructor, we reference the resulting function with a capitalized variable name, as per standard CamelCase syntax: var Movie = Ext.data.Record.create([ 'id', 'coverthumb', 'title', 'director', 'runtime', {name: 'released', type: 'date', dateFormat: 'Y-m-d'}, 'genre', 'tagline', {name: 'price', type: 'float'}, {name: 'available', type: 'bool'} ]); Each element in the passed array defines a field within the record. If an item is just a string, the string is used as the name of the field, and no data conversion is performed; the field's value will be whatever the Reader object (which we will learn more about soon) finds in the raw data. If data conversion is required, then a field definition in the form of an object literal instead of a string may contain the following config options: name: The name by which the field will be referenced. type: The data type to which the raw data item will be converted when stored in the record. Values may be 'int', 'float', 'string', 'date', or 'bool'. dateFormat: If the type of data to be held in the field is a date type, then we need to specify a format string as used by the Date.parseDate function. Defining the data type can help to alleviate future problems: instead of having to deal with everything as string data, we can work with actual dates, Boolean values, and numbers. The following is a list of the built-in data types:
string: String data.
int: Number; uses JavaScript's parseInt function.
float: Floating point number; uses JavaScript's parseFloat function.
boolean: True/False data.
date: Date data; a dateFormat config is required to interpret incoming data.
Now that the first step has been completed, and we have a simple Record definition in place, we can move on to the next step of configuring a Reader that is able to understand the raw data.
The Reader
A store may accept raw data in several different formats.
Raw data rows may be in the form of a simple array of values, or an object with named properties referencing the values, or even an XML element in which the values are child nodes. We need to configure a store with a Reader object which knows how to extract data from the raw data that we are dealing with. There are three built in Reader classes in Ext JS. ArrayReader The ArrayReader class can create a record from an array of values. By default, values are read out of the raw array into a record's fields in the order that the fields were declared in the record definition we created. If fields are not declared in the same order that the values occur in the rows, the field's definition may contain a mapping config which specifies the array index from which we retrieve the field's value. JsonReader This JSONReader class is the most commonly used, and can create a record from raw JSON by decoding and reading the object's properties. By default, the field's name is used as the property name from which it takes the field's value. If a different property name is required, the field's definition may contain a mapping config which specifies the name of the property from which to retrieve the field's value. XmlReader An XMLReader class can create a record from an XML element by accessing child nodes of the element. By default, the field's name is used as the XPath mapping (not unlike HTML) from which to take the field's value. If a different mapping is required, the field's definition may contain a mapping config which specifies the mapping from which to retrieve the field's value. Loading our data store In our first attempt, we are going to create a grid that uses simple local array data stored in a JavaScript variable. The data we're using below in the movieData variable is taken from a very small movie database of some of my favorite movies. The data store needs two things: the data itself, and a description of the data—or what could be thought of as the fields. 
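To make the default-order and mapping behaviour concrete, here is a minimal plain-JavaScript sketch of what an ArrayReader-style mapping does conceptually. This is illustrative only, not the actual Ext JS implementation, and the readArrayRow name is ours:

```javascript
// Illustrative sketch only - not Ext JS internals. Each field definition is
// either a plain name (values are read in declaration order) or an object
// whose 'mapping' property gives an explicit index into the raw row.
function readArrayRow(fields, row) {
  var record = {};
  fields.forEach(function (field, i) {
    if (typeof field === 'string') {
      record[field] = row[i];          // default: same order as declared
    } else {
      var index = field.mapping !== undefined ? field.mapping : i;
      record[field.name] = row[index]; // explicit mapping overrides the order
    }
  });
  return record;
}

var fields = ['id', 'title', { name: 'price', mapping: 3 }];
var record = readArrayRow(fields, [1, 'Office Space', 'Mike Judge', 19.95]);
// record is { id: 1, title: 'Office Space', price: 19.95 }
```

A JsonReader works the same way in spirit, except that the mapping defaults to the field's name as an object property rather than an array index.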
A reader will be used to read the data from the array, and this is where we define the fields of data contained in our array. The following code should be placed before the Ext JS OnReady function: var movieData = [ [ 1, "Office Space", "Mike Judge", 89, "1999-02-19", 1, "Work Sucks", "19.95", 1 ],[ 3, "Super Troopers", "Jay Chandrasekhar", 100, "2002-02-15", 1, "Altered State Police", "14.95", 1 ] //...more rows of data removed for readability...// ]; var store = new Ext.data.Store({ data: movieData, reader: new Ext.data.ArrayReader({idIndex: 0}, Movie) }); If we view this code in a browser, we would not see anything—that's because a data store is just a way of loading and keeping track of our data. The web browser's memory has our data in it. Now we need to configure the grid to display our data to the user.
Displaying structured data with a GridPanel
Displaying data in a grid requires several Ext JS classes to cooperate: A Store: A data store is a client-side analogue of a database table. It encapsulates a set of records, each of which contains a defined set of fields. A Record definition: This defines the fields (or "columns" in database terminology) which make up each record in the Store. Field name and datatype are defined here. A Reader, which uses a Record definition to extract field values from a raw data object to create the records for a Store. A ColumnModel, which specifies the details of each column, including the column header to display, and the name of the record field to be displayed for each column. A GridPanel: A panel subclass which provides the controller logic to bind the above classes together to generate the grid's user interface. If we were to display the data just as the store sees it now, we would end up with something like this: Now that is ugly—here's a breakdown of what's happening: The Released date has been type set properly as a date, and interpreted from the string value in our data.
It's provided in a native JavaScript date format—luckily Ext JS has ways to make this look pretty. The Price column has been type set as a floating point number. Note that there is no need to specify the decimal precision. The Avail column has been interpreted as an actual Boolean value, even if the raw data was not an actual Boolean value. As you can see, it's quite useful to specify the type of data that is being read, and apply any special options that are needed so that we don't have to deal with converting data elsewhere in our code. Before we move on to displaying the data in our grid, we should take a look at how the convert config works, as it can come in quite useful. Converting data read into the store If we need to, we can convert data as it comes into the store, massage it, remove any quirky parts, or create new fields all together. This should not be used as a way to change the display of data; that part will be handled elsewhere. A common task might be to remove possible errors in the data when we load it, making sure it's in a consistent format for later actions. This can be done using a convert function, which is defined in the 'convert' config by providing a function, or reference to a function. In this case we are going to create a new field by using the data from another field and combining it with a few standard strings. var store = new Ext.data.Store({ data: movieData, reader: new Ext.data.ArrayReader({id:'id'}, [ 'id', 'title', 'director', {name: 'released', type: 'date', dateFormat: 'Y-m-d'}, 'genre', 'tagline', 'price', 'available', {name:'coverthumb',convert:function(v, rawData){ return 'images/'+rawData[0]+'m.jpg'; }} ]) }); This convert function when used in this manner will create a new field of data that looks like this: 'images/5m.jpg' We will use this new field of data shortly, so let's get a grid up and running.
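Because a convert function is plain JavaScript, it can be exercised in isolation before being wired into a store. Here is a small sketch using the function from the listing above; the 'images/' path convention comes from the example, and the standalone function name is ours:

```javascript
// The coverthumb convert function from the store definition, extracted for
// illustration. 'v' is the raw field value (unused here); 'rawData' is the
// whole raw row, whose first element is the record id.
function coverthumbConvert(v, rawData) {
  return 'images/' + rawData[0] + 'm.jpg';
}

var thumb = coverthumbConvert(undefined, [5, 'Office Space']);
// thumb is 'images/5m.jpg'
```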

John Oerter
27 Sep 2016
8 min read

Build a Universal JavaScript App, Part 1

In this two part post series, we will walk through how to write a universal (or isomorphic) JavaScript app. This first part will cover what a universal JavaScript application is, why it is such an exciting concept, and the first two steps for creating the app, which are serving post data and adding React. In Part 2 of this series we walk through steps 3-6, which are client-side routing with React Router, server rendering, data flow refactoring, and data loading of the app. What is a Universal JavaScript app? To put it simply, a universal JavaScript app is an application that can render itself on the client and the server. It combines the features of traditional server-side MVC frameworks (Rails, ASP.NET MVC, and Spring MVC), where markup is generated on the server and sent to the client, with the features of SPA frameworks (Angular, Ember, Backbone, and so on), where the server is only responsible for the data and the client generates markup. Universal or Isomorphic? There has been some debate in the JavaScript community over the terms "universal" and "isomorphic" to describe apps that can run on the client and server. I personally prefer the term "universal," simply because it's a more familiar word and makes the concept easier to understand. If you're interested in this discussion, you can read the below articles: Isomorphic JavaScript: The Future of Web Apps by Spike Brehm popularizes the term "isomorphic". Universal JavaScript by Michael Jackson puts forth the term "universal" as a better alternative. Is "Isomorphic JavaScript" a good term? by Dr. Axel Rauschmayer says that maybe certain applications should be called isomorphic and others should be called universal. What are the advantages? Switching between one language on the server and JavaScript on the client can harm your productivity. JavaScript is a unique language that, for better or worse, behaves in a very different way from most server-side languages. 
Writing universal JavaScript apps allows you to simplify your workflow and immerse yourself in JavaScript. If you're writing a web application today, chances are that you're writing a lot of JavaScript anyway. Why not dive in? Node continues to improve with better performance and more features thanks to V8 and its well-run community, and npm is a fantastic package manager with thousands of quality packages available. There is tremendous brain power being devoted to JavaScript right now. Take advantage of it! On top of that, maintainability of a universal app is better because it allows more code reuse. How many times have you implemented the same validation logic in your server and front-end code? Or rewritten utility functions? With some careful architecture and decoupling, you can write and test code once that will work on the server and client.
Performance
SPAs are great because they allow the user to navigate applications without waiting for full pages to be sent down from the server. The cost, however, is longer wait times for the application to be initialized on the first load because the browser needs to receive all the assets needed to run the full app up front. What if there are rarely visited areas in your app? Why should every client have to wait for the logic and assets needed for those areas? This was the problem Netflix solved using universal JavaScript. MVC apps have the inverse problem. Each page only has the markup, assets, and JavaScript needed for that page, but the trade-off is round trips to the server for every page.
SEO
Another disadvantage of SPAs is their weakness in SEO. Although web crawlers are getting better at understanding JavaScript, a site generated on the server will always be superior. With universal JavaScript, any public-facing page on your site can be easily requested and indexed by search engines.
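As a tiny illustration of the code-reuse point made above, a validation helper written once can be require()'d by the Express server and bundled for the browser with webpack. This is a hypothetical module sketched for this article; the file and function names are ours, not from the walkthrough:

```javascript
// validators.js - hypothetical shared module. The same file runs under Node
// and, once bundled, in the browser, so the validation logic exists exactly once.
function isValidSlug(slug) {
  // lowercase words separated by single hyphens, e.g. 'expert-node'
  return typeof slug === 'string' && /^[a-z0-9]+(-[a-z0-9]+)*$/.test(slug);
}

module.exports = { isValidSlug: isValidSlug };
```

On the server, such a helper could guard the /api/post routes; on the client, the same function can validate form input before submission.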
Building an Example Universal JavaScript App Now that we've gained some background on universal JavaScript apps, let's walk through building a very simple blog website as an example. Here are the tools we'll use: Express React React Router Babel Webpack I've chosen these tools because of their popularity and ease of accomplishing our task. I won't be covering how to use Redux or other Flux implementations because, while useful in a production application, they are not necessary for demoing how to create a universal app. To keep things simple, we will forgo a database and just store our data in a flat file. We'll also keep the Webpack shenanigans to a minimum and only do what is necessary to transpile and bundle our code. You can grab the code for this walkthrough at here, and follow along. There are branches for each step along the way. Be sure to run npm install for each step. Let's get started! Step 1: Serving Post Data git checkout serving-post-data && npm install We're going to start off slow, and simply set up the data we want to serve. Our posts are stored in the posts.js file, and we just have a simple Express server in server.js that takes requests at /api/post/{id}. Snippets of these files are below. // posts.js module.exports = [ ... { id: 2, title: 'Expert Node', slug: 'expert-node', content: 'Street art 8-bit photo booth, aesthetic kickstarter organic raw denim hoodie non kale chips pour-over occaecat. Banjo non ea, enim assumenda forage excepteur typewriter dolore ullamco. Pickled meggings dreamcatcher ugh, church-key brooklyn portland freegan normcore meditation tacos aute chicharrones skateboard polaroid. Delectus affogato assumenda heirloom sed, do squid aute voluptate sartorial. Roof party drinking vinegar franzen mixtape meditation asymmetrical. Yuccie flexitarian est accusamus, yr 3 wolf moon aliqua mumblecore waistcoat freegan shabby chic. 
Irure 90's commodo, letterpress nostrud echo park cray assumenda stumptown lumbersexual magna microdosing slow-carb dreamcatcher bicycle rights. Scenester sartorial duis, pop-up etsy sed man bun art party bicycle rights delectus fixie enim. Master cleanse esse exercitation, twee pariatur venmo eu sed ethical. Plaid freegan chambray, man braid aesthetic swag exercitation godard schlitz. Esse placeat VHS knausgaard fashion axe cred. In cray selvage, waistcoat 8-bit excepteur duis schlitz. Before they sold out bicycle rights fixie excepteur, drinking vinegar normcore laboris 90's cliche aliqua 8-bit hoodie post-ironic. Seitan tattooed thundercats, kinfolk consectetur etsy veniam tofu enim pour-over narwhal hammock plaid.' }, ... ] // server.js ... app.get('/api/post/:id?', (req, res) => { const id = req.params.id if (!id) { res.send(posts) } else { const post = posts.find(p => p.id == id); if (post) res.send(post) else res.status(404).send('Not Found') } }) ... You can start the server by running node server.js, and then request all posts by going to localhost:3000/api/post or a single post by id such as localhost:3000/api/post/0. Great! Let's move on. Step 2: Add React git checkout add-react && npm install Now that we have the data exposed via a simple web service, let's use React to render a list of posts on the page. Before we get there, however, we need to set up webpack to transpile and bundle our code. Below is our simple webpack.config.js to do this: // webpack.config.js var webpack = require('webpack') module.exports = { entry: './index.js', output: { path: 'public', filename: 'bundle.js' }, module: { loaders: [ { test: /.js$/, exclude: /node_modules/, loader: 'babel-loader?presets[]=es2015&presets[]=react' } ] } } All we're doing is bundling our code with index.js as an entry point and writing the bundle to a public folder that will be served by Express. 
Speaking of index.js, here it is: // index.js import React from 'react' import { render } from 'react-dom' import App from './components/app' render ( <App />, document.getElementById('app') ) And finally, we have App.js: // components/App.js import React from 'react' const allPostsUrl = '/api/post' class App extends React.Component { constructor(props) { super(props) this.state = { posts: [] } } componentDidMount() { const request = new XMLHttpRequest() request.open('GET', allPostsUrl, true) request.setRequestHeader('Content-type', 'application/json'); request.onload = () => { if (request.status === 200) { this.setState({ posts: JSON.parse(request.response) }); } } request.send(); } render() { const posts = this.state.posts.map((post) => { return <li key={post.id}>{post.title}</li> }) return ( <div> <h3>Posts</h3> <ul> {posts} </ul> </div> ) } } export default App Once the App component is mounted, it sends a request for the posts, and renders them as a list. To see this step in action, build the webpack bundle first with npm run build:client. Then, you can run node server.js just like before. http://localhost:3000 will now display a list of our posts. Conclusion Now that React has been added, take a look at Part 2 where we cover client-side routing with React Router, server rendering, data flow refactoring, and data loading of the app. About the author John Oerter is a software engineer from Omaha, Nebraska, USA. He has a passion for continuous improvement and learning in all areas of software development, including Docker, JavaScript, and C#. He blogs at here.

Packt
14 Oct 2016
7 min read

Learning How to Manage Records in Visualforce

In this article by Keir Bowden, author of the book Visualforce Development Cookbook - Second Edition, we will cover styling fields and table columns as required. One of the common use cases for Visualforce pages is to simplify, streamline, or enhance the management of sObject records. In this article, we will use Visualforce to carry out some more advanced customization of the user interface—redrawing the form to change available picklist options, or capturing different information based on the user's selections. (For more resources related to this topic, see here.)
Styling fields as required
Standard Visualforce input components, such as <apex:inputText />, can take an optional required attribute. If set to true, the component will be decorated with a red bar to indicate that it is required, and form submission will fail if a value has not been supplied, as shown in the following screenshot: In the scenario where one or more inputs are required and there are additional validation rules, for example, when one of either the Email or Phone fields must be defined for a contact, this can lead to a drip feed of error messages to the user. This is because the inputs make repeated unsuccessful attempts to submit the form, each time getting slightly further in the process. Now, we will create a Visualforce page that allows a user to create a contact record. The Last Name field is captured through a non-required input decorated with a red bar identical to that created for required inputs. When the user submits the form, the controller validates that the Last Name field is populated and that one of the Email or Phone fields is populated. If any of the validations fail, details of all errors are returned to the user.
Getting ready
This topic makes use of a controller extension, so this must be created before the Visualforce page.
How to do it…
Navigate to the Apex Classes setup page by clicking on Your Name | Setup | Develop | Apex Classes.
Click on the New button. Paste the contents of the RequiredStylingExt.cls Apex class from the code downloaded into the Apex Class area. Click on the Save button. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages. Click on the New button. Enter RequiredStyling in the Label field. Accept the default RequiredStyling that is automatically generated for the Name field. Paste the contents of the RequiredStyling.page file from the code downloaded into the Visualforce Markup area and click on the Save button. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages. Locate the entry for the RequiredStyling page and click on the Security link. On the resulting page, select which profiles should have access and click on the Save button. How it works… Opening the following URL in your browser displays the RequiredStyling page to create a new contact record: https://<instance>/apex/RequiredStyling. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com. Clicking on the Save button without populating any of the fields results in the save failing with a number of errors: The Last Name field is constructed from a label and text input component rather than a standard input field, as an input field would enforce the required nature of the field and stop the submission of the form: <apex:pageBlockSectionItem > <apex:outputLabel value="Last Name"/> <apex:outputPanel id="detailrequiredpanel" layout="block" styleClass="requiredInput"> <apex:outputPanel layout="block" styleClass="requiredBlock" /> <apex:inputText value="{!Contact.LastName}"/> </apex:outputPanel> </apex:pageBlockSectionItem> The required styles are defined in the Visualforce page rather than relying on any existing Salesforce style classes to ensure that if Salesforce changes the names of its style classes, this does not break the page. 
The controller extension save action method carries out validation of all fields and attaches error messages to the page for all validation failures: if (String.IsBlank(cont.name)) { ApexPages.addMessage(new ApexPages.Message( ApexPages.Severity.ERROR, 'Please enter the contact name')); error=true; } if ( (String.IsBlank(cont.Email)) && (String.IsBlank(cont.Phone)) ) { ApexPages.addMessage(new ApexPages.Message( ApexPages.Severity.ERROR, 'Please supply the email address or phone number')); error=true; } Styling table columns as required When maintaining records that have required fields through a table, using regular input fields can end up with an unsightly collection of red bars striped across the table. Now, we will create a Visualforce page to allow a user to create a number of contact records via a table. The contact Last Name column header will be marked as required, rather than the individual inputs. Getting ready This topic makes use of a custom controller, so this will need to be created before the Visualforce page. How to do it… First, create the custom controller by navigating to the Apex Classes setup page by clicking on Your Name | Setup | Develop | Apex Classes. Click on the New button. Paste the contents of the RequiredColumnController.cls Apex class from the code downloaded into the Apex Class area. Click on the Save button. Next, create a Visualforce page by navigating to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages. Click on the New button. Enter RequiredColumn in the Label field. Accept the default RequiredColumn that is automatically generated for the Name field. Paste the contents of the RequiredColumn.page file from the code downloaded into the Visualforce Markup area and click on the Save button. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages. Locate the entry for the RequiredColumn page and click on the Security link. 
On the resulting page, select which profiles should have access and click on the Save button. How it works… Opening the following URL in your browser displays the RequiredColumn page: https://<instance>/apex/RequiredColumn. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com. The Last Name column header is styled in red, indicating that this is a required field. Attempting to create a record where only First Name is specified results in an error message being displayed against the Last Name input for the particular row: The Visualforce page sets the required attribute on the inputField components in the Last Name column to false, which removes the red bar from the component: <apex:column > <apex:facet name="header"> <apex:outputText styleclass="requiredHeader" value="{!$ObjectType.Contact.fields.LastName.label}" /> </apex:facet> <apex:inputField value="{!contact.LastName}" required="false"/> </apex:column> The Visualforce page custom controller Save method checks if any of the fields in the row are populated, and if this is the case, it checks that the last name is present. If the last name is missing from any record, an error is added. If an error is added to any record, the save does not complete: if ( (!String.IsBlank(cont.FirstName)) || (!String.IsBlank(cont.LastName)) ) { // a field is defined - check for last name if (String.IsBlank(cont.LastName)) { error=true; cont.LastName.addError('Please enter a value'); } String.IsBlank() is used as this carries out three checks at once: to check that the supplied string is not null, it is not empty, and it does not only contain whitespace. Summary Thus in this article we successfully mastered the techniques to style fields and table columns as per the custom needs. Resources for Article: Further resources on this subject: Custom Components in Visualforce [Article] Visualforce Development with Apex [Article] Learning How to Manage Records in Visualforce [Article]

Packt
11 May 2011
8 min read

IBM Lotus Domino: exploring view options for the web

Views are important to most Domino applications. They provide the primary means by which documents are located and retrieved. But working with views on the Web is often more complicated or less satisfactory than using views with the Notes client. Several classic view options are available for web applications, all of which have draw-backs and implementation issues. A specific view can be displayed on the Web in several different ways. So it is helpful to consider view attributes that influence design choices, in particular: View content View structure How a view is translated for the Web How a view looks in a browser Whether or not a view template is used View performance Document hierarchy In terms of content, a view contains: Data only Data and HTML tags In terms of structure, views are: Uncategorized Categorized In terms of the techniques used by Domino to translate views for the Web, there are four basic methods: Domino-generated HTML (the default) Developer-generated HTML (the view contains data and HTML tags) View Applet (used with data only views) XML (the view is transmitted to the browser as an XML document) The first three of these methods are easier to implement. Two options on the Advanced tab of View Properties control which of these three methods is used: Treat view contents as HTML Use applet in the browser If neither option is checked, then Domino translates the view into an HTML table and then sends the page to the browser. If Treat view contents as HTML is checked, then Domino sends the view to the browser as is, assuming that the developer has encoded HTML table tags in the view. If Use applet in the browser is checked, then Domino uses the Java View Applet to display the view. (As mentioned previously, the Java Applets can be slow to load, and they do require a locally installed JVM (Java Virtual Machine)). Using XML to display views in a browser is a more complicated proposition, and we will not deal with it here. 
Pursue this and other XML-related topics in Designer Help or on the Web. Here is a starting point: http://www.ibm.com/developerworks/xml/ In terms of how a view looks when displayed in a browser, two alternatives can be used: Native styling with Designer Styling with Cascading Style Sheets In terms of whether or not a view template is used, there are three choices: A view template is not used The default view template is used A view template created for a specific view is used Finally, view performance can be an issue for views with many: Documents Columns Column formulas Column sorting options Each view is indexed and refreshed according to a setting on the Advanced tab of View Properties. By default, view indices are set to refresh automatically when documents are added or deleted. If the re-indexing process takes longer, then application response time can suffer. In general, smaller and simpler views with fewer column formulas perform better than long, complicated and computationally intensive views. The topics in this section deal with designing views for the Web. The first few topics review the standard options for displaying views. Later topics offer suggestions about improving view look and feel. Understand view Action buttons As you work with views on the Web, keep in mind that Action buttons are always placed at the top of the page regardless of how the view is displayed on the Web (standard view, view contents as HTML) and regardless of whether or not a view template is used. Unless the Action Bar is displayed with the Java applet, Action buttons are rendered in a basic HTML table; a horizontal rule separates the Action buttons from the rest of the form. Bear in mind that the Action buttons are functionally connected to but stylistically independent of the view and view template design elements that display lower on the form. Use Domino-generated default views When you look at a view on the Web, the view consists only of column headings and data rows. 
Everything else on the page (below any Action buttons) is contained on a view template form. You can create view templates in your design, or you can let Domino provide a default form. If Domino supplies the view template, the rendered page is fairly basic. Below the Action buttons and the horizontal rule, standard navigational hotspots are displayed; these navigational hotspots are repeated below the view. Expand and Collapse hotspots are included to support categorized views and views that include documents in response hierarchies. The view title displays below the top set of navigational hotspots, and then the view itself appears. If you supply a view template for a view, you must design the navigational hotspots, view title, and other design elements that may be required. View contents are rendered as an HTML table with columns that expand or contract depending upon the width of cell contents. If view columns enable sorting, then sorting arrows appear to the right of column headings. Here is an example of how Domino displays a view by default on the Web: In this example, clicking the blue underscored values in the left-most Last Name column opens the corresponding documents. By default, values in the left-most column are rendered as URL links, but any other column—or several columns—can serve this purpose. To change which column values are clickable, enable or disable the Show values in this column as links option on the Advanced tab of Column Properties: Typically a title, subject, or another unique document attribute is enabled as the link. Out of the box default views are a good choice for rapid prototyping or for one-time needs where look-and-feel are less important. Beyond designing the views, nothing else is required. Domino merges the views with HTML tags and a little JavaScript to produce fully functional pages. On the down side, what you see is what you get. 
Default views are stylistically uninspiring, and there is not a lot that can be done with them beyond some modest Designer-applied styling. Many Designer-applied styles, such as column width, are not translated to the Web. Still, some visual improvements can be made. In this example, the font characteristics are modified, and an alternate row background color is added: Include HTML tags to enhance views Some additional styling and behavior can be coded into standard views using HTML tags and CSS rules. Here is how this is done: In this example, <font> tags surround the column Title. Note the square brackets that identify the tags as HTML: Tags can also be inserted into column value formulas: "[<font color='darkblue'>]" + ContactLast + "[</font>]" When viewed with a browser, the new colors are displayed as expected. But when the view is opened in Notes, it looks like this: The example illustrates how to code the additional tags, but frankly the same effects can be achieved using Designer-applied formatting, so there is no real gain here. The view takes longer to code and the final result is not very reader-friendly when viewed with the Notes client. That being said, there still may be occasions when you want to add HTML tags to achieve a particular result. Here is a somewhat more complicated application of the same technique. This next line of code is added to the Title of a column. Note the use of <sup> and <font> tags. These tags apply only to the message See footnote 1: Last Name[<sup><font color='red'>]See footnote 1[</font></sup>] The result achieves the desired effect: More challenging is styling values in view columns. You do not have access to the <td> or <font> tags that Domino inserts into the page to define table cell contents. But you can add <span> tags around a column value, and then use CSS rules to style the span. 
Here is what the column formula might look like: "[<span class='column1'>]" + ContactLast + "[</span>]" Here is the CSS rule for the column1 class: .column1 { background-color: #EEE; cursor: default; display: block; font-weight: bold; text-decoration: none; width: 100%; } These declarations change the background color of the cell to a light gray and the pointer to the browser default. The display and width declarations force the span to occupy the width of the table cell. The text underscoring (for the link) is removed and the text is made bold. Without the CSS rule, the view displays as expected: With the CSS rule applied, a different look for the first column is achieved:
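To make the pattern concrete, the markup that the column formula produces can be sketched in JavaScript. This is a hypothetical helper for illustration only; on a real Domino server, the formula language shown above is what actually builds the value.

```javascript
// Hypothetical helper showing the HTML that the column formula
// "[<span class='column1'>]" + ContactLast + "[</span>]" emits
// for a given last name. The class name matches the CSS rule above.
function wrapColumnValue(lastName, className) {
  return "<span class='" + className + "'>" + lastName + "</span>";
}

console.log(wrapColumnValue('Albertson', 'column1'));
// <span class='column1'>Albertson</span>
```

Because Domino places this string inside the `<td>` it generates, the `display: block` and `width: 100%` declarations are what let the span fill the cell.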
Packt
29 Sep 2016
7 min read

Learning How to Manage Records in Visualforce

In this article by Keir Bowden, author of the book Visualforce Development Cookbook - Second Edition, we will cover styling fields and table columns as required. One of the common use cases for Visualforce pages is to simplify, streamline, or enhance the management of sObject records. In this article, we will use Visualforce to carry out some more advanced customization of the user interface: redrawing the form to change available picklist options, or capturing different information based on the user's selections. (For more resources related to this topic, see here.) Styling fields as required Standard Visualforce input components, such as <apex:inputText />, can take an optional required attribute. If set to true, the component will be decorated with a red bar to indicate that it is required, and form submission will fail if a value has not been supplied, as shown in the following screenshot: In the scenario where one or more inputs are required and there are additional validation rules, for example, when at least one of the Email or Phone fields must be defined for a contact, this can lead to a drip feed of error messages to the user. This is because the inputs make repeated unsuccessful attempts to submit the form, each time getting slightly further in the process. Now, we will create a Visualforce page that allows a user to create a contact record. The Last Name field is captured through a non-required input decorated with a red bar identical to that created for required inputs. When the user submits the form, the controller validates that the Last Name field is populated and that one of the Email or Phone fields is populated. If any of the validations fail, details of all errors are returned to the user. Getting ready This topic makes use of a controller extension, so this must be created before the Visualforce page. How to do it… Navigate to the Apex Classes setup page by clicking on Your Name | Setup | Develop | Apex Classes. 
Click on the New button. Paste the contents of the RequiredStylingExt.cls Apex class from the code downloaded into the Apex Class area. Click on the Save button. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages. Click on the New button. Enter RequiredStyling in the Label field. Accept the default RequiredStyling that is automatically generated for the Name field. Paste the contents of the RequiredStyling.page file from the code downloaded into the Visualforce Markup area and click on the Save button. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages. Locate the entry for the RequiredStyling page and click on the Security link. On the resulting page, select which profiles should have access and click on the Save button. How it works… Opening the following URL in your browser displays the RequiredStyling page to create a new contact record: https://<instance>/apex/RequiredStyling. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com. Clicking on the Save button without populating any of the fields results in the save failing with a number of errors: The Last Name field is constructed from a label and text input component rather than a standard input field, as an input field would enforce the required nature of the field and stop the submission of the form: <apex:pageBlockSectionItem > <apex:outputLabel value="Last Name"/> <apex:outputPanel id="detailrequiredpanel" layout="block" styleClass="requiredInput"> <apex:outputPanel layout="block" styleClass="requiredBlock" /> <apex:inputText value="{!Contact.LastName}"/> </apex:outputPanel> </apex:pageBlockSectionItem> The required styles are defined in the Visualforce page rather than relying on any existing Salesforce style classes to ensure that if Salesforce changes the names of its style classes, this does not break the page. 
The controller extension save action method carries out validation of all fields and attaches error messages to the page for all validation failures: if (String.IsBlank(cont.name)) { ApexPages.addMessage(new ApexPages.Message( ApexPages.Severity.ERROR, 'Please enter the contact name')); error=true; } if ( (String.IsBlank(cont.Email)) && (String.IsBlank(cont.Phone)) ) { ApexPages.addMessage(new ApexPages.Message( ApexPages.Severity.ERROR, 'Please supply the email address or phone number')); error=true; } Styling table columns as required When maintaining records that have required fields through a table, using regular input fields can end up with an unsightly collection of red bars striped across the table. Now, we will create a Visualforce page to allow a user to create a number of contact records via a table. The contact Last Name column header will be marked as required, rather than the individual inputs. Getting ready This topic makes use of a custom controller, so this will need to be created before the Visualforce page. How to do it… First, create the custom controller by navigating to the Apex Classes setup page by clicking on Your Name | Setup | Develop | Apex Classes. Click on the New button. Paste the contents of the RequiredColumnController.cls Apex class from the code downloaded into the Apex Class area. Click on the Save button. Next, create a Visualforce page by navigating to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages. Click on the New button. Enter RequiredColumn in the Label field. Accept the default RequiredColumn that is automatically generated for the Name field. Paste the contents of the RequiredColumn.page file from the code downloaded into the Visualforce Markup area and click on the Save button. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages. Locate the entry for the RequiredColumn page and click on the Security link. 
On the resulting page, select which profiles should have access and click on the Save button. How it works… Opening the following URL in your browser displays the RequiredColumn page: https://<instance>/apex/RequiredColumn. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com. The Last Name column header is styled in red, indicating that this is a required field. Attempting to create a record where only First Name is specified results in an error message being displayed against the Last Name input for the particular row: The Visualforce page sets the required attribute on the inputField components in the Last Name column to false, which removes the red bar from the component: <apex:column > <apex:facet name="header"> <apex:outputText styleclass="requiredHeader" value="{!$ObjectType.Contact.fields.LastName.label}" /> </apex:facet> <apex:inputField value="{!contact.LastName}" required="false"/> </apex:column> The Visualforce page custom controller Save method checks if any of the fields in the row are populated, and if this is the case, it checks that the last name is present. If the last name is missing from any record, an error is added. If an error is added to any record, the save does not complete: if ( (!String.IsBlank(cont.FirstName)) || (!String.IsBlank(cont.LastName)) ) { // a field is defined - check for last name if (String.IsBlank(cont.LastName)) { error=true; cont.LastName.addError('Please enter a value'); } } String.IsBlank() is used as it carries out three checks at once: that the supplied string is not null, that it is not empty, and that it does not contain only whitespace. Summary In this article, we mastered techniques for styling fields and table columns to match custom requirements. Resources for Article: Further resources on this subject: Custom Components in Visualforce [Article] Visualforce Development with Apex [Article] Using Spring JMX within Java Applications [Article]
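The three-in-one check performed by Apex's String.IsBlank() can be mirrored in JavaScript for illustration. This isBlank helper is a hypothetical sketch, not part of any library; in the recipes above, Apex's built-in method is what actually runs.

```javascript
// Illustrative JavaScript equivalent of Apex's String.IsBlank():
// true when the value is null/undefined, empty, or whitespace-only.
function isBlank(value) {
  return value === null || value === undefined || value.trim().length === 0;
}

console.log(isBlank(null));     // true
console.log(isBlank('   '));    // true
console.log(isBlank('Bowden')); // false
```

Collapsing the null, empty, and whitespace cases into one predicate is what lets the controller express "no value supplied" as a single condition per field.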

Packt
03 Aug 2016
9 min read

Showing cached content first then networks

In this article by Sean Amarasinghe, the author of the book Service Worker Development Cookbook, we are going to look at the methods that enable us to control cached content by creating a performance art event viewer web app. If you are a regular visitor to a certain website, chances are that you may be loading most of the resources, like CSS and JavaScript files, from your cache, rather than from the server itself. This saves bandwidth for the server, as well as requests over the network. Having control over which content we deliver from the cache and the server is a great advantage. Service workers provide us with this powerful feature through programmatic control over the content. (For more resources related to this topic, see here.) Getting ready To get started with service workers, you will need to have the service worker experiment feature turned on in your browser settings. Service workers only run across HTTPS. How to do it... Follow these instructions to set up your file structure. 
Alternatively, you can download the files from the following location: https://github.com/szaranger/szaranger.github.io/tree/master/service-workers/03/02/ First, we must create an index.html file as follows: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Cache First, then Network</title> <link rel="stylesheet" href="style.css"> </head> <body> <section id="events"> <h1><span class="nyc">NYC</span> Events TONIGHT</h1> <aside> <img src="hypecal.png" /> <h2>Source</h2> <section> <h3>Network</h3> <input type="checkbox" name="network" id="network- disabled-checkbox"> <label for="network">Disabled</label><br /> <h3>Cache</h3> <input type="checkbox" name="cache" id="cache- disabled-checkbox"> <label for="cache">Disabled</label><br /> </section> <h2>Delay</h2> <section> <h3>Network</h3> <input type="text" name="network-delay" id="network-delay" value="400" /> ms <h3>Cache</h3> <input type="text" name="cache-delay" id="cache- delay" value="1000" /> ms </section> <input type="button" id="fetch-btn" value="FETCH" /> </aside> <section class="data connection"> <table> <tr> <td><strong>Network</strong></td> <td><output id='network-status'></output></td> </tr> <tr> <td><strong>Cache</strong></td> <td><output id='cache-status'></output><td> </tr> </table> </section> <section class="data detail"> <output id="data"></output> </section> <script src="index.js"></script> </body> </html> Create a CSS file called style.css in the same folder as the index.html file. You can find the source code in the following location on GitHub: https://github.com/szaranger/szaranger.github.io/blob/master/service-workers/03/02/style.css Create a JavaScript file called index.js in the same folder as the index.html file. You can find the source code in the following location on GitHub: https://github.com/szaranger/szaranger.github.io/blob/master/service-workers/03/02/index.js Open up a browser and go to index.html. First we are requesting data from the network with the cache enabled. 
Click on the Fetch button. If you click Fetch again, the data is retrieved first from the cache and then from the network, so you see duplicate data (note that the last line is the same as the first). Now we are going to select the Disabled checkbox under the Network label, and click the Fetch button again, in order to fetch data only from the cache. Select the Disabled checkbox under the Network label, as well as the Cache label, and click the Fetch button again. How it works... In the index.js file, we are setting a page-specific name for the cache, as caches are scoped per origin, and no other page should use the same cache name: var CACHE_NAME = 'cache-and-then-network'; If you inspect the Resources tab of the development tools, you will find the cache inside the Cache Storage tab. If we have already fetched network data, we don't want the cache fetch to complete and overwrite the data that we just got from the network. We use the networkDataReceived flag to let the cache fetch callbacks know if a network fetch has already completed: var networkDataReceived = false; We are storing the elapsed time for both network and cache in two variables: var networkFetchStartTime; var cacheFetchStartTime; The example source URL points to a file location in GitHub via RawGit: var SOURCE_URL = 'https://cdn.rawgit.com/szaranger/szaranger.github.io/master/service-workers/03/02/events'; If you want to set up your own source URL, you can easily do so by creating a gist, or a repository, in GitHub, and creating a file with your data in JSON format (you don't need the .json extension). Once you've done that, copy the URL of the file, head over to https://rawgit.com, and paste the link there to obtain another link with a content type header, as shown in the following screenshot: Between the time we press the Fetch button and the completion of receiving data, we have to make sure the user doesn't change the criteria for the search, or press the Fetch button again. 
To handle this situation, we disable the controls: function clear() { outlet.textContent = ''; cacheStatus.textContent = ''; networkStatus.textContent = ''; networkDataReceived = false; } function disableEdit(enable) { fetchButton.disabled = enable; cacheDelayText.disabled = enable; cacheDisabledCheckbox.disabled = enable; networkDelayText.disabled = enable; networkDisabledCheckbox.disabled = enable; if(!enable) { clear(); } } The returned data will be rendered to the screen in rows: function displayEvents(events) { events.forEach(function(event) { var tickets = event.ticket ? '<a href="' + event.ticket + '" class="tickets">Tickets</a>' : ''; outlet.innerHTML = outlet.innerHTML + '<article>' + '<span class="date">' + formatDate(event.date) + '</span>' + ' <span class="title">' + event.title + '</span>' + ' <span class="venue"> - ' + event.venue + '</span> ' + tickets + '</article>'; }); } Each item of the events array will be printed to the screen as rows. The function handleFetchComplete is the callback for both the cache and the network. If the disabled checkbox is checked, we are simulating a network error by throwing an error: var shouldNetworkError = networkDisabledCheckbox.checked, cloned; if (shouldNetworkError) { throw new Error('Network error'); } Because of the reason that request bodies can only be read once, we have to clone the response: cloned = response.clone(); We place the cloned response in the cache using cache.put as a key value pair. This helps subsequent cache fetches to find this update data: caches.open(CACHE_NAME).then(function(cache) { cache.put(SOURCE_URL, cloned); // cache.put(URL, response) }); Now we read the response in JSON format. 
Also, we make sure that any in-flight cache requests will not be overwritten by the data we have just received, using the networkDataReceived flag: response.json().then(function(data) { displayEvents(data); networkDataReceived = true; }); To prevent overwriting the data we received from the network, we make sure only to update the page in case the network request has not yet returned: result.json().then(function(data) { if (!networkDataReceived) { displayEvents(data); } }); When the user presses the fetch button, they make nearly simultaneous requests of the network and the cache for data. This happens on a page load in a real world application, instead of being the result of a user action: fetchButton.addEventListener('click', function handleClick() { ... } We start by disabling any user input while the network fetch requests are initiated: disableEdit(true); networkStatus.textContent = 'Fetching events...'; networkFetchStartTime = Date.now(); We request data with the fetch API, with a cache busting URL, as well as a no-cache option in order to support Firefox, which hasn't implemented the caching options yet: networkFetch = fetch(SOURCE_URL + '?cacheBuster=' + now, { mode: 'cors', cache: 'no-cache', headers: headers }) In order to simulate network delays, we wait before calling the network fetch callback. In situations where the callback errors out, we have to make sure that we reject the promise we received from the original fetch: return new Promise(function(resolve, reject) { setTimeout(function() { try { handleFetchComplete(response); resolve(); } catch (err) { reject(err); } }, networkDelay); }); To simulate cache delays, we wait before calling the cache fetch callback. 
If the callback errors out, we make sure that we reject the promise we got from the original call to match: return new Promise(function(resolve, reject) { setTimeout(function() { try { handleCacheFetchComplete(response); resolve(); } catch (err) { reject(err); } }, cacheDelay); }); The formatDate function is a helper function for us to convert the date format we receive in the response into a much more readable format on the screen: function formatDate(date) { var d = new Date(date), month = (d.getMonth() + 1).toString(), day = d.getDate().toString(), year = d.getFullYear(); if (month.length < 2) month = '0' + month; if (day.length < 2) day = '0' + day; return [month, day, year].join('-'); } If you consider a different date format, you can shuffle the position of the array in the return statement to your preferred format. Summary In this article, we have learned how to control cached content by creating a performance art event viewer web app. Resources for Article: Further resources on this subject: AngularJS Web Application Development Cookbook [Article] Being Offline [Article] Consuming Web Services using Microsoft Dynamics AX [Article]
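As a footnote to the formatDate helper shown above, the manual zero-padding of the month and day can be written more compactly with String.prototype.padStart while keeping the same MM-DD-YYYY output:

```javascript
// Same behavior as the recipe's formatDate, with padStart doing the
// zero-padding instead of the manual length checks.
function formatDate(date) {
  const d = new Date(date);
  const month = String(d.getMonth() + 1).padStart(2, '0');
  const day = String(d.getDate()).padStart(2, '0');
  return [month, day, String(d.getFullYear())].join('-');
}

console.log(formatDate(new Date(2016, 7, 3))); // 08-03-2016
```

As before, shuffling the array positions in the join changes the output format, for example to DD-MM-YYYY.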

Packt
07 Jul 2016
11 min read

Angular's component architecture

In this article, Gion Kunz, author of the book Mastering Angular 2 Components, explains how the concept of directives from the first version of Angular changed the game in frontend UI frameworks. This was the first time that I felt there was a simple yet powerful concept that allowed the creation of reusable UI components. Directives could communicate with DOM events or messaging services. They allowed you to follow the principle of composition, and you could nest directives and create larger directives that solely consisted of smaller directives arranged together. Actually, directives were a very nice implementation of components for the browser. (For more resources related to this topic, see here.) In this section, we'll look into the component-based architecture of Angular 2 and how the previous topic about general UI components fits into Angular. Everything is a component As an early adopter of Angular 2, while talking to other people about it, I was frequently asked what the biggest difference from the first version is. My answer to this question was always the same. Everything is a component. For me, this paradigm shift was the most relevant change, one that both simplified and enriched the framework. Of course, there are a lot of other changes with Angular 2. However, as an advocate of component-based user interfaces, I've found this change the most interesting one. Of course, this change also came with a lot of architectural changes. Angular 2 supports the idea of looking at the user interface holistically and supporting composition with components. However, the biggest difference from its first version is that your pages are no longer global views; they are simply components assembled from other components. If you've been following this chapter, you'll notice that this is exactly what a holistic approach to user interfaces demands. No more pages, but systems of components. 
Angular 2 still uses the concept of directives, although directives are now really what the name suggests: orders for the browser to attach a given behavior to an element. Components are a special kind of directive that comes with a view. Creating a tabbed interface component Let's introduce a new UI component in our ui folder in the project that will provide us with a tabbed interface that we can use for composition. We use what we learned about content projection in order to make this component reusable. We'll actually create two components: one for Tabs, which itself holds individual Tab components. First, let's create the component class within a new tabs/tab folder in a file called tab.js: import {Component, Input, ViewEncapsulation, HostBinding} from '@angular/core'; import template from './tab.html!text'; @Component({ selector: 'ngc-tab', host: { class: 'tabs__tab' }, template, encapsulation: ViewEncapsulation.None }) export class Tab { @Input() name; @HostBinding('class.tabs__tab--active') active = false; } The only state that we store in our Tab component is whether the tab is active or not. The name that is displayed on the tab will be available through an input property. We use a class property binding to make a tab visible. Based on the active flag we set a class; without this, our tabs are hidden. Let's take a look at the tab.html template file of this component: <ng-content></ng-content> This is it already? Actually, yes it is! The Tab component is only responsible for the storage of its name and active state, as well as the insertion of the host element content in the content projection point. There's no additional templating needed. Now, we'll move one level up and create the Tabs component that will be responsible for grouping all the Tab components. 
As we won't include Tab components directly when we want to create a tabbed interface but use the Tabs component instead, this needs to forward content that we put into the Tabs host element. Let's look at how we can achieve this. In the tabs folder, we will create a tabs.js file that contains our Tabs component code, as follows: import {Component, ViewEncapsulation, ContentChildren} from '@angular/core'; import template from './tabs.html!text'; // We rely on the Tab component import {Tab} from './tab/tab'; @Component({ selector: 'ngc-tabs', host: { class: 'tabs' }, template, encapsulation: ViewEncapsulation.None, directives: [Tab] }) export class Tabs { // This queries the content inside <ng-content> and stores a // query list that will be updated if the content changes @ContentChildren(Tab) tabs; // The ngAfterContentInit lifecycle hook will be called once the // content inside <ng-content> was initialized ngAfterContentInit() { this.activateTab(this.tabs.first); } activateTab(tab) { // To activate a tab we first convert the live list to an // array and deactivate all tabs before we set the new // tab active this.tabs.toArray().forEach((t) => t.active = false); tab.active = true; } } Let's observe what's happening here. We used a new @ContentChildren annotation, in order to query our inserted content for directives that match the type that we pass to the decorator. The tabs property will contain an object of the QueryList type, which is an observable list type that will be updated if the content projection changes. You need to remember that content projection is a dynamic process as the content in the host element can actually change, for example, using the NgFor or NgIf directives. We use the AfterContentInit lifecycle hook, which we've already briefly discussed in the Custom UI elements section of Chapter 2, Ready, Set, Go! This lifecycle hook is called after Angular has completed content projection on the component. 
Only then we have the guarantee that our QueryList object will be initialized, and we can start working with child directives that were projected as content. The activateTab function will set the Tab components active flag, deactivating any previous active tab. As the observable QueryList object is not a native array, we first need to convert it using toArray() before we start working with it. Let's now look at the template of the Tabs component that we created in a file called tabs.html in the tabs directory: <ul class="tabs__tab-list"> <li *ngFor="let tab of tabs"> <button class="tabs__tab-button" [class.tabs__tab-button--active]="tab.active" (click)="activateTab(tab)">{{tab.name}}</button> </li> </ul> <div class="tabs__l-container"> <ng-content select="ngc-tab"></ng-content> </div> The structure of our Tabs component is as follows. First we render all the tab buttons in an unordered list. After the unordered list, we have a tabs container that will contain all our Tab components that are inserted using content projection and the <ng-content> element. Note that the selector that we use is actually the selector we use for our Tab component. Tabs that are not active will not be visible because we control this using CSS on our Tab component class attribute binding (refer to the Tab component code). This is all that we need to create a flexible and well-encapsulated tabbed interface component. Now, we can go ahead and use this component in our Project component to provide a segregation of our project detail information. We will create three tabs for now where the first one will embed our task list. We will address the content of the other two tabs in a later chapter. Let's modify our Project component template in the project.html file as a first step. 
Instead of including our TaskList component directly, we now use the Tabs and Tab components to nest the task list into our tabbed interface: <ngc-tabs> <ngc-tab name="Tasks"> <ngc-task-list [tasks]="tasks" (tasksUpdated)="updateTasks($event)"> </ngc-task-list> </ngc-tab> <ngc-tab name="Comments"></ngc-tab> <ngc-tab name="Activities"></ngc-tab> </ngc-tabs> You should have noticed by now that we are actually nesting two components within this template code using content projection, as follows: First, the Tabs component uses content projection to select all the <ngc-tab> elements. As these elements happen to be components too (our Tab component will attach to elements with this name), they will be recognized as such within the Tabs component once they are inserted. In the <ngc-tab> element, we then nest our TaskList component. If we go back to our Task component template, which will be attached to elements with the name ngc-tab, we will have a generic projection point that inserts any content that is present in the host element. Our task list will effectively be passed through the Tabs component into the Tab component. The visual efforts timeline Although the components that we created so far to manage efforts provide a good way to edit and display effort and time durations, we can still improve this with some visual indication. In this section, we will create a visual efforts timeline using SVG. 
This timeline should display the following information: The total estimated duration as a grey background bar The total effective duration as a green bar that overlays on the total estimated duration bar A yellow bar that shows any overtime (if the effective duration is greater than the estimated duration) The following two figures illustrate the different visual states of our efforts timeline component: The visual state if the estimated duration is greater than the effective duration The visual state if the effective duration exceeds the estimated duration (the overtime is displayed as a yellow bar) Let's start fleshing out our component by creating a new EffortsTimeline Component class on the lib/efforts/efforts-timeline/efforts-timeline.js path: … @Component({ selector: 'ngc-efforts-timeline', … }) export class EffortsTimeline { @Input() estimated; @Input() effective; @Input() height; ngOnChanges(changes) { this.done = 0; this.overtime = 0; if (!this.estimated && this.effective || (this.estimated && this.estimated === this.effective)) { // If there's only effective time or if the estimated time // is equal to the effective time we are 100% done this.done = 100; } else if (this.estimated < this.effective) { // If we have more effective time than estimated we need to // calculate overtime and done in percentage this.done = this.estimated / this.effective * 100; this.overtime = 100 - this.done; } else { // The regular case where we have less effective time than // estimated this.done = this.effective / this.estimated * 100; } } } Our component has three input properties: estimated: This is the estimated time duration in milliseconds effective: This is the effective time duration in milliseconds height: This is the desired height of the efforts timeline in pixels In the OnChanges lifecycle hook, we set two component member fields, which are based on the estimated and effective time: done: This contains the width of the green bar in percentage that displays the 
effective duration without overtime that exceeds the estimated duration overtime: This contains the width of the yellow bar in percentage that displays any overtime, which is any time duration that exceeds the estimated duration Let's look at the template of the EffortsTimeline component and see how we can now use the done and overtime member fields to draw our timeline. We will create a new lib/efforts/efforts-timeline/efforts-timeline.html file: <svg width="100%" [attr.height]="height"> <rect [attr.height]="height" x="0" y="0" width="100%" class="efforts-timeline__remaining"></rect> <rect *ngIf="done" x="0" y="0" [attr.width]="done + '%'" [attr.height]="height" class="efforts-timeline__done"></rect> <rect *ngIf="overtime" [attr.x]="done + '%'" y="0" [attr.width]="overtime + '%'" [attr.height]="height" class="efforts-timeline__overtime"></rect> </svg> Our template is SVG-based, and it contains three rectangles for each of the bars that we want to display. The background bar that will be visible if there is remaining effort will always be displayed. Above the remaining bar, we conditionally display the done and the overtime bar using the calculated widths from our component class. Now, we can go ahead and include the EffortsTimeline class in our Efforts component. This way our users will have visual feedback when they edit the estimated or effective duration, and it provides them a sense of overview. 
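The done/overtime arithmetic from the OnChanges hook can be isolated as a pure function to see the three cases at a glance. This is an illustrative sketch, not the component code itself, and it adds a guard for the zero-estimate, zero-effective case that the original leaves undefined:

```javascript
// Mirrors the EffortsTimeline ngOnChanges logic: returns the widths
// (in percent) of the done and overtime bars.
function computeTimeline(estimated, effective) {
  let done = 0;
  let overtime = 0;
  if ((!estimated && effective) || (estimated && estimated === effective)) {
    // Only effective time, or estimated equals effective: 100% done
    done = 100;
  } else if (estimated < effective) {
    // More effective than estimated: split into done and overtime
    done = (estimated / effective) * 100;
    overtime = 100 - done;
  } else if (estimated) {
    // The regular case: less effective time than estimated
    done = (effective / estimated) * 100;
  }
  return { done, overtime };
}

console.log(computeTimeline(4, 8)); // { done: 50, overtime: 50 }
```

Keeping the calculation pure like this also makes the component logic trivially unit-testable without rendering the SVG.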
Let's look into the template of the Efforts component to see how we integrate the timeline: … <ngc-efforts-timeline height="10" [estimated]="estimated" [effective]="effective"> </ngc-efforts-timeline> As we have the estimated and effective duration times readily available in our Efforts component, we can simply create a binding to the EffortsTimeline component input properties: The Efforts component displaying our newly-created efforts timeline component (the overtime of six hours is visualized with the yellow bar) Summary In this article, we learned about the architecture of the components in Angular. We also learned how to create a tabbed interface component and how to create a visual efforts timeline using SVG. Resources for Article: Further resources on this subject: Angular 2.0[article] AngularJS Project[article] AngularJS[article]

Packt
18 Nov 2009
8 min read

User Interaction and Email Automation in Symfony 1.3: Part2

Automated email responses Symfony comes with a default mailer library that is based on Swift Mailer 4; the detailed documentation is available from its web site at http://swiftmailer.org. After a user has signed up to our mailing list, we would like a verification email to be sent to the user's email address. This will inform users that they have signed up, and will also ask them to activate their subscription. To use the library, we have to complete the following three steps: Store the mailing settings in the application settings file. Add the application logic to the action. Create the email template. Adding the mailer settings to the application Just like all the previous settings, we should add all the settings for sending emails to the module.yml file for the signup module. This will make it easier to implement any modifications required later. Initially, we should set variables like the email subject, the from name, the from address, and whether we want to send out emails within the dev environment. I have added the following items to our signup module's settings file, apps/frontend/config/module.yml: dev: mailer_deliver: true all: mailer_deliver: true mailer_subject: Milkshake Newsletter mailer_from_name: Tim mailer_from_email: no-reply@milkshake All of the settings can be contained under the all label. However, you can see that I have introduced a new label called dev. These labels represent the environments, and we have just added a specific variable to the dev environment. This setting will allow us to eventually turn off the sending of emails while in the dev environment. Creating the application logic Triggering the email should occur after the user's details have been saved to the database.
To demonstrate this, I have added the highlighted amends to the submit action in the apps/frontend/modules/signup/actions/actions.class.php file, as shown in the following code: public function executeSubmit(sfWebRequest $request) { $this->form = new NewsletterSignupForm(); if ($request->isMethod('post') && $this->form-> bindAndSave($request->getParameter($this->form-> getName()))) { //Include the swift lib require_once('lib/vendor/swift-mailer/lib/swift_init.php'); try{ //Sendmail $transport = Swift_SendmailTransport::newInstance(); $mailBody = $this->getPartial('activationEmail', array('name' => $this->form->getValue('first_name'))); $mailer = Swift_Mailer::newInstance($transport); $message = Swift_Message::newInstance(); $message->setSubject(sfConfig::get('app_mailer_subject')); $message->setFrom(array(sfConfig:: get('app_mailer_from_email') => sfConfig::get('app_mailer_from_name'))); $message->setTo(array($this->form->getValue('email')=> $this-> form->getValue('first_name'))); $message->setBody($mailBody, 'text/html'); if(sfConfig::get('app_mailer_deliver')) { $result = $mailer->send($message); } } catch(Exception $e) { var_dump($e); exit; } $this->redirect('@signup'); } //Use the index template as it contains the form $this->setTemplate('index'); } Symfony comes with a sfMailer class that extends Swift_Mailer. To send mails you could simply implement the following Symfony method: $this->getMailer()->composeAndSend('from@example.com', 'to@example.com', 'Subject', 'Body'); Let's walk through the process: Instantiate the Swift Mailer. Retrieve the email template (which we will create next) using the $this->getPartial('activationEmail', array('name' => $this->form->getValue('first_name'))) method. Breaking this down, the function itself retrieves a partial template. The first argument is the name of the template to retrieve (that is activationEmail in our example) which, if you remember, means that the template will be called _activationEmail.php. 
The next argument is an array that contains variables related to the partial template. Here, I have set a name variable. The value for the name is important. Notice how I have used the value within the form object to retrieve the first_name value. This is because we know that these values have been cleaned and are safe to use. Set the subject, from, to, and the body items. These functions are Swift Mailer specific: setSubject(): It takes a string as an argument for the subject setFrom(): It takes the name and the mailing address setTo(): It takes the name and the mailing address setBody(): It takes the email body and mime type. Here we passed in our template and set the email to text/html Finally we send the email. There are more methods in Swift Mailer. Check out the documentation on the Swift Mailer web site (http://swiftmailer.org/). The partial email template Lastly, we need to create a partial template that will be used in the email body. In the templates folder of the signup module, create a file called _activationEmail.php and add the following code to it: Hi <?php echo $name; ?>, <br /><br /> Thank you for signing up to our newsletter. <br /><br /> Thank you, <br /> <strong>The Team</strong> The partial is no different from a regular template. We could have opted to pass on the body as a string, but using the template keeps our code uniform. Our signup process now incorporates the functionality to send an email. The purpose of this example is to show you how to send an automated email using a third-party library. For a real application, you should most certainly implement a two-phase option wherein the user must verify his or her action. Flashing temporary values Sometimes it is necessary to set a temporary variable for one request, or make a variable available to another action after forwarding but before having to delete the variable. Symfony provides this level of functionality within the sfUser object known as a flash variable. 
Once a flash variable has been set, it lasts until the end of the overall request before it is automatically destroyed. Setting and getting a flash attribute is managed through two of the sfUser methods. Also, you can test for a flash variable's existence using the third method of the methods listed here: $this->getUser()->setFlash($name, $value, $persist = true) $this->getUser()->getFlash($name) $this->getUser()->hasFlash($name) Although a flash variable will be available by default when a request is forwarded to another action, setting the argument to false will delete the flash variable before it is forwarded. To demonstrate how useful flash variables can be, let's readdress the signup form. After a user submits the signup form, the form is redisplayed. I further mentioned that you could create another action to handle a 'thank you' template. However, by using a flash variable we will not have to do so. As a part of the application logic for the form submission, we can set a flash variable. Then after the action redirects the request, the template can test whether there is a flash variable set. If there is one, the template should show a message rather than the form. Let's add the $this->getUser()->setFlash() function to the submit action in the apps/frontend/modules/signup/actions/actions.class.php file: //Include the swift lib require_once('lib/vendor/swift-mailer/lib/swift_init.php'); //set Flash $this->getUser()->setFlash('Form', 'completed'); try{ I have added the flash variable just under the require_once() statement. After the user has submitted a valid form, this flash variable will be set with the name of the Form and have a value completed. Next, we need to address the template logic. The template needs to check whether a flash variable called Form is set. If it is not set, the template shows the form. Otherwise it shows a thank you message. 
This is implemented using the following code: <?php if(!$sf_user->hasFlash('Form')): ?> <form action="<?php echo url_for('@signup_submit') ?>" method="post" name="Newsletter"> <div style="height: 30px;"> <div style="width: 150px; float: left"> <?php echo $form['first_name']->renderLabel() ?></div> <?php echo $form['first_name']->render(($form['first_name']-> hasError())? array('class'=>'boxError'): array ('class'=>'box')) ?> <?php echo ($form['first_name']->hasError())? ' <span class="errorMessage">* '.$form['first_name']->getError(). '</span>': '' ?> <div style="clear: both"></div> </div> .... </form> <?php else: ?><h1>Thank you</h1>You are now signed up.<?php endif ?> The form is now wrapped inside an if/else block. Accessing the flash variables from a template is done through $sf_user. To test if the variable has been set, I have used the hasFlash() method, $sf_user->hasFlash('Form'). The else part of the statement contains the text rather than the form. Now if you submit your form, you will see the result as shown in the following screenshot: We have now implemented an entire module for a user to sign up for our newsletter. Wouldn't it be really good if we could add this module to another application without all the copying, pasting, and fixing?
Packt
20 Aug 2014
15 min read

AngularJS

In this article, by Rodrigo Branas, author of the book AngularJS Essentials, we will go through the basics of AngularJS. Created by Miško Hevery and Adam Abrons in 2009, AngularJS is an open source, client-side JavaScript framework that promotes a high-productivity web development experience. It was built on the belief that declarative programming is the best choice for constructing the user interface, while imperative programming is better suited to implementing the application's business logic. To achieve that, AngularJS empowers traditional HTML by extending its current vocabulary, making the life of developers easier. The result is the development of expressive, reusable, and maintainable application components, leaving behind a lot of unnecessary code and keeping the team focused on the valuable and important things. (For more resources related to this topic, see here.) Architectural concepts It's been a long time since the famous Model-View-Controller pattern, also known as MVC, started to be widely used in the software development industry, thereby becoming one of the legends of enterprise architecture design. Basically, the model represents the knowledge that the view is responsible for presenting, while the controller mediates their relationship. However, these concepts are a little abstract, and this pattern may have different implementations depending on the language, platform, and purpose of the application. After a lot of discussion about which architectural pattern the framework follows, its authors declared that from now on, AngularJS is adopting Model-View-Whatever (MVW). Regardless of the name, the most important benefit is that the framework provides a clear separation of concerns between the application layers, providing modularity, flexibility, and testability.
In terms of concepts, a typical AngularJS application consists primarily of a view, model, and controller, but there are other important components, such as services, directives, and filters. The view, also called the template, is entirely written in HTML, which becomes a great opportunity to see web designers and JavaScript developers working side by side. It also takes advantage of the directives mechanism, a kind of extension of the HTML vocabulary that brings the ability to perform programming language tasks, such as iterating over an array or even evaluating an expression conditionally. Behind the view, there is the controller. At first, the controller contains all the business logic used by the view. However, as the application grows, it becomes really important to perform some refactoring activities, such as moving code from the controller to other components like services, in order to keep the cohesion high. The connection between the view and the controller is made by a shared object called the scope. It is located between them and is used to exchange information related to the model. The model is a simple Plain-Old-JavaScript-Object (POJO). It looks very clear and easy to understand, bringing simplicity to the development by not requiring any special syntax to be created. Setting up the framework The configuration process is very simple: in order to set up the framework, we start by importing the angular.js script into our HTML file. After that, we need to create the application module by calling the module function from Angular's API with its name and dependencies. With the module already created, we just need to place the ng-app attribute, with the module's name, on the html element or any other element that surrounds the application. This attribute is important because it supports the initialization process of the framework. In the following code, there is an introductory application about a parking lot.
At first, we are able to add and also list the parked cars, storing their plates in memory. Throughout the book, we will evolve this parking control application by incorporating each newly studied concept. index.html <!doctype html> <!-- Declaring the ng-app --> <html ng-app="parking"> <head> <title>Parking</title> <!-- Importing the angular.js script --> <script src="angular.js"></script> <script> // Creating the module called parking var parking = angular.module("parking", []); // Registering the parkingCtrl to the parking module parking.controller("parkingCtrl", function ($scope) { // Binding the cars array to the scope $scope.cars = [ {plate: '6MBV006'}, {plate: '5BBM299'}, {plate: '5AOJ230'} ]; // Binding the park function to the scope $scope.park = function (car) { $scope.cars.push(angular.copy(car)); delete $scope.car; }; }); </script> </head> <!-- Attaching the view to the parkingCtrl --> <body ng-controller="parkingCtrl"> <h3>[Packt] Parking</h3> <table> <thead> <tr> <th>Plate</th> </tr> </thead> <tbody> <!-- Iterating over the cars --> <tr ng-repeat="car in cars"> <!-- Showing the car's plate --> <td>{{car.plate}}</td> </tr> </tbody> </table> <!-- Binding the car object, with plate, to the scope --> <input type="text" ng-model="car.plate"/> <!-- Binding the park function to the click event --> <button ng-click="park(car)">Park</button> </body> </html> The ngController directive was used to bind the parkingCtrl to the view, while the ngRepeat directive iterated over the cars array. Also, we employed expressions like {{car.plate}} to display the plate of the car. Finally, to add new cars, we applied the ngModel directive, which creates a new object called car with the plate property, passing it as a parameter of the park function, called through the ngClick directive. To improve the loading page performance, it is recommended to use the minified and obfuscated version of the script that can be identified by angular.min.js.
Both minified and regular distributions of the framework can be found on the official site of AngularJS, that is, http://www.angularjs.org, or they can be referenced directly from the Google Content Delivery Network (CDN). What is a directive? A directive is an extension of the HTML vocabulary that allows the creation of new behaviors. This technology lets developers create reusable components that can be used within the whole application, and even provide their own custom components. It may be applied as an attribute, element, class, and even as a comment, by using the camelCase syntax. However, because HTML is case-insensitive, we need to use a lowercase form. For the ngModel directive, we can use ng-model, ng:model, ng_model, data-ng-model, and x-ng-model in the HTML markup. Using AngularJS built-in directives By default, the framework brings a basic set of directives to iterate over an array, execute a custom behavior when an element is clicked, or even show a given element based on a conditional expression, among many others. ngBind This directive is generally applied to a span element and replaces the content of the element with the result of the provided expression. It has the same meaning as that of the double curly markup, for example, {{expression}}. Why would anyone like to use this directive when a less verbose alternative is available? This is because when the page is being compiled, there is a moment when the raw state of the expressions is shown. Since the directive is defined by the attribute of the element, it is invisible to the user.
Here is an example of the ngBind directive usage: index.html <!doctype html> <html ng-app="parking"> <head> <title>[Packt] Parking</title> <script src="angular.js"></script> <script> var parking = angular.module("parking", []); parking.controller("parkingCtrl", function ($scope) { $scope.appTitle = "[Packt] Parking"; }); </script> </head> <body ng-controller="parkingCtrl"> <h3 ng-bind="appTitle"></h3> </body> </html> ngRepeat The ngRepeat directive is really useful for iterating over arrays and objects. It can be used with any kind of element, such as the rows of a table, the elements of a list, and even the options of a select. We must provide a special repeat expression that describes the array to iterate over and the variable that will hold each item in the iteration. The most basic expression format allows us to iterate over an array, assigning each element to a variable: variable in array In the following code, we will iterate over the cars array and assign each element to the car variable: index.html <!doctype html> <html ng-app="parking"> <head> <title>[Packt] Parking</title> <script src="angular.js"></script> <script> var parking = angular.module("parking", []); parking.controller("parkingCtrl", function ($scope) { $scope.appTitle = "[Packt] Parking"; $scope.cars = []; }); </script> </head> <body ng-controller="parkingCtrl"> <h3 ng-bind="appTitle"></h3> <table> <thead> <tr> <th>Plate</th> <th>Entrance</th> </tr> </thead> <tbody> <tr ng-repeat="car in cars"> <td><span ng-bind="car.plate"></span></td> <td><span ng-bind="car.entrance"></span></td> </tr> </tbody> </table> </body> </html> ngModel The ngModel directive attaches the element to a property in the scope, binding the view to the model. In this case, the element could be input (all types), select, or textarea. <input type="text" ng-model="car.plate" placeholder="What's the plate?" /> There is an important piece of advice regarding the use of this directive.
We must pay attention to the purpose of the field that is using the ngModel directive. Whenever the field is part of the construction of an object, we must declare to which object the property should be attached. In this case, the object that is being constructed is a car, so we use car.plate inside the directive expression. However, sometimes there is an input field that is just used to change a flag, allowing the control of the state of a dialog or another UI component. In these cases, we may use the ngModel directive without any object, as long as it will not be used together with other properties or persisted. ngClick and other event directives The ngClick directive is one of the most useful kinds of directives in the framework. It allows you to bind any custom behavior to the click event of the element. The following code is an example of the usage of the ngClick directive calling a function: index.html <!doctype html> <html ng-app="parking"> <head> <title>[Packt] Parking</title> <script src="angular.js"></script> <script> var parking = angular.module("parking", []); parking.controller("parkingCtrl", function ($scope) { $scope.appTitle = "[Packt] Parking"; $scope.cars = []; $scope.park = function (car) { car.entrance = new Date(); $scope.cars.push(car); delete $scope.car; }; }); </script> </head> <body ng-controller="parkingCtrl"> <h3 ng-bind="appTitle"></h3> <table> <thead> <tr> <th>Plate</th> <th>Entrance</th> </tr> </thead> <tbody> <tr ng-repeat="car in cars"> <td><span ng-bind="car.plate"></span></td> <td><span ng-bind="car.entrance"></span></td> </tr> </tbody> </table> <input type="text" ng-model="car.plate" placeholder="What's the plate?" /> <button ng-click="park(car)">Park</button> </body> </html> Here there is another pitfall. Inside the ngClick directive, we call the park function, passing the car as a parameter.
Since we have access to the scope through the controller, wouldn't it be easier to access it directly, without passing any parameter at all? Keep in mind that we must take care of the level of coupling between the view and the controller. One way to keep it low is to avoid reading the scope object directly from the controller, instead passing everything the controller needs as parameters from the view. This increases the controller's testability and also makes things clearer and more explicit. Other directives that have the same behavior, but are triggered by other events, are ngBlur, ngChange, ngCopy, ngCut, ngDblClick, ngFocus, ngKeyPress, ngKeyDown, ngKeyUp, ngMousedown, ngMouseenter, ngMouseleave, ngMousemove, ngMouseover, ngMouseup, and ngPaste. Filters Filters, together with other technologies like directives and expressions, are responsible for the extraordinary expressiveness of the framework. They let us easily manipulate and transform any value, not only combined with expressions inside a template, but also injected into other components like controllers and services. They are really useful when we need to format dates and money according to our current locale, or even to support the filtering feature of a grid component. Filters are the perfect answer to easily perform any kind of data manipulation. currency The currency filter is used to format a number based on a currency. The basic usage of this filter is without any parameter: {{ 10 | currency}} The result of the evaluation will be the number $10.00, formatted and prefixed with the dollar sign. In order to achieve the correct output, in this case R$10,00 instead of $10.00, we need to configure the Brazilian (PT-BR) locale, available inside the AngularJS distribution package.
There, we may find locales for most countries, and we just need to import the corresponding file into our application, such as: <script src="js/lib/angular-locale_pt-br.js"></script> After importing the locale, we will not need to use the currency symbol anymore because it's already wrapped inside. Besides the currency, the locale also defines the configuration of many other variables like the days of the week and the months, which is very useful when combined with the next filter, used to format dates. date The date filter is one of the most useful filters of the framework. Generally, a date value comes from the database or any other source in a raw and generic format, so filters like this are essential to any kind of application. Basically, we can use this filter by declaring it inside any expression. In the following example, we use the filter on a date variable attached to the scope. {{ car.entrance | date }} The output will be Dec 10, 2013. However, there are thousands of combinations that we can make with the optional format mask. {{ car.entrance | date:'MMMM dd/MM/yyyy HH:mm:ss' }} Using this format, the output changes to December 10/12/2013 21:42:10. filter Have you ever needed to filter a list of data? This filter performs exactly this task, acting over an array and applying any filtering criteria. Now, let's include in our car parking application a field to search for any parked car and use this filter to do the job. index.html <input type="text" ng-model="criteria" placeholder="What are you looking for?" /> <table> <thead> <tr> <th></th> <th>Plate</th> <th>Color</th> <th>Entrance</th> </tr> </thead> <tbody> <tr ng-class="{selected: car.selected}" ng-repeat="car in cars | filter:criteria" > <td> <input type="checkbox" ng-model="car.selected" /> </td> <td>{{car.plate}}</td> <td>{{car.color}}</td> <td>{{car.entrance | date:'dd/MM/yyyy hh:mm'}}</td> </tr> </tbody> </table> The result is really impressive. With an input field and the filter declaration, we did the job.
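To make the filtering behavior more concrete, here is a simplified plain-JavaScript sketch of what the filter filter does when given a string criteria: it keeps the items whose property values contain the criteria, case-insensitively. This is only an approximation of the idea (the function name filterByCriteria is ours), not Angular's actual implementation, which also supports object and function criteria:

```javascript
// Simplified sketch of Angular's filter filter for a string criteria.
function filterByCriteria(items, criteria) {
  var needle = String(criteria).toLowerCase();
  return items.filter(function (item) {
    // Keep the item if any of its property values contains the criteria
    return Object.keys(item).some(function (key) {
      return String(item[key]).toLowerCase().indexOf(needle) !== -1;
    });
  });
}

var cars = [
  { plate: '6MBV006', color: 'Blue' },
  { plate: '5BBM299', color: 'Red' }
];

// The criteria is matched against every property of each car.
console.log(filterByCriteria(cars, 'red'));
console.log(filterByCriteria(cars, 'MBV'));
```

Typing "red" into the criteria field therefore matches the second car through its color property, while "MBV" matches the first car through its plate.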
Integrating the backend with AJAX AJAX, also known as Asynchronous JavaScript and XML, is a technology that allows applications to send and retrieve data from the server asynchronously, without refreshing the page. The $http service wraps the low-level interaction with the XMLHttpRequest object, providing an easy way to perform calls. This service can be called by just passing a configuration object, used to set important information such as the method, the URL of the requested resource, the data to be sent, and many other options: $http({method: "GET", url: "/resource"}) .success(function (data, status, headers, config, statusText) { }) .error(function (data, status, headers, config, statusText) { }); To make it easier to use, there are the following shortcut methods available for this service. In this case, the configuration object is optional. $http.get(url, [config]) $http.post(url, data, [config]) $http.put(url, data, [config]) $http.head(url, [config]) $http.delete(url, [config]) $http.jsonp(url, [config]) Now, it's time to integrate our parking application with the backend by calling the cars resource with the GET method. It will retrieve the cars, binding them to the $scope object. In case something goes wrong, we log it to the console. controllers.js parking.controller("parkingCtrl", function ($scope, $http) { $scope.appTitle = "[Packt] Parking"; $scope.park = function (car) { car.entrance = new Date(); $scope.cars.push(car); delete $scope.car; }; var retrieveCars = function () { $http.get("/cars") .success(function(data, status, headers, config) { $scope.cars = data; }) .error(function(data, status, headers, config) { switch(status) { case 401: { $scope.message = "You must be authenticated!"
break; } case 500: { $scope.message = "Something went wrong!"; break; } } console.log(data, status); }); }; retrieveCars(); }); Summary This article introduced you to the fundamentals of AngularJS in order to design and construct reusable, maintainable, and modular web applications. Resources for Article: Further resources on this subject: AngularJS Project [article] Working with Live Data and AngularJS [article] CreateJS – Performing Animation and Transforming Function [article]

Packt
03 Jun 2015
14 min read

Let's Build with AngularJS and Bootstrap

In this article by Stephen Radford, author of the book Learning Web Development with Bootstrap and AngularJS, we're going to use Bootstrap and AngularJS. We'll look at building a maintainable code base as well as exploring the full potential of both frameworks. (For more resources related to this topic, see here.) Working with directives Something we've been using already without knowing it is what Angular calls directives. These are essentially powerful functions that can be called from an attribute or even its own element, and Angular is full of them. Whether we want to loop data, handle clicks, or submit forms, Angular will speed everything up for us. We first used a directive to initialize Angular on the page using ng-app, and all of the directives we're going to look at in this article are used in the same way—by adding an attribute to an element. Before we take a look at some more of the built-in directives, we need to quickly make a controller. Create a new file and call it controller.js. Save this to your js directory within your project and open it up in your editor. Controllers are just standard JS constructor functions that we can inject Angular's services such as $scope into. These functions are instantiated when Angular detects the ng-controller attribute. As such, we can have multiple instances of the same controller within our application, allowing us to reuse a lot of code. This familiar function declaration is all we need for our controller. function AppCtl(){} To let the framework know this is the controller we want to use, we need to include this on the page after Angular is loaded and also attach the ng-controller directive to our opening <html> tag: <html ng-controller="AppCtl">…<script type="text/javascript"src="assets/js/controller.js"></script> ng-click and ng-mouseover One of the most basic things you'll have ever done with JavaScript is listening for a click event.
This might have been done using the onclick attribute on an element, with jQuery, or even with an event listener. In Angular, we use a directive. To demonstrate this, we'll create a button that will launch an alert box—simple stuff. First, let's add the button to the content area we created earlier: <div class="col-sm-8"><button>Click Me</button></div> If you open this up in your browser, you'll see a standard HTML button created—no surprises there. Before we attach the directive to this element, we need to create a handler in our controller. This is just a function within our controller that is attached to the scope. It's very important we attach our function to the scope or we won't be able to access it from our view at all: function AppCtl($scope){$scope.clickHandler = function(){window.alert('Clicked!');};} As we already know, we can have multiple scopes on a page and these are just objects that Angular allows the view and the controller to have access to. In order for the controller to have access, we've injected the $scope service into our controller. This service provides us with the scope Angular creates on the element we added the ng-controller attribute to. Angular relies heavily on dependency injection, which you may or may not be familiar with. As we've seen, Angular is split into modules and services. Each of these modules and services depends upon the others, and dependency injection provides referential transparency. When unit testing, we can also mock objects that will be injected to confirm our test results. DI allows us to tell Angular what services our controller depends upon, and the framework will resolve these for us. An in-depth explanation of AngularJS' dependency injection can be found in the official documentation at https://docs.angularjs.org/guide/di. Okay, so our handler is set up; now we just need to add our directive to the button. Just like before, we need to add it as an additional attribute. This time, we're going to pass through the name of the function we're looking to execute, which in this case is clickHandler. Angular will evaluate anything we put within our directive as an AngularJS expression, so we need to be sure to include the two parentheses indicating that this is a function we're calling:
This time, we're going to pass through the name of the function we're looking to execute, which in this case is clickHandler. Angular will evaluate anything we put within our directive as an AngularJS expression, so we need to be sure to include two parentheses indicating that this is a function we're calling: <button ng-click="clickHandler()">Click Me</button> If you load this up in your browser, you'll be presented with an alert box when you click the button. You'll also notice that we don't need to include the $scope variable when calling the function in our view. Functions and variables that can be accessed from the view live within the current scope or any ancestor scope.   Should we wish to display our alert box on hover instead of click, it's just a case of changing the name of the directive to ng-mouseover, as they both function in the exact same way. ng-init The ng-init directive is designed to evaluate an expression on the current scope and can be used on its own or in conjunction with other directives. It's executed at a higher priority than other directives to ensure the expression is evaluated in time. Here's a basic example of ng-init in action: <div ng-init="test = 'Hello, World'"></div>{{test}} This will display Hello, World onscreen when the application is loaded in your browser. Above, we've set the value of the test model and then used the double curly-brace syntax to display it. ng-show and ng-hide There will be times when you'll need to control whether an element is displayed programmatically. Both ng-show and ng-hide can be controlled by the value returned from a function or a model. We can extend upon our clickHandler function we created to demonstrate the ng-click directive to toggle the visibility of our element. We'll do this by creating a new model and toggling the value between true or false. First of all, let's create the element we're going to be showing or hiding. 
Pop this below your button:

<div ng-hide="isHidden">Click the button above to toggle.</div>

The value within the ng-hide attribute is our model. Because this is within our scope, we can easily modify it within our controller:

$scope.clickHandler = function() {
  $scope.isHidden = !$scope.isHidden;
};

Here we're just reversing the value of our model, which in turn toggles the visibility of our <div>. If you open up your browser, you'll notice that the element isn't hidden by default. There are a few ways we could tackle this. Firstly, we could set the value of $scope.isHidden to true within our controller. We could also set the value of isHidden to true using the ng-init directive. Alternatively, we could switch to the ng-show directive, which functions in reverse to ng-hide and will only make an element visible if a model's value is set to true.

Ensure Angular is loaded within your header or ng-hide and ng-show won't function correctly. This is because Angular uses its own classes to hide elements and these need to be loaded on page render.

ng-if

Angular also includes an ng-if directive that works in a similar fashion to ng-show and ng-hide. However, ng-if actually removes the element from the DOM, whereas ng-show and ng-hide just toggle the element's visibility. Let's take a quick look at how we'd use ng-if with the preceding code:

<div ng-if="isHidden">Click the button above to toggle.</div>

If we wanted to reverse the statement's meaning, we'd simply need to add an exclamation point before our expression:

<div ng-if="!isHidden">Click the button above to toggle.</div>

ng-repeat

Something you'll come across very quickly when building a web app is the need to render an array of items. For example, in our contacts manager, this would be a list of contacts, but it could be anything. Angular allows us to do this with the ng-repeat directive. Here's an example of some data we may come across. It's an array of objects with multiple properties within it.
To display the data, we're going to need to be able to access each of the properties. Thankfully, ng-repeat allows us to do just that. Here's our controller with an array of contact objects assigned to the contacts model:

function AppCtrl($scope) {
  $scope.contacts = [
    {
      name: 'John Doe',
      phone: '01234567890',
      email: 'john@example.com'
    },
    {
      name: 'Karan Bromwich',
      phone: '09876543210',
      email: 'karan@email.com'
    }
  ];
}

We have just a couple of contacts here, but as you can imagine, this could be hundreds of contacts served from an API, which just wouldn't be feasible to work with without ng-repeat. First, add an array of contacts to your controller and assign it to $scope.contacts. Next, open up your index.html file and create a <ul> tag. We're going to be repeating a list item within this unordered list, so this is the element we need to add our directive to:

<ul>
  <li ng-repeat="contact in contacts"></li>
</ul>

If you're familiar with how loops work in PHP or Ruby, then you'll feel right at home here. We create a variable that we can access within the current element being looped. The variable after the in keyword references the model we created on $scope within our controller. This gives us the ability to access any of the properties set on that object, with each repeated item gaining a new scope of its own. We can display these on the page using Angular's double curly-brace syntax:

<ul>
  <li ng-repeat="contact in contacts">{{contact.name}}</li>
</ul>

You'll notice that this outputs the name within our list item as expected, and we can easily access any property on our contact object by referencing it using the standard dot syntax.

ng-class

Often there are times when you'll want to change or add a class to an element programmatically. We can use the ng-class directive to achieve this. It will let us define a class to add or remove based on the value of a model. There are a couple of ways we can utilize ng-class.
In its most simple form, Angular will apply the value of the model as a CSS class to the element:

<div ng-class="exampleClass"></div>

Should the model referenced be undefined or false, Angular won't apply a class. This is great for single classes, but what if you want a little more control or want to apply multiple classes to a single element? Try this:

<div ng-class="{className: model, class2: model2}"></div>

Here, the expression is a little different. We've got a map of class names with the model we wish to check against. If the model returns true, then the class will be added to the element. Let's take a look at this in action. We'll use checkboxes with the ng-model attribute to apply some classes to a paragraph:

<p ng-class="{'text-center': center, 'text-danger': error}">Lorem ipsum dolor sit amet</p>

I've added two Bootstrap classes: text-center and text-danger. These observe a couple of models, which we can quickly change with some checkboxes:

The single quotation marks around the class names within the expression are only required when using hyphens, or an error will be thrown by Angular.

<label><input type="checkbox" ng-model="center"> text-center</label>
<label><input type="checkbox" ng-model="error"> text-danger</label>

When these checkboxes are checked, the relevant classes will be applied to our element.

ng-style

In a similar way to ng-class, this directive is designed to allow us to dynamically style an element with Angular. To demonstrate this, we'll create a third checkbox that will apply some additional styles to our paragraph element. The ng-style directive uses a standard JavaScript object, with the keys being the properties we wish to change (for example, color and background). This can be applied from a model or a value returned from a function. Let's take a look at hooking it up to a function that will check a model. We can then add this to our checkbox to turn the styles off and on.
First, open up your controller.js file and create a new function attached to the scope. I'm calling mine styleDemo:

$scope.styleDemo = function() {
  if (!$scope.styler) {
    return;
  }
  return {
    background: 'red',
    fontWeight: 'bold'
  };
};

Inside the function, we need to check the value of a model; in this example, it's called styler. If it's false, we don't need to return anything; otherwise, we're returning an object with our CSS properties. You'll notice that we used fontWeight rather than font-weight in our returned object. Either is fine, and Angular will automatically convert the camelCase key to the correct CSS property. Just remember that when using hyphens in JavaScript object keys, you'll need to wrap them in quotation marks. This model is going to be attached to a checkbox, just like we did with ng-class:

<label><input type="checkbox" ng-model="styler"> ng-style</label>

The last thing we need to do is add the ng-style directive to our paragraph element:

<p .. ng-style="styleDemo()">Lorem ipsum dolor sit amet</p>

Angular is clever enough to recall this function every time the scope changes. This means that as soon as our model's value changes from false to true, our styles will be applied, and vice versa.

ng-cloak

The final directive we're going to look at is ng-cloak. When using Angular's templates within our HTML page, the double curly braces are temporarily displayed before AngularJS has finished loading and compiling everything on our page. To get around this, we need to temporarily hide our template before it's finished rendering. Angular allows us to do this with the ng-cloak directive. This sets an additional style on our element whilst it's being loaded: display: none !important;. To ensure there's no flashing while content is being loaded, it's important that Angular is loaded in the head section of our HTML page.

Summary

We've covered a lot in this article, so let's recap it all. Bootstrap allowed us to quickly create a responsive navigation.
We needed to include the JavaScript file included with our Bootstrap download to enable the toggle on the mobile navigation. We also looked at the powerful responsive grid system included with Bootstrap and created a simple two-column layout. While we were doing this, we learnt about the four different column class prefixes as well as nesting our grid. To adapt our layout, we discovered some of the helper classes included with the framework to allow us to float, center, and hide elements. In this article, we saw in detail Angular's built-in directives: functions Angular allows us to use from within our view. Before we could look at them, we needed to create a controller, which is just a function that we can pass Angular's services into using dependency injection. Directives such as ng-click and ng-mouseover are essentially just new ways of handling events that you will have no doubt done using either jQuery or vanilla JavaScript. However, directives such as ng-repeat will probably be a completely new way of working. It brings some logic directly within our view to loop through data and display it on the page. We also looked at directives that observe models on our scope and perform different actions based on their values. Directives like ng-show and ng-hide will show or hide an element based on a model's value. We also saw this in action in ng-class, which allowed us to add some classes to our elements based on our models' values. Resources for Article: Further resources on this subject: AngularJS Performance [Article] AngularJS Web Application Development Cookbook [Article] Role of AngularJS [Article]
Packt
07 Jan 2010
7 min read

Selecting DOM Elements using MooTools 1.2: Part 1

In order to successfully and effortlessly write unobtrusive JavaScript, we must have a way to point to the Document Object Model (DOM) elements that we want to manipulate. The DOM is a representation of the objects in our HTML and a way to access and manipulate them. In traditional JavaScript, this involves a lot (like, seriously a lot) of code authoring, and in many instances, a lot of head-banging-against-wall-and-pulling-out-hair as you discover a browser quirk that you must solve. Let me save you some bumps, bruises, and hair by showing you how to select DOM elements the MooTools way. This article will cover how you can utilize MooTools to select/match elements, from the simple (like "All div elements") up to the most complex and specific (like "All links that are children of a span, have a class of awesomeLink, and point to http://superAwesomeSite.com").

MooTools and CSS selectors

MooTools selects an element (or a set of elements) in the DOM using CSS selector syntax. Just a quick refresher on CSS terminology, a CSS style rule consists of:

selector { property: property value; }

- selector: indicates what elements will be affected by the style rule.
- property: the CSS property (also referred to as attribute). For example, color is a CSS property, and so is font-style. You can have multiple property declarations in one style rule.
- property value: the value you want assigned to the property. For example, bold is a possible CSS property value of the font-weight CSS property.

For example, if you wanted to select a paragraph element with an ID of awesomeParagraph to give it a red color (in hexadecimal, this is #f00), in CSS syntax you'd write:

#awesomeParagraph { color: #f00; }

Also, if I wanted to increase its specificity and make sure that only paragraph elements that have an ID of awesomeParagraph are selected:

p#awesomeParagraph { color: #f00; }

You'll be happy to know that this same syntax ports over to MooTools.
What's more is that you'll be able to take advantage of all of CSS3's more complex selection syntax, because even though CSS3 isn't supported by all browsers yet, MooTools supports it already. So you don't have to learn another syntax for writing selectors; you can use your existing knowledge of CSS. Awesome, isn't it?

Working with the $() and $$() functions

The $() and $$() functions are the bread and butter of MooTools. When you're working with unobtrusive JavaScript, you need to specify which elements you want to operate on. The dollar and dollars functions help you do just that: they allow you to specify which elements you want to work on. Notice that the dollar sign $ is shaped like an S, which we can interpret to mean 'select'.

The $() dollar function

The dollar function is used for getting an element by its ID. It returns a single element that is extended with MooTools Element methods, or null if nothing matches. Let's go back to awesomeParagraph in the earlier example. If I wanted to select awesomeParagraph, this is what I would write:

$('awesomeParagraph')

By doing so, we can now operate on it by passing methods to the selection. For example, if you wanted to change its color to red, you can use the .setStyle() method, which allows you to specify a CSS property and its matching property value:

$('awesomeParagraph').setStyle('color', '#f00');

The $$() dollars function

The $$() function is the $() function's big brother (that's why it gets an extra dollar sign). The $$() function can perform more robust and complex selections, can select a group or groups of elements, and always returns an array, with or without selected elements in it. It can also be used interchangeably with the dollar function. If we wanted to select awesomeParagraph using the dollars function, we would write:

$$('#awesomeParagraph')

Note that you have to use the pound sign (#) in this case, as if you were using CSS selectors.
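The difference in return types is worth internalizing. The following plain-JavaScript stand-ins are purely illustrative (they are not MooTools' real implementation, and the fake DOM index is invented for the example), but they mirror the contract: $() yields one element or null, while $$() always yields an array.

```javascript
// A toy stand-in for MooTools' selection functions, matched against a
// fake DOM index. Purely illustrative, not MooTools' real implementation.
var fakeDom = {
  awesomeParagraph: { tag: 'p', id: 'awesomeParagraph' }
};

// $() gets a single element by ID, or null when nothing matches.
function $(id) {
  return fakeDom[id] || null;
}

// $$() always returns an array, with or without matches;
// a leading '#' maps to an ID lookup, mirroring CSS selector syntax.
function $$(selector) {
  if (selector.charAt(0) === '#') {
    var match = $(selector.slice(1));
    return match ? [match] : [];
  }
  return [];
}

$('awesomeParagraph');   // the single element object
$('missing');            // null
$$('#awesomeParagraph'); // [element], an array with one entry
$$('#missing');          // [], still an array, just empty
```

Because $$() never returns null, you can always chain array-style operations on its result without a null check, which is exactly why it is the safer default.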
When to use which

If you need to select just one element that has an ID, you should use the $() function, because it performs better in terms of speed than the $$() function. Use the $$() function to select multiple elements. In fact, when in doubt, use the $$() function, because it can do everything the $() function can do (and more).

A note on single quotes (') versus double quotes ("): the example above would work even if we used double quotes, such as $("awesomeParagraph") or $$("#awesomeParagraph"), but many MooTools developers prefer single quotes so they don't have to escape characters as much (since the double quote is often used in HTML, you'd have to write \" in order not to prematurely end your strings). It's highly recommended that you use single quotes, but hey, it's your life!

Now, let's see these functions in action. Let's start with the HTML markup first. Put the following block of code in an HTML document:

<body>
  <ul id="superList">
    <li>List item</li>
    <li><a href="#">List item link</a></li>
    <li id="listItem">List item with an ID.</li>
    <li class="lastItem">Last list item.</li>
  </ul>
</body>

What we have here is an unordered list. We'll use it to explore the dollar and dollars functions. If you view this in your web browser, you should see something like this:

Time for action – selecting an element with the dollar function

Let's select the list item with an ID of listItem and then give it a red color using the .setStyle() MooTools method. Inside your window.addEvent('domready') method, use the following code:

$('listItem').setStyle('color', 'red');

Save the HTML document and view it in your web browser. You should see that the third list item is red. Now let's select the entire unordered list (it has an ID of superList), then give it a black border (in hexadecimal, this is #000000).
Place the following code right below the line we wrote in step 1:

$('superList').setStyle('border', '1px solid #000000');

If you didn't close your HTML document, hit your browser's refresh button. You should now see something like this:

Time for action – selecting elements with the dollars function

We'll be using the same HTML document, but this time, let's explore the dollars function. We're going to select the last list item (it has a class of lastItem). Using the .getFirst() method, we select the first element from the array $$() returned. Then, we're going to use the .get() method to get its text. To show us what it is, we'll pass it to the alert() function. The code to accomplish this is:

alert( $$('.lastItem').getFirst().get('text') );

Save the HTML document and view it in your web browser (or just hit your browser's refresh button if you still have the HTML document from the previous Time for action open). You should now see the following:

Time for action – selecting multiple sets of elements with the dollars function

What if we wanted to select multiple sets of elements and run the same method (or methods) on them? Let's do that now. Let's select the list item that has an <a> element inside it, as well as the last list item (class="lastItem"), and then animate them to the right by transitioning their margin-left CSS property using the .tween() method. Right below the line we wrote previously, place the following line:

$$('li a, .lastItem').tween('margin-left', '50px');

View your work in your web browser. You should see the second and last list items move to the right by 50 pixels.

What just happened?

We just explored the dollar and dollars functions to see how to select different elements and apply methods to them.
You just learned to select:

- An element with an ID (#listItem and #superList) using the dollar $() function
- An element with a class (.lastItem) using the dollars $$() function
- Multiple elements, by separating them with commas (li a and .lastItem)

You also saw how you can execute methods on your selected elements. In the examples, we used the .setStyle(), .getFirst(), .get(), and .tween() MooTools methods. Because DOM selection is imperative to writing MooTools code, before we go any further, let's talk about some important points to note about what we just did.
Packt
16 Sep 2015
10 min read

Writing Custom Spring Boot Starters

In this article by Alex Antonov, author of the book Spring Boot Cookbook, we will cover the following topics:

- Understanding Spring Boot autoconfiguration
- Creating a custom Spring Boot autoconfiguration starter

(For more resources related to this topic, see here.)

Introduction

It's time to take a look behind the scenes, find out the magic behind Spring Boot autoconfiguration, and write some starters of our own as well. This is a very useful capability to possess, especially for large software enterprises where the presence of proprietary code is inevitable, and where it is very helpful to be able to create internal custom starters that automatically add configuration or functionality to applications. Some likely candidates are custom configuration systems, libraries, and configurations that deal with connecting to databases, using custom connection pools, HTTP clients, servers, and so on. We will go through the internals of Spring Boot autoconfiguration, take a look at how new starters are created, explore conditional initialization and wiring of beans based on various rules, and see that annotations can be a powerful tool that gives the consumers of the starters more control over dictating what configurations should be used and where.

Understanding Spring Boot autoconfiguration

Spring Boot has a lot of power when it comes to bootstrapping an application and configuring it with exactly the things that are needed, all without much of the glue code that is otherwise required of us, the developers. The secret behind this power actually comes from Spring itself, or rather from the Java Configuration functionality that it provides. As we add more starters as dependencies, more and more classes will appear in our classpath.
Spring Boot detects the presence or absence of specific classes and, based on this information, makes some decisions, which are fairly complicated at times, and automatically creates and wires the necessary beans into the application context. Sounds simple, right?

How to do it…

Conveniently, Spring Boot provides us with the ability to get the AUTO-CONFIGURATION REPORT by simply starting the application with the debug flag. This can be passed to the application either as an environment variable, DEBUG, as a system property, -Ddebug, or as an application property, --debug. Start the application by running DEBUG=true ./gradlew clean bootRun. Now, if you look at the console logs, you will see a lot more information printed there that is marked with the DEBUG level log. At the end of the startup log sequence, we will see the AUTO-CONFIGURATION REPORT as follows:

========================= AUTO-CONFIGURATION REPORT =========================

Positive matches:
-----------------
…
   DataSourceAutoConfiguration
      - @ConditionalOnClass classes found: javax.sql.DataSource,org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType (OnClassCondition)
…

Negative matches:
-----------------
…
   GsonAutoConfiguration
      - required @ConditionalOnClass classes not found: com.google.gson.Gson (OnClassCondition)
…

How it works…

As you can see, the amount of information that is printed in the debug mode can be somewhat overwhelming, so I've selected only one example each of a positive and a negative match. For each line of the report, Spring Boot tells us why certain configurations have been selected to be included, what they have been positively matched on, or, for the negative matches, what was missing that prevented a particular configuration from being included in the mix.
Let's look at the positive match for DataSourceAutoConfiguration:

- The @ConditionalOnClass classes found entry tells us that Spring Boot has detected the presence of a particular class, specifically two classes in our case: javax.sql.DataSource and org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType.
- The OnClassCondition entry indicates the kind of matching that was used. This is supported by the @ConditionalOnClass and @ConditionalOnMissingClass annotations.

While OnClassCondition is the most common kind of detection, Spring Boot also uses many other conditions. For example, OnBeanCondition is used to check the presence or absence of specific bean instances, OnPropertyCondition is used to check the presence, absence, or a specific value of a property, as well as any number of custom conditions that can be defined using the @Conditional annotation and Condition interface implementations. The negative matches show us a list of configurations that Spring Boot has evaluated, which means that they do exist in the classpath and were scanned by Spring Boot but didn't pass the conditions required for their inclusion. GsonAutoConfiguration, while available in the classpath as part of the imported spring-boot-autoconfigure artifact, was not included because the required com.google.gson.Gson class was not detected as present in the classpath, thus failing the OnClassCondition. The implementation of the GsonAutoConfiguration file looks as follows:

@Configuration
@ConditionalOnClass(Gson.class)
public class GsonAutoConfiguration {
    @Bean
    @ConditionalOnMissingBean
    public Gson gson() {
        return new Gson();
    }
}

After looking at the code, it is very easy to make the connection between the conditional annotations and the report information that is provided by Spring Boot at start time.
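The matching logic behind the report is easy to model in any language. The following JavaScript sketch is purely illustrative — it is not Spring code, and the candidate list is hand-built from the report above — but it captures how an OnClassCondition-style check sorts configurations into positive and negative matches:

```javascript
// A language-agnostic sketch of Spring Boot's conditional configuration.
// Illustrative only, not Spring code: each candidate declares the classes
// it requires, and an OnClassCondition-style check sorts candidates into
// positive and negative matches.
var classpath = [
  'javax.sql.DataSource',
  'org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType'
];

var candidates = [
  { name: 'DataSourceAutoConfiguration',
    requiredClasses: ['javax.sql.DataSource',
                      'org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType'] },
  { name: 'GsonAutoConfiguration',
    requiredClasses: ['com.google.gson.Gson'] }
];

function evaluate(candidates, classpath) {
  var report = { positive: [], negative: [] };
  candidates.forEach(function (candidate) {
    // The OnClassCondition analogue: all required classes must be present.
    var allPresent = candidate.requiredClasses.every(function (cls) {
      return classpath.indexOf(cls) !== -1;
    });
    (allPresent ? report.positive : report.negative).push(candidate.name);
  });
  return report;
}

var report = evaluate(candidates, classpath);
// report.positive -> ['DataSourceAutoConfiguration']
// report.negative -> ['GsonAutoConfiguration']
```

Running this against the sample classpath reproduces the report we saw: DataSourceAutoConfiguration is a positive match, while GsonAutoConfiguration fails its class check and lands in the negative matches.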
Creating a custom Spring Boot autoconfiguration starter

We have a high-level idea of the process by which Spring Boot decides which configurations to include in the formation of the application context. Now, let's take a stab at creating our own Spring Boot starter artifact, which we can include as an autoconfigurable dependency in our build. Let's build a simple starter that will create another CommandLineRunner that will take the collection of all the Repository instances and print out the count of the total entries for each. We will start by adding a child Gradle project to our existing project that will house the codebase for the starter artifact. We will call it db-count-starter.

How to do it…

We will start by creating a new directory named db-count-starter in the root of our project. As our project has now become what is known as a multiproject build, we will need to create a settings.gradle configuration file in the root of our project with the following content:

include 'db-count-starter'

We should also create a separate build.gradle configuration file for our subproject in the db-count-starter directory with the following content:

apply plugin: 'java'

repositories {
    mavenCentral()
    maven { url "https://repo.spring.io/snapshot" }
    maven { url "https://repo.spring.io/milestone" }
}

dependencies {
    compile("org.springframework.boot:spring-boot:1.2.3.RELEASE")
    compile("org.springframework.data:spring-data-commons:1.9.2.RELEASE")
}

Now we are ready to start coding. So, the first thing is to create the directory structure, src/main/java/org/test/bookpubstarter/dbcount, in the db-count-starter directory in the root of our project.
In the newly created directory, let's add our implementation of the CommandLineRunner interface in a file named DbCountRunner.java with the following content:

public class DbCountRunner implements CommandLineRunner {
    protected final Log logger = LogFactory.getLog(getClass());

    private Collection<CrudRepository> repositories;

    public DbCountRunner(Collection<CrudRepository> repositories) {
        this.repositories = repositories;
    }

    @Override
    public void run(String... args) throws Exception {
        repositories.forEach(crudRepository ->
            logger.info(String.format("%s has %s entries",
                getRepositoryName(crudRepository.getClass()),
                crudRepository.count())));
    }

    private static String getRepositoryName(Class crudRepositoryClass) {
        for (Class repositoryInterface : crudRepositoryClass.getInterfaces()) {
            if (repositoryInterface.getName().startsWith("org.test.bookpub.repository")) {
                return repositoryInterface.getSimpleName();
            }
        }
        return "UnknownRepository";
    }
}

With the actual implementation of DbCountRunner in place, we will now need to create the configuration object that will declaratively create an instance during the configuration phase. So, let's create a new class file called DbCountAutoConfiguration.java with the following content:

@Configuration
public class DbCountAutoConfiguration {
    @Bean
    public DbCountRunner dbCountRunner(Collection<CrudRepository> repositories) {
        return new DbCountRunner(repositories);
    }
}

We will also need to tell Spring Boot that our newly created JAR artifact contains the autoconfiguration classes. For this, we will need to create a resources/META-INF directory in the db-count-starter/src/main directory in the root of our project.
In this newly created directory, we will place a file named spring.factories with the following content:

org.springframework.boot.autoconfigure.EnableAutoConfiguration=org.test.bookpubstarter.dbcount.DbCountAutoConfiguration

For the purpose of our demo, we will add the dependency on our starter artifact in the main project's build.gradle by adding the following entry in the dependencies section:

compile project(':db-count-starter')

Start the application by running ./gradlew clean bootRun. Once the application has compiled and started, we should see the following in the console logs:

2015-04-05 INFO org.test.bookpub.StartupRunner       : Welcome to the Book Catalog System!
2015-04-05 INFO o.t.b.dbcount.DbCountRunner          : AuthorRepository has 1 entries
2015-04-05 INFO o.t.b.dbcount.DbCountRunner          : PublisherRepository has 1 entries
2015-04-05 INFO o.t.b.dbcount.DbCountRunner          : BookRepository has 1 entries
2015-04-05 INFO o.t.b.dbcount.DbCountRunner          : ReviewerRepository has 0 entries
2015-04-05 INFO org.test.bookpub.BookPubApplication  : Started BookPubApplication in 8.528 seconds (JVM running for 9.002)
2015-04-05 INFO org.test.bookpub.StartupRunner       : Number of books: 1

How it works…

Congratulations! You have now built your very own Spring Boot autoconfiguration starter. First, let's quickly walk through the changes that we made to our Gradle build configuration, and then we will examine the starter setup in detail. As the Spring Boot starter is a separate, independent artifact, just adding more classes to our existing project source tree would not really demonstrate much. To make this separate artifact, we had a few choices: making a separate Gradle configuration in our existing project, or creating a completely separate project altogether. The most ideal solution, however, was to just convert our build to a Gradle multi-project build by adding a nested project directory and a subproject dependency to the build.gradle of the root project.
By doing this, Gradle actually creates a separate artifact JAR for us, but we don't have to publish it anywhere; we only include it as a compile project(':db-count-starter') dependency. For more information about Gradle multi-project builds, you can check out the manual at http://gradle.org/docs/current/userguide/multi_project_builds.html.

A Spring Boot autoconfiguration starter is nothing more than a regular Spring Java Configuration class annotated with the @Configuration annotation, plus the presence of spring.factories in the META-INF directory in the classpath with the appropriate configuration entries. During the application startup, Spring Boot uses SpringFactoriesLoader, which is a part of Spring Core, in order to get a list of the Spring Java Configurations that are configured for the org.springframework.boot.autoconfigure.EnableAutoConfiguration property key. Under the hood, this call collects all the spring.factories files located in the META-INF directory from all the jars or other entries in the classpath and builds a composite list to be added as application context configurations. In addition to the EnableAutoConfiguration key, we can declare the following automatically initializable startup implementations in a similar fashion:

- org.springframework.context.ApplicationContextInitializer
- org.springframework.context.ApplicationListener
- org.springframework.boot.SpringApplicationRunListener
- org.springframework.boot.env.PropertySourceLoader
- org.springframework.boot.autoconfigure.template.TemplateAvailabilityProvider
- org.springframework.test.context.TestExecutionListener

Ironically enough, a Spring Boot starter does not need to depend on the Spring Boot library as its compile-time dependency. If we look at the list of class imports in the DbCountAutoConfiguration class, we will not see anything from the org.springframework.boot package.
The only reason that we have a dependency declared on Spring Boot is because our implementation of DbCountRunner implements the org.springframework.boot.CommandLineRunner interface. Summary Resources for Article: Further resources on this subject: Welcome to the Spring Framework[article] Time Travelling with Spring[article] Aggregators, File exchange Over FTP/FTPS, Social Integration, and Enterprise Messaging[article]
Packt
19 Aug 2014
3 min read

The Bootstrap grid system

This article is written by Pieter van der Westhuizen, the author of Bootstrap for ASP.NET MVC. Many websites are reporting an increasing amount of mobile traffic, and this trend is expected to continue over the coming years. The Bootstrap grid system is mobile-first, which means it is designed to target devices with smaller displays and then grow as the display size increases. Fortunately, this is not something you need to be too concerned about, as Bootstrap takes care of most of the heavy lifting.

(For more resources related to this topic, see here.)

Bootstrap grid options

Bootstrap 3 introduced a number of predefined grid classes in order to specify the sizes of columns in your design. These class names are listed in the following table:

Class name   Type of device             Resolution            Container width   Column width
col-xs-*     Phones                     Less than 768 px      Auto              Auto
col-sm-*     Tablets                    Larger than 768 px    750 px            60 px
col-md-*     Desktops                   Larger than 992 px    970 px            78 px
col-lg-*     High-resolution desktops   Larger than 1200 px   1170 px          95 px

The Bootstrap grid is divided into 12 columns. When laying out your web page, keep in mind that all columns combined should total 12. To illustrate this, consider the following HTML code:

<div class="container">
  <div class="row">
    <div class="col-md-3" style="background-color:green;"><h3>green</h3></div>
    <div class="col-md-6" style="background-color:red;"><h3>red</h3></div>
    <div class="col-md-3" style="background-color:blue;"><h3>blue</h3></div>
  </div>
</div>

In the preceding code, we have a <div> element, container, with one child <div> element, row. The row div element in turn has three columns. You will notice that two of the columns have a class name of col-md-3 and one of the columns has a class name of col-md-6. When combined, they add up to 12. The preceding code will work well on all devices with a resolution of 992 pixels or higher.
To preserve the preceding layout on devices with smaller resolutions, you'll need to combine the various CSS grid classes. For example, to allow our layout to work on tablets, phones, and medium-sized desktop displays, change the HTML to the following code:

<div class="container">
  <div class="row">
    <div class="col-xs-3 col-sm-3 col-md-3" style="background-color:green;"><h3>green</h3></div>
    <div class="col-xs-6 col-sm-6 col-md-6" style="background-color:red;"><h3>red</h3></div>
    <div class="col-xs-3 col-sm-3 col-md-3" style="background-color:blue;"><h3>blue</h3></div>
  </div>
</div>

By adding the col-xs-* and col-sm-* class names to the div elements, we'll ensure that our layout will appear the same across a wide range of device resolutions.

Bootstrap HTML elements

Bootstrap provides a host of different HTML elements that are styled and ready to use. These elements include the following:

- Tables
- Buttons
- Forms
- Images
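Returning to the grid system for a moment: because every row's column spans should total exactly 12, a tiny checker makes the rule concrete. This helper is a throwaway illustration (it is not part of Bootstrap), fed with the md spans from the three-column example row above:

```javascript
// Bootstrap rows should have column spans that total exactly 12.
// A throwaway checker (not part of Bootstrap) for the md spans used
// in the three-column example row.
function isValidRow(spans) {
  var total = spans.reduce(function (sum, span) { return sum + span; }, 0);
  return total === 12;
}

isValidRow([3, 6, 3]); // true: the green/red/blue row is 3 + 6 + 3 = 12
isValidRow([3, 6, 6]); // false: past 12, the last column wraps to a new line
```

A check like this is handy when templates assemble rows dynamically, since a span total other than 12 silently produces wrapped or under-filled rows rather than an error.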