How-To Tutorials - Web Development

Setting up MongoDB

Packt
12 Aug 2016
10 min read
In this article by Samer Buna, author of the book Learning GraphQL and Relay, we're going to look at how an API is nothing without access to a database. Let's set up a local MongoDB instance, add some data to it, and make sure we can access that data through our GraphQL schema. (For more resources related to this topic, see here.)

MongoDB can be installed locally on multiple platforms. Check the documentation site for instructions for your platform (https://docs.mongodb.com/manual/installation/). For Mac, the easiest way is probably Homebrew:

~ $ brew install mongodb

Create a db folder inside a data folder. The default location is /data/db:

~ $ sudo mkdir -p /data/db

Change the owner of the /data folder to the currently logged-in user:

~ $ sudo chown -R $USER /data

Start the MongoDB server:

~ $ mongod

If everything worked correctly, we should be able to open a new terminal and test the mongo CLI:

~/graphql-project $ mongo
MongoDB shell version: 3.2.7
connecting to: test
> db.getName()
test
>

We're using MongoDB version 3.2.7 here. Make sure that you have this version or a newer version of MongoDB. Let's go ahead and create a new collection to hold some test data, and name that collection users:

> db.createCollection("users")
{ "ok" : 1 }

Now we can use the users collection to add documents that represent users. We can use the MongoDB insertOne() function for that:

> db.users.insertOne({
    firstName: "John",
    lastName: "Doe",
    email: "john@example.com"
  })

We should see an output like:

{
  "acknowledged" : true,
  "insertedId" : ObjectId("56e729d36d87ae04333aa4e1")
}

Let's go ahead and add another user:

> db.users.insertOne({
    firstName: "Jane",
    lastName: "Doe",
    email: "jane@example.com"
  })

We can now verify that we have two user documents in the users collection using:

> db.users.count()
2

MongoDB has a built-in unique object ID, which you can see in the output for insertOne(). Now that we have a running MongoDB instance and some test data in it, it's time to see how we can read this data using a GraphQL API. To communicate with MongoDB from a Node.js application, we need to install a driver. There are many options to choose from, but GraphQL requires a driver that supports promises. We will use the official MongoDB Node.js driver, which supports promises. Instructions on how to install and run the driver can be found at https://docs.mongodb.com/ecosystem/drivers/node-js/. To install the official MongoDB Node.js driver under our graphql-project app, we do:

~/graphql-project $ npm install --save mongodb
└─┬ mongodb@2.2.4

We can now use this mongodb npm package to connect to our local MongoDB server from within our Node application. In index.js:

const mongodb = require('mongodb');
const assert = require('assert');

const MONGO_URL = 'mongodb://localhost:27017/test';

mongodb.MongoClient.connect(MONGO_URL, (err, db) => {
  assert.equal(null, err);
  console.log('Connected to MongoDB server');

  // The readline interface code
});

The MONGO_URL variable value should not be hardcoded like this. Instead, we can use a Node process environment variable to set it to a certain value before executing the code. On a production machine, we would be able to use the same code and set the process environment variable to a different value.
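For instance — a small sketch that is not taken from the article — the connection string can be read from the environment with a fallback to the local test database, so the same file runs unchanged in development and production:

// Hedged sketch: prefer the MONGO_URL environment variable, but fall back
// to the local test database used throughout this article.
const MONGO_URL = process.env.MONGO_URL || 'mongodb://localhost:27017/test';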
Use the export command to set the environment variable value:

export MONGO_URL=mongodb://localhost:27017/test

Then, in the Node code, we can read the exported value using:

process.env.MONGO_URL

If we now execute the node index.js command, we should see the Connected to MongoDB server line right before we're asked for the Client Request. At this point, the Node.js process will not exit after our interaction with it. We'll need to force-exit the process with Ctrl + C to restart it.

Let's start our database API with a simple field that can answer this question: how many total users do we have in the database? The query could be something like:

{ usersCount }

To be able to use a MongoDB driver call inside our schema main.js file, we need access to the db object that the MongoClient.connect() function exposed for us in its callback. We can use the db object to count the user documents by simply running the promise:

db.collection('users').count()
  .then(usersCount => console.log(usersCount));

Since we only have access to the db object in index.js within the connect() function's callback, we need to pass a reference to that db object to our graphql() function. We can do that using the fourth argument of the graphql() function, which accepts a contextValue object of globals, and the GraphQL engine will pass this context object to all the resolver functions as their third argument. Modify the graphql function call within the readline interface in index.js to be:

graphql.graphql(mySchema, inputQuery, {}, { db }).then(result => {
  console.log('Server Answer:', result.data);
  db.close(() => rli.close());
});

The third argument to the graphql() function is called the rootValue, which gets passed as the first argument to the resolver function on the top-level type. We are not using that feature here. We passed the connected database object db as part of the global context object. This will enable us to use db within any resolver function. Note also how we're now closing the rli interface within the callback for the operation that closes the db. We should not leave any open db connections behind.

Here's how we can now use the resolver's third argument to resolve our usersCount top-level field with the db count() operation:

fields: {
  // "hello" and "diceRoll"...
  usersCount: {
    type: GraphQLInt,
    resolve: (_, args, { db }) =>
      db.collection('users').count()
  }
}

A couple of things to notice about this code: we destructured the db object from the third argument of the resolve() function so that we can use it directly (instead of context.db), and we returned the promise itself from the resolve() function. The GraphQL executor has native support for promises. Any resolve() function that returns a promise will be handled by the executor itself. The executor will either successfully resolve the promise and then resolve the query field with the promise-resolved value, or it will reject the promise and return an error to the user. We can test our query now:

~/graphql-project $ node index.js
Connected to MongoDB server
Client Request: { usersCount }
Server Answer : { usersCount: 2 }

*** #GitTag: chapter1-setting-up-mongodb ***

Setting up an HTTP interface

Let's now see how we can use the graphql() function under another interface, an HTTP one. We want our users to be able to send us a GraphQL request via HTTP.
For example, to ask for the same usersCount field, we want the users to do something like: /graphql?query={usersCount} We can use the Express.js node framework to handle and parse HTTP requests, and within an Express.js route, we can use the graphql() function. For example (don't add these lines yet): const app = express(); app.use('/graphql', (req, res) => { // use graphql.graphql() to respond with JSON objects }); However, instead of manually handling the req/res objects, there is a GraphQL Express.js middleware that we can use, express-graphql. This middleware wraps the graphql() function and prepares it to be used by Express.js directly. Let's go ahead and bring in both the Express.js library and this middleware: ~/graphql-project $ npm install --save express express-graphql ├─┬ express@4.14.0 └─┬ express-graphql@0.5.3 In index.js, we can now import both express and the express-graphql middleware: const graphqlHTTP = require('express-graphql'); const express = require('express'); const app = express(); With these imports, the middleware main function will now be available as graphqlHTTP(). We can now use it in an Express route handler. Inside the MongoClient.connect() callback, we can do: app.use('/graphql', graphqlHTTP({ schema: mySchema, context: { db } })); app.listen(3000, () => console.log('Running Express.js on port 3000') ); Note that at this point we can remove the readline interface code as we are no longer using it. Our GraphQL interface from now on will be an HTTP endpoint. The app.use line defines a route at /graphql and delegates the handling of that route to the express-graphql middleware that we imported. We pass two objects to the middleware, the mySchema object, and the context object. We're not passing any input query here because this code just prepares the HTTP endpoint, and we will be able to read the input query directly from a URL field. The app.listen() function is the call we need to start our Express.js app. Its first argument is the port to use, and its second argument is a callback we can use after Express.js has started. We can now test our HTTP-mounted GraphQL executor with: ~/graphql-project $ node index.js Connected to MongoDB server Running Express.js on port 3000 In a browser window go to: http://localhost:3000/graphql?query={usersCount} *** #GitTag: chapter1-setting-up-an-http-interface *** The GraphiQL editor The graphqlHTTP() middleware function accepts another property on its parameter object graphiql, let's set it to true: app.use('/graphql', graphqlHTTP({ schema: mySchema, context: { db }, graphiql: true })); When we restart the server now and navigate to http://localhost:3000/graphql, we'll get an instance of the GraphiQL editor running locally on our GraphQL schema: GraphiQL is an interactive playground where we can explore our GraphQL queries and mutations before we officially use them. GraphiQL is written in React and GraphQL, and it runs completely within the browser. GraphiQL has many powerful editor features such as syntax highlighting, code folding, and error highlighting and reporting. Thanks to GraphQL introspective nature, GraphiQL also has intelligent type-ahead of fields, arguments, and types. Put the cursor in the left editor area, and type a selection set: { } Place the cursor inside that selection set and press Ctrl + space. 
You should see a list of all the fields that our GraphQL schema supports, which are the three fields that we have defined so far (hello, diceRoll, and usersCount). If Ctrl + space does not work, try Cmd + space, Alt + space, or Shift + space.

The __schema and __type fields can be used to introspectively query the GraphQL schema about what fields and types it supports. When we start typing, this list gets filtered accordingly. The list also respects the context of the cursor: if we place the cursor inside the arguments of diceRoll(), we'll get the only argument we defined for diceRoll, the count argument.

Go ahead and read all the root fields that our schema supports, and see how the data gets reported on the right side as a formatted JSON object.

*** #GitTag: chapter1-the-graphiql-editor ***

Summary

In this article, we learned how to set up a local MongoDB instance and add some data to it, so that we can access that data through our GraphQL schema.

Resources for Article:

Further resources on this subject: Apache Solr and Big Data – integration with MongoDB [article] Getting Started with Java Driver for MongoDB [article] Documents and Collections in Data Modeling with MongoDB [article]


Magento Theme Development

Packt
24 Feb 2016
7 min read
In this article by Fernando J. Miguel, author of the book Magento 2 Development Essentials, we will learn the basics of theme development. Magento can be customized as per your needs because it is based on the Zend framework, adopting the Model-View-Controller (MVC) architecture as a software design pattern. When planning to create your own theme, the Magento theme process flow becomes a subject that needs to be carefully studied. Let's focus on the concepts that help you create your own theme. (For more resources related to this topic, see here.) The Magento base theme The Magento Community Edition (CE) version 1.9 comes with a new theme named rwd that implements the Responsive Web Design (RWD) practices. Magento CE's responsive theme uses a number of new technologies as follows: Sass/Compass: This is a CSS precompiler that provides a reusable CSS that can even be organized well. jQuery: This is used for customization of JavaScript in the responsive theme. jQuery operates in the noConflict() mode, so it doesn't conflict with Magento's existing JavaScript library. Basically, the folders that contain this theme are as follows: app/design/frontend/rwd skin/frontend/rwd The following image represents the folder structure: As you can see, all the files of the rwd theme are included in the app/design/frontend and skin/frontend folders: app/design/frontend: This folder stores all the .phtml visual files and .xml configurations files of all the themes. skin/frontend: This folder stores all JavaScript, CSS, and image files from their respective app/design/frontend themes folders. Inside these folders, you can see another important folder called base. The rwd theme uses some base theme features to be functional. How is it possible? Logically, Magento has distinct folders for every theme, but Magento is very smart to reuse code. Magento takes advantage of fall-back system. Let's check how it works. The fall-back system The frontend of Magento allows the designers to create new themes based on the basic theme, reusing the main code without changing its structure. The fall-back system allows us to create only the files that are necessary for the customization. To create the customization files, we have the following options: Create a new theme directory and write the entire new code Copy the files from the base theme and edit them as you wish The second option could be more productive for study purposes. You will learn basic structure by exercising the code edit. For example, let's say you want to change the header.phtml file. You can copy the header.html file from the app/design/frontend/base/default/template/page/html path to the app/design/frontend/custom_package/custom_theme/template/page/html path. In this example, if you activate your custom_theme on Magento admin panel, your custom_theme inherits all the structure from base theme, and applies your custom header.phtml on the theme. Magento packages and design themes Magento has the option to create design packages and themes as you saw on the previous example of custom_theme. This is a smart functionality because on same packages you can create more than one theme. Now, let's take a deep look at the main folders that manage the theme structure in Magento. The app/design structure In the app/design structure, we have the following folders: The folder details are as follows: adminhtml: In this folder, Magento keeps all the layout configuration files and .phtml structure of admin area. 
frontend: In this folder, Magento keeps all the theme's folders and their respective .phtml structure of site frontend. install: This folder stores all the files of installation Magento screen. The layout folder Let's take a look at the rwd theme folder: As you can see, the rwd is a theme folder and has a template folder called default. In Magento, you can create as many template folders as you wish. The layout folders allow you to define the structure of the Magento pages through the XML files. The layout XML files has the power to manage the behavior of your .phtml file: you can incorporate CSS or JavaScript to be loaded on specific pages. Every page on Magento is defined by a handle. A handle is a reference name that Magento uses to refer to a particular page. For example, the <cms_page> handle is used to control the layout of the pages in your Magento. In Magento, we have two main type of handles: Default handles: These manage the whole site Non-default handles: These manage specific parts of the site In the rwd theme, the .xml files are located in app/design/frontend/rwd/default/layout. Let's take a look at an .xml layout file example: This piece of code belongs to the page.xml layout file. We can see the <default> handle defining the .css and .js files that will be loaded on the page. The page.xml file has the same name as its respective folder in app/design/frontend/rwd/default/template/page. This is an internal Magento control. Please keep this in mind: Magento works with a predefined naming file pattern. Keeping this in your mind can avoid unnecessary errors. The template folder The template folder, taking rwd as a reference, is located at app/design/frontend/rwd/default/template. Every subdirectory of template controls a specific page of Magento. The template files are the .phtml files, a mix of HTML and PHP, and they are the layout structure files. Let's take a look at a page/1column.phtml example: The locale folder The locale folder has all the specific translation of the theme. Let's imagine that you want to create a specific translation file for the rwd theme. You can create a locale file at app/design/frontend/rwd/locale/en_US/translate.csv. The locale folder structure basically has a folder of the language (en_US), and always has the translate.csv filename. The app/locale folder in Magento is the main translation folder. You can take a look at it to better understand. But the locale folder inside the theme folder has priority in Magento loading. For example, if you want to create a Brazilian version of the theme, you have to duplicate the translate.csv file from app/design/frontend/rwd/locale/en_US/ to app/design/frontend/rwd/locale/pt_BR/. This will be very useful to those who use the theme and will have to translate it in the future. Creating new entries in translate If you want to create a new entry in your translate.csv, first of all put this code in your PHTML file: <?php echo $this->__('Translate test'); ?> In CSV file, you can put the translation in this format: 'Translate test', 'Translate test'. The SKIN structure The skin folder basically has the css and js files and images of the theme, and is located in skin/frontend/rwd/default. Remember that Magento has a filename/folder naming pattern. The skin folder named rwd will work with rwd theme folder. If Magento has rwd as a main theme and is looking for an image that is not in the skin folder, Magento will search this image in skin/base folder. Remember also that Magento has a fall-back system. 
It keeps searching through the main theme folders until it finds the correct file. Take advantage of this!

CMS blocks and pages

Magento has a flexible theme system. Beyond Magento code customization, the admin can create blocks and content in the Magento admin panel. CMS (Content Management System) pages and blocks in Magento give you the power to embed HTML code in your pages.

Summary

In this article, we covered the basic concepts of Magento themes. These may be used to change the display of the website or its functionality, and they are interchangeable across Magento installations.

Resources for Article:

Further resources on this subject: Preparing and Configuring Your Magento Website [article] Introducing Magento Extension Development [article] Installing Magento [article]


Making an App with React and Material Design

Soham Kamani
21 Mar 2016
7 min read
There has been much progression in the hybrid app development space, and also in React.js. Currently, almost all hybrid apps use cordova to build and run web applications on their platform of choice. Although learning React can be a bit of a steep curve, the benefit you get is that you are forced to make your code more modular, and this leads to huge long-term gains. This is great for developing applications for the browser, but when it comes to developing mobile apps, most web apps fall short because they fail to create the "native" experience that so many users know and love. Implementing these features on your own (through playing around with CSS and JavaScript) may work, but it's a huge pain for even something as simple as a material-design-oriented button. Fortunately, there is a library of react components to help us out with getting the look and feel of material design in our web application, which can then be ported to a mobile to get a native look and feel. This post will take you through all the steps required to build a mobile app with react and then port it to your phone using cordova. Prerequisites and dependencies Globally, you will require cordova, which can be installed by executing this line: npm install -g cordova Now that this is done, you should make a new directory for your project and set up a build environment to use es6 and jsx. Currently, webpack is the most popular build system for react, but if that's not according to your taste, there are many more build systems out there. Once you have your project folder set up, install react as well as all the other libraries you would be needing: npm init npm install --save react react-dom material-ui react-tap-event-plugin Making your app Once we're done, the app should look something like this:   If you just want to get your hands dirty, you can find the source files here. Like all web applications, your app will start with an index.html file: <html> <head> <title>My Mobile App</title> </head> <body> <div id="app-node"> </div> <script src="bundle.js" ></script> </body> </html> Yup, that's it. If you are using webpack, your CSS will be included in the bundle.js file itself, so there's no need to put "style" tags either. This is the only HTML you will need for your application. Next, let's take a look at index.js, the entry point to the application code: //index.js import React from 'react'; import ReactDOM from 'react-dom'; import App from './app.jsx'; const node = document.getElementById('app-node'); ReactDOM.render( <App/>, node ); What this does is grab the main App component and attach it to the app-node DOM node. Drilling down further, let's look at the app.jsx file: //app.jsx'use strict';import React from 'react';import AppBar from 'material-ui/lib/app-bar';import MyTabs from './my-tabs.jsx';let App = React.createClass({ render : function(){ return ( <div> <AppBar title="My App" /> <MyTabs /> </div> ); }});module.exports = App; Following react's philosophy of structuring our code, we can roughly break our app down into two parts: The title bar The tabs below The title bar is more straightforward and directly fetched from the material-ui library. All we have to do is supply a "title" property to the AppBar component. 
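Before moving on to the tabs component, a quick aside on the build step mentioned earlier: the post leaves the webpack setup open, so purely as an illustration — a hypothetical minimal configuration assuming webpack 1.x with babel-loader and the es2015/react presets, none of which is prescribed by the article — it could look like this:

// webpack.config.js — a minimal sketch (webpack 1.x syntax), assuming
// babel-loader, babel-preset-es2015, and babel-preset-react are installed.
module.exports = {
  entry: './index.js',
  output: {
    path: __dirname,
    filename: 'bundle.js'      // referenced by the script tag in index.html
  },
  module: {
    loaders: [
      {
        test: /\.jsx?$/,       // transpile both .js and .jsx sources
        exclude: /node_modules/,
        loader: 'babel',
        query: { presets: ['es2015', 'react'] }
      }
    ]
  },
  resolve: {
    extensions: ['', '.js', '.jsx']
  }
};

Any equivalent build setup that produces a bundle.js works just as well.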
MyTabs is another component that we have made, put in a different file because of the complexity: 'use strict';import React from 'react';import Tabs from 'material-ui/lib/tabs/tabs';import Tab from 'material-ui/lib/tabs/tab';import Slider from 'material-ui/lib/slider';import Checkbox from 'material-ui/lib/checkbox';import DatePicker from 'material-ui/lib/date-picker/date-picker';import injectTapEventPlugin from 'react-tap-event-plugin';injectTapEventPlugin();const styles = { headline: { fontSize: 24, paddingTop: 16, marginBottom: 12, fontWeight: 400 }};const TabsSimple = React.createClass({ render: () => ( <Tabs> <Tab label="Item One"> <div> <h2 style={styles.headline}>Tab One Template Example</h2> <p> This is the first tab. </p> <p> This is to demonstrate how easy it is to build mobile apps with react </p> <Slider name="slider0" defaultValue={0.5}/> </div> </Tab> <Tab label="Item 2"> <div> <h2 style={styles.headline}>Tab Two Template Example</h2> <p> This is the second tab </p> <Checkbox name="checkboxName1" value="checkboxValue1" label="Installed Cordova"/> <Checkbox name="checkboxName2" value="checkboxValue2" label="Installed React"/> <Checkbox name="checkboxName3" value="checkboxValue3" label="Built the app"/> </div> </Tab> <Tab label="Item 3"> <div> <h2 style={styles.headline}>Tab Three Template Example</h2> <p> Choose a Date:</p> <DatePicker hintText="Select date"/> </div> </Tab> </Tabs> )});module.exports = TabsSimple; This file has quite a lot going on, so let’s break it down step by step: We import all the components that we're going to use in our app. This includes tabs, sliders, checkboxes, and datepickers. injectTapEventPlugin is a plugin that we need in order to get tab switching to work. We decide the style used for our tabs. Next, we make our Tabs react component, which consists of three tabs: The first tab has some text along with a slider. The second tab has a group of checkboxes. The third tab has a pop-up datepicker. Each component has a few keys, which are specific to it (such as the initial value of the slider, the value reference of the checkbox, or the placeholder for the datepicker). There are a lot more properties you can assign, which are specific to each component. Building your App For building on Android, you will first need to install the Android SDK. Now that we have all the code in place, all that is left is building the app. For this, make a new directory, start a new cordova project, and add the Android platform, by running the following on your terminal: mkdir my-cordova-project cd my-cordova-project cordova create . cordova platform add android Once the installation is complete, build the code we just wrote previously. If you are using the same build system as the source code, you will have only two files, that is, index.html and bundle.min.js. Delete all the files that are currently present in the www folder of your cordova project and copy those two files there instead. You can check whether your app is working on your computer by running cordova serve and going to the appropriate address on your browser. If all is well, you can build and deploy your app: cordova build android cordova run android This will build and install the app on your Android device (provided it is in debug mode and connected to your computer). Similarly, you can build and install the same app for iOS or windows (you may need additional tools such as XCode or .NET for iOS or Windows). You can also use any other framework to build your mobile app. 
The Angular framework also comes with its own set of Material Design components.

About the Author

Soham Kamani is a full-stack web developer and electronics hobbyist. He is especially interested in JavaScript, Python, and IoT.


Classes and Instances of Ember Object Model

Packt
05 Feb 2016
12 min read
In this article by Erik Hanchett, author of the book Ember.js Cookbook, we look at Ember.js, an open source JavaScript framework that will make you more productive. It uses common idioms and practices, making it simple to create amazing single-page applications. It also lets you create code in a modular way using the latest JavaScript features. Not only that, it also has a great set of APIs to get any task done. The Ember.js community welcomes newcomers and is ready to help you when required. (For more resources related to this topic, see here.)

Working with classes and instances

Creating and extending classes is a major feature of the Ember object model. In this recipe, we'll take a look at how creating and extending objects works.

How to do it

Let's begin by creating a very simple Ember class using extend(), as follows:

const Light = Ember.Object.extend({
  isOn: false
});

This defines a new Light class with an isOn property. Light inherits properties and behavior from the Ember object, such as initializers, mixins, and computed properties.

Ember Twiddle Tip

At some point, you might need to test out small snippets of Ember code. An easy way to do this is to use a website called Ember Twiddle. From that website, you can create an Ember application and run it in the browser as if you were using Ember CLI. You can even save and share it. It offers tools similar to JSFiddle, but only for Ember. Check it out at http://ember-twiddle.com.

Once you have defined a class, you'll need to be able to create an instance of it. You can do this by using the create() method. We'll go ahead and create an instance of Light:

const bulb = Light.create();

Accessing properties within the bulb instance

We can access the properties of the bulb object using the set and get accessor methods. Let's go ahead and get the isOn property of the Light class, as follows:

console.log(bulb.get('isOn'));

The preceding code will get the isOn property from the bulb instance. To change the isOn property, we can use the set accessor method:

bulb.set('isOn', true)

The isOn property will now be set to true instead of false.

Initializing the Ember object

The init method is invoked whenever a new instance is created. This is a great place to put any code that you may need for the new instance. In our example, we'll go ahead and add an alert message that displays the default setting of the isOn property:

const Light = Ember.Object.extend({
  init(){
    alert('The isOn property is defaulted to ' + this.get('isOn'));
  },
  isOn: false
});

As soon as the Light.create line of code is executed, the instance will be created and this message will pop up on the screen: The isOn property is defaulted to false.

Subclass

Be aware that you can create subclasses of your objects in Ember. You can override methods and access the parent class by using the _super() keyword method. This is done by creating a new object that uses the Ember extend method on the parent class. Another important thing to realize is that if you're subclassing a framework class such as Ember.Component and you override the init method, you'll need to make sure that you call this._super(). If not, the component may not work properly.
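To make this concrete, here is a short sketch of such a subclass (the FlashingLight name is hypothetical and not from the book):

// A hypothetical subclass of Light that overrides init and calls the
// parent implementation with this._super() before adding its own logic.
const FlashingLight = Light.extend({
  init() {
    this._super(...arguments); // runs Light's init first (shows the alert)
    console.log('FlashingLight created, isOn = ' + this.get('isOn'));
  }
});

const flashingBulb = FlashingLight.create(); // alerts, then logs the message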
Reopening classes

At any time, you can reopen a class and define new properties or methods in it. For this, use the reopen method. In our previous example, we had an isOn property. Let's reopen the same class and add a color property using the reopen() method:

Light.reopen({
  color: 'yellow'
});

If required, you can add static methods or properties using reopenClass, as follows:

Light.reopenClass({
  wattage: 40
});

You can now access the static property:

Light.wattage

How it works

In the preceding examples, we created an Ember object using extend. This tells Ember to create a new Ember class. The extend method uses inheritance in the Ember.js framework. The Light object inherits all the methods and bindings of the Ember object. The create method also inherits from the Ember object class and returns a new instance of this class. The bulb object is the new instance of the Ember object that we created.

There's more

To use the previous examples, we can create our own module and import it into our project. To do this, create a new myObject.js file in the app folder, as follows:

// app/myObject.js
import Ember from 'ember';

export default function() {
  const Light = Ember.Object.extend({
    init(){
      alert('The isOn property is defaulted to ' + this.get('isOn'));
    },
    isOn: false
  });

  Light.reopen({
    color: 'yellow'
  });

  Light.reopenClass({
    wattage: 80
  });

  const bulb = Light.create();

  console.log(bulb.get('color'));
  console.log(Light.wattage);
}

This is the module that we can now import into any file of our Ember application. In the app folder, edit the app.js file. You'll need to add the following line at the top of the file:

// app/app.js
import myObject from './myObject';

At the bottom, before the export, add the following line:

myObject();

This will execute the myObject function that we created in the myObject.js file. After running the Ember server, you'll see the "The isOn property is defaulted to false" pop-up message.

Working with computed properties

In this recipe, we'll take a look at computed properties and how they can be used to display data, even if that data changes as the application is running.

How to do it

Let's create a new Ember.Object and add a computed property to it. Begin by creating a new description computed property. This property will reflect the status of the isOn and color properties:

const Light = Ember.Object.extend({
  isOn: false,
  color: 'yellow',
  description: Ember.computed('isOn','color',function() {
    return 'The ' + this.get('color') + ' light is set to ' + this.get('isOn');
  })
});

We can now create a new Light object and get the computed property description:

const bulb = Light.create();

bulb.get('description');
//The yellow light is set to false

The preceding example creates a computed property that depends on the isOn and color properties. When the description function is called, it returns a string describing the state of the light. Computed properties will observe changes and dynamically update whenever they occur. To see this in action, we can change the preceding example and set the isOn property to true. Use the following code to accomplish this:

bulb.set('isOn', true);
bulb.get('description')
//The yellow light is set to true

The description has been automatically updated and will now display that the yellow light is set to true.

Chaining the Light object

Ember provides a nice feature that allows computed properties to be present in other computed properties.
In the previous example, we created a description property that outputted some basic information about the Light object, as follows: Let's add another property that gives a full description: const Light = Ember.Object.extend({ isOn: false, color: 'yellow', age: null, description: Ember.computed('isOn','color',function() { return 'The ' + this.get('color') + ' light is set to ' + this.get('isOn'); }), fullDescription: Ember.computed('description','age',function() { return this.get('description') + ' and the age is ' + this.get('age') }), }); The fullDescription function returns a string that concatenates the output from description with a new string that displays the age: const bulb = Light.create({age:22}); bulb.get('fullDescription'); //The yellow light is set to false and the age is 22 In this example, during instantiation of the Light object, we set the age to 22. We can overwrite any property if required. Alias The Ember.computed.alias method allows us to create a property that is an alias for another property or object. Any call to get or set will behave as if the changes were made to the original property, as shown in the following: const Light = Ember.Object.extend({ isOn: false, color: 'yellow', age: null, description: Ember.computed('isOn','color',function() { return 'The ' + this.get('color') + ' light is set to ' + this.get('isOn'); }), fullDescription: Ember.computed('description','age',function() { return this.get('description') + ' and the age is ' + this.get('age') }), aliasDescription: Ember.computed.alias('fullDescription') }); const bulb = Light.create({age: 22}); bulb.get('aliasDescription'); //The yellow light is set to false and the age is 22. The aliasDescription will display the same text as fullDescription since it's just an alias of this object. If we made any changes later to any properties in the Light object, the alias would also observe these changes and be computed properly. How it works Computed properties are built on top of the observer pattern. Whenever an observation shows a state change, it recomputes the output. If no changes occur, then the result is cached. In other words, the computed properties are functions that get updated whenever any of their dependent value changes. You can use it in the same way that you would use a static property. They are common and useful throughout Ember and it's codebase. Keep in mind that a computed property will only update if it is in a template or function that is being used. If the function or template is not being called, then nothing will occur. This will help with performance. Working with Ember observers in Ember.js Observers are fundamental to the Ember object model. In the next recipe, we'll take our light example and add in an observer and see how it operates. How to do it To begin, we'll add a new isOnChanged observer. This will only trigger when the isOn property changes: const Light = Ember.Object.extend({ isOn: false, color: 'yellow', age: null, description: Ember.computed('isOn','color',function() { return 'The ' + this.get('color') + ' light is set to ' + this.get('isOn') }), fullDescription: Ember.computed('description','age',function() { return this.get('description') + ' and the age is ' + this.get('age') }), desc: Ember.computed.alias('description'), isOnChanged: Ember.observer('isOn',function() { console.log('isOn value changed') }) }); const bulb = Light.create({age: 22}); bulb.set('isOn',true); //console logs isOn value changed The Ember.observer isOnChanged monitors the isOn property. 
If any changes occur to this property, isOnChanged is invoked. Computed Properties vs Observers At first glance, it might seem that observers are the same as computed properties. In fact, they are very different. Computed properties can use the get and set methods and can be used in templates. Observers, on the other hand, just monitor property changes. They can neither be used in templates nor be accessed like properties. They also don't return any values. With that said, be careful not to overuse observers. For many instances, a computed property is the most appropriate solution. Also, if required, you can add multiple properties to the observer. Just use the following command: Light.reopen({ isAnythingChanged: Ember.observer('isOn','color',function() { console.log('isOn or color value changed') }) }); const bulb = Light.create({age: 22}); bulb.set('isOn',true); // console logs isOn or color value changed bulb.set('color','blue'); // console logs isOn or color value changed The isAnything observer is invoked whenever the isOn or color properties changes. The observer will fire twice as each property has changed. Synchronous issues with the Light object and observers It's very easy to get observers out of sync. For example, if a property that it observes changes, it will be invoked as expected. After being invoked, it might manipulate a property that hasn't been updated yet. This can cause synchronization issues as everything happens at the same time, as follows: The following example shows this behavior: Light.reopen({ checkIsOn: Ember.observer('isOn', function() { console.log(this.get('fullDescription')); }) }); const bulb = Light.create({age: 22}); bulb.set('isOn', true); When isOn is changed, it's not clear whether fullDescription, a computed property, has been updated yet or not. As observers work synchronously, it's difficult to tell what has been fired and changed. This can lead to unexpected behavior. To counter this, it's best to use the Ember.run.once method. This method is a part of the Ember run loop, which is Ember's way of managing how the code is executed. Reopen the Light object and you can see the following occurring: Light.reopen({ checkIsOn: Ember.observer('isOn','color', function() { Ember.run.once(this,'checkChanged'); }), checkChanged: Ember.observer('description',function() { console.log(this.get('description')); }) }); const bulb = Light.create({age: 22}); bulb.set('isOn', true); bulb.set('color', 'blue'); The checkIsOn observer calls the checkChanged observer using Ember.run.once. This method is only run once per run loop. Normally, checkChanged would be fired twice; however, since it's be called using Ember.run.once, it only outputs once. How it works Ember observers are mixins from the Ember.Observable class. They work by monitoring property changes. When any change occurs, they are triggered. Keep in mind that these are not the same as computed properties and cannot be used in templates or with getters or setters. Summary In this article you learned classes and instances. You also learned computed properties and how they can be used to display data. Resources for Article: Further resources on this subject: Introducing the Ember.JS framework [article] Building Reusable Components [article] Using JavaScript with HTML [article]


Developing a Basic Site with Node.js and Express

Packt
17 Feb 2016
21 min read
In this article, we will continue with the Express framework. It's one of the most popular frameworks available and is certainly a pioneering one. Express is still widely used and several developers use it as a starting point. (For more resources related to this topic, see here.)

Getting acquainted with Express

Express (http://expressjs.com/) is a web application framework for Node.js. It is built on top of Connect (http://www.senchalabs.org/connect/), which means that it implements middleware architecture. In the previous chapter, when exploring Node.js, we discovered the benefit of such a design decision: the framework acts as a plugin system. Thus, we can say that Express is suitable not only for simple but also for complex applications because of its architecture. We may use only some of the popular types of middleware or add a lot of features and still keep the application modular. In general, most projects in Node.js perform two functions: run a server that listens on a specific port, and process incoming requests. Express is a wrapper for these two functionalities. The following is basic code that runs the server:

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');

This is an example extracted from the official documentation of Node.js. As shown, we use the native module http and run a server on the port 1337. There is also a request handler function, which simply sends the Hello world string to the browser. Now, let's implement the same thing but with the Express framework, using the following code:

var express = require('express');
var app = express();
app.get("/", function(req, res, next) {
  res.send("Hello world");
}).listen(1337);
console.log('Server running at http://127.0.0.1:1337/');

It's pretty much the same thing. However, we don't need to specify the response headers or add a new line at the end of the string because the framework does it for us. In addition, we have a bunch of middleware available, which will help us process the requests easily. Express is like a toolbox. We have a lot of tools to do the boring stuff, allowing us to focus on the application's logic and content. That's what Express is built for: saving time for the developer by providing ready-to-use functionalities.

Installing Express

There are two ways to install Express. We'll start with the simple one and then proceed to the more advanced technique. The simpler approach generates a template, which we may use to start writing the business logic directly. In some cases, this can save us time. From another viewpoint, if we are developing a custom application, we need to use custom settings. We can also use the boilerplate, which we get with the advanced technique; however, it may not work for us.

Using package.json

Express is like every other module. It has its own place in the packages register. If we want to use it, we need to add the framework to the package.json file. The ecosystem of Node.js is built on top of the Node Package Manager. It uses the JSON file to find out what we need and installs it in the current directory. So, the content of our package.json file looks like the following code:
So, the content of our package.json file looks like the following code: { "name": "projectname", "description": "description", "version": "0.0.1", "dependencies": { "express": "3.x" } } These are the required fields that we have to add. To be more accurate, we have to say that the mandatory fields are name and version. However, it is always good to add descriptions to our modules, particularly if we want to publish our work in the registry, where such information is extremely important. Otherwise, the other developers will not know what our library is doing. Of course, there are a bunch of other fields, such as contributors, keywords, or development dependencies, but we will stick to limited options so that we can focus on Express. Once we have our package.json file placed in the project's folder, we have to call npm install in the console. By doing so, the package manager will create a node_modules folder and will store Express and its dependencies there. At the end of the command's execution, we will see something like the following screenshot: The first line shows us the installed version, and the proceeding lines are actually modules that Express depends on. Now, we are ready to use Express. If we type require('express'), Node.js will start looking for that library inside the local node_modules directory. Since we are not using absolute paths, this is normal behavior. If we miss running the npm install command, we will be prompted with Error: Cannot find module 'express'. Using a command-line tool There is a command-line instrument called express-generator. Once we run npm install -g express-generator, we will install and use it as every other command in our terminal. If you use the framework inseveral projects, you will notice that some things are repeated. We can even copy and paste them from one application to another, and this is perfectly fine. We may even end up with our own boiler plate and can always start from there. The command-line version of Express does the same thing. It accepts few arguments and based on them, creates a skeleton for use. This can be very handy in some cases and will definitely save some time. Let's have a look at the available arguments: -h, --help: This signifies output usage information. -V, --version: This shows the version of Express. -e, --ejs: This argument adds the EJS template engine support. Normally, we need a library to deal with our templates. Writing pure HTML is not very practical. The default engine is set to JADE. -H, --hogan: This argument is Hogan-enabled (another template engine). -c, --css: If wewant to use the CSS preprocessors, this option lets us use LESS(short forLeaner CSS) or Stylus. The default is plain CSS. -f, --force: This forces Express to operate on a nonempty directory. Let's try to generate an Express application skeleton with LESS as a CSS preprocessor. We use the following line of command: express --css less myapp A new myapp folder is created with the file structure, as seen in the following screenshot: We still need to install the dependencies, so cd myapp && npm install is required. We will skip the explanation of the generated directories for now and will move to the created app.js file. 
It starts with initializing the module dependencies, as follows: var express = require('express'); var path = require('path'); var favicon = require('static-favicon'); var logger = require('morgan'); var cookieParser = require('cookie-parser'); var bodyParser = require('body-parser'); var routes = require('./routes/index'); var users = require('./routes/users'); var app = express(); Our framework is express, and path is a native Node.js module. The middleware are favicon, logger, cookieParser, and bodyParser. The routes and users are custom-made modules, placed in local for the project folders. Similarly, as in the Model-View-Controller(MVC) pattern, these are the controllers for our application. Immediately after, an app variable is created; this represents the Express library. We use this variable to configure our application. The script continues by setting some key-value pairs. The next code snippet defines the path to our views and the default template engine: app.set('views', path.join(__dirname, 'views')); app.set('view engine', 'jade'); The framework uses the methods set and get to define the internal properties. In fact, we may use these methods to define our own variables. If the value is a Boolean, we can replace set and get with enable and disable. For example, see the following code: app.set('color', 'red'); app.get('color'); // red app.enable('isAvailable'); The next code adds middleware to the framework. Wecan see the code as follows: app.use(favicon()); app.use(logger('dev')); app.use(bodyParser.json()); app.use(bodyParser.urlencoded()); app.use(cookieParser()); app.use(require('less-middleware')({ src: path.join(__dirname, 'public') })); app.use(express.static(path.join(__dirname, 'public'))); The first middleware serves as the favicon of our application. The second is responsible for the output in the console. If we remove it, we will not get information about the incoming requests to our server. The following is a simple output produced by logger: GET / 200 554ms - 170b GET /stylesheets/style.css 200 18ms - 110b The json and urlencoded middleware are related to the data sent along with the request. We need them because they convert the information in an easy-to-use format. There is also a middleware for the cookies. It populates the request object, so we later have access to the required data. The generated app uses LESS as a CSS preprocessor, and we need to configure it by setting the directory containing the .less files. Eventually, we define our static resources, which should be delivered by the server. These are just few lines, but we've configured the whole application. We may remove or replace some of the modules, and the others will continue working. The next code in the file maps two defined routes to two different handlers, as follows: app.use('/', routes); app.use('/users', users); If the user tries to open a missing page, Express still processes the request by forwarding it to the error handler, as follows: app.use(function(req, res, next) { var err = new Error('Not Found'); err.status = 404; next(err); }); The framework suggests two types of error handling:one for the development environment and another for the production server. The difference is that the second one hides the stack trace of the error, which should be visible only for the developers of the application. 
As we can see in the following code, we are checking the value of the env property and handling the error differently: // development error handler if (app.get('env') === 'development') { app.use(function(err, req, res, next) { res.status(err.status || 500); res.render('error', { message: err.message, error: err }); }); } // production error handler app.use(function(err, req, res, next) { res.status(err.status || 500); res.render('error', { message: err.message, error: {} }); }); At the end, the app.js file exports the created Express instance, as follows: module.exports = app; To run the application, we need to execute node ./bin/www. The code requires app.js and starts the server, which by default listens on port 3000. #!/usr/bin/env node var debug = require('debug')('my-application'); var app = require('../app'); app.set('port', process.env.PORT || 3000); var server = app.listen(app.get('port'), function() { debug('Express server listening on port ' + server.address().port); }); The process.env declaration provides an access to variables defined in the current development environment. If there is no PORT setting, Express uses 3000 as the value. The required debug module uses a similar approach to find out whether it has to show messages to the console. Managing routes The input of our application is the routes. The user visits our page at a specific URL and we have to map this URL to a specific logic. In the context of Express, this can be done easily, as follows: var controller = function(req, res, next) { res.send("response"); } app.get('/example/url', controller); We even have control over the HTTP's method, that is, we are able to catch POST, PUT, or DELETE requests. This is very handy if we want to retain the address path but apply a different logic. For example, see the following code: var getUsers = function(req, res, next) { // ... } var createUser = function(req, res, next) { // ... } app.get('/users', getUsers); app.post('/users', createUser); The path is still the same, /users, but if we make a POST request to that URL, the application will try to create a new user. Otherwise, if the method is GET, it will return a list of all the registered members. There is also a method, app.all, which we can use to handle all the method types at once. We can see this method in the following code snippet: app.all('/', serverHomePage); There is something interesting about the routing in Express. We may pass not just one but many handlers. This means that we can create a chain of functions that correspond to one URL. For example, it we need to know if the user is logged in, there is a module for that. We can add another method that validates the current user and attaches a variable to the request object, as follows: var isUserLogged = function(req, res, next) { req.userLogged = Validator.isCurrentUserLogged(); next(); } var getUser = function(req, res, next) { if(req.userLogged) { res.send("You are logged in. Hello!"); } else { res.send("Please log in first."); } } app.get('/user', isUserLogged, getUser); The Validator class is a class that checks the current user's session. The idea is simple: we add another handler, which acts as an additional middleware. After performing the necessary actions, we call the next function, which passes the flow to the next handler, getUser. Because the request and response objects are the same for all the middlewares, we have access to the userLogged variable. This is what makes Express really flexible. 
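As one more illustration of that flexibility — a hedged sketch that is not part of the chapter, using a hypothetical /admin prefix — the same checker can be mounted once for a whole group of routes instead of being repeated in every route definition:

// Run isUserLogged for every request whose path starts with /admin.
app.use('/admin', isUserLogged);

// Each handler under /admin can now rely on req.userLogged being set.
app.get('/admin/dashboard', function(req, res, next) {
  if (req.userLogged) {
    res.send("Welcome to the dashboard.");
  } else {
    res.send("Please log in first.");
  }
});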
There are a lot of great features available, but they are optional. At the end of this chapter, we will make a simple website that implements the same logic. Handling dynamic URLs and the HTML forms The Express framework also supports dynamic URLs. Let's say we have a separate page for every user in our system. The address to those pages looks like the following code: /user/45/profile Here, 45 is the unique number of the user in our database. It's of course normal to use one route handler for this functionality. We can't really define different functions for every user. The problem can be solved by using the following syntax: var getUser = function(req, res, next) { res.send("Show user with id = " + req.params.id); } app.get('/user/:id/profile', getUser); The route is actually like a regular expression with variables inside. Later, that variable is accessible in the req.params object. We can have more than one variable. Here is a slightly more complex example: var getUser = function(req, res, next) { var userId = req.params.id; var actionToPerform = req.params.action; res.send("User (" + userId + "): " + actionToPerform) } app.get('/user/:id/profile/:action', getUser); If we open http://localhost:3000/user/451/profile/edit, we see User (451): edit as a response. This is how we can get a nice looking, SEO-friendly URL. Of course, sometimes we need to pass data via the GET or POST parameters. We may have a request like http://localhost:3000/user?action=edit. To parse it easily, we need to use the native url module, which has few helper functions to parse URLs: var getUser = function(req, res, next) { var url = require('url'); var url_parts = url.parse(req.url, true); var query = url_parts.query; res.send("User: " + query.action); } app.get('/user', getUser); Once the module parses the given URL, our GET parameters are stored in the .query object. The POST variables are a bit different. We need a new middleware to handle that. Thankfully, Express has one, which is as follows: app.use(express.bodyParser()); var getUser = function(req, res, next) { res.send("User: " + req.body.action); } app.post('/user', getUser); The express.bodyParser() middleware populates the req.body object with the POST data. Of course, we have to change the HTTP method from .get to .post or .all. If we want to read cookies in Express, we may use the cookieParser middleware. Similar to the body parser, it should also be installed and added to the package.json file. The following example sets the middleware and demonstrates its usage: var cookieParser = require('cookie-parser'); app.use(cookieParser('optional secret string')); app.get('/', function(req, res, next){ var prop = req.cookies.propName }); Returning a response Our server accepts requests, does some stuff, and finally, sends the response to the client's browser. This can be HTML, JSON, XML, or binary data, among others. As we know, by default, every middleware in Express accepts two objects, request and response. The response object has methods that we can use to send an answer to the client. Every response should have a proper content type or length. Express simplifies the process by providing functions to set HTTP headers and sending content to the browser. In most cases, we will use the .send method, as follows: res.send("simple text"); When we pass a string, the framework sets the Content-Type header to text/html. It's great to know that if we pass an object or array, the content type is application/json. 
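As a trivial illustration of that last point (not from the chapter):

// Passing an object makes Express serialize it and set
// Content-Type: application/json automatically.
res.send({ status: 'ok', items: [1, 2, 3] });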
If we develop an API, the response status code is probably going to be important for us. With Express, we are able to set it like in the following code snippet: res.send(404, 'Sorry, we cannot find that!'); It's even possible to respond with a file from our hard disk. If we don't use the framework, we will need to read the file, set the correct HTTP headers, and send the content. However, Express offers the .sendfile method, which wraps all these operations as follows: res.sendfile(__dirname + "/images/photo.jpg"); Again, the content type is set automatically; this time it is based on the filename's extension. When building websites or applications with a user interface, we normally need to serve an HTML. Sure, we can write it manually in JavaScript, but it's good practice to use a template engine. This means we save everything in external files and the engine reads the markup from there. It populates them with some data and, at the end, provides ready-to-show content. In Express, the whole process is summarized in one method, .render. However, to work properly, we have to instruct the framework regarding which template engine to use. We already talked about this in the beginning of this chapter. The following two lines of code, set the path to our views and the template engine: app.set('views', path.join(__dirname, 'views')); app.set('view engine', 'jade'); Let's say we have the following template ( /views/index.jade ): h1= title p Welcome to #{title} Express provides a method to serve templates. It accepts the path to the template, the data to be applied, and a callback. To render the previous template, we should use the following code: res.render("index", {title: "Page title here"}); The HTML produced looks as follows: <h1>Page title here</h1><p>Welcome to Page title here</p> If we pass a third parameter, function, we will have access to the generated HTML. However, it will not be sent as a response to the browser. The example-logging system We've seen the main features of Express. Now let's build something real. The next few pages present a simple website where users can read only if they are logged in. Let's start and set up the application. We are going to use Express' command-line instrument. It should be installed using npm install -g express-generator. We create a new folder for the example, navigate to it via the terminal, and execute express --css less site. A new directory, site, will be created. If we go there and run npm install, Express will download all the required dependencies. As we saw earlier, by default, we have two routes and two controllers. To simplify the example, we will use only the first one: app.use('/', routes). Let's change the views/index.jade file content to the following HTML code: doctype html html head title= title link(rel='stylesheet', href='/stylesheets/style.css') body h1= title hr p That's a simple application using Express. Now, if we run node ./bin/www and open http://127.0.0.1:3000, we will see the page. Jade uses indentation to parse our template. So, we should not mix tabs and spaces. Otherwise, we will get an error. Next, we need to protect our content. We check whether the current user has a session created; if not, a login form is shown. It's the perfect time to create a new middleware. To use sessions in Express, install an additional module: express-session. 
We need to open our package.json file and add the following line of code: "express-session": "~1.0.0" Once we do that, a quick run of npm install will bring the module to our application. All we have to do is use it. The following code goes to app.js: var session = require('express-session'); app.use(session({ secret: 'app', cookie: { maxAge: 60000 }})); var verifyUser = function(req, res, next) { if(req.session.loggedIn) { next(); } else { res.send("show login form"); } } app.use('/', verifyUser, routes); Note that we changed the original app.use('/', routes) line. The session middleware is initialized and added to Express. The verifyUser function is called before the page rendering. It uses the req.session object, and checks whether there is a loggedIn variable defined and if its value is true. If we run the script again, we will see that the show login form text is shown for every request. It's like this because no code sets the session exactly the way we want it. We need a form where users can type their username and password. We will process the result of the form and if the credentials are correct, the loggedIn variable will be set to true. Let's create a new Jade template, /views/login.jade: doctype html html head title= title link(rel='stylesheet', href='/stylesheets/style.css') body h1= title hr form(method='post') label Username: br input(type='text', name='username') br label Password: br input(type='password', name='password') br input(type='submit') Instead of sending just a text with res.send("show login form"); we should render the new template, as follows: res.render("login", {title: "Please log in."}); We choose POST as the method for the form. So, we need to add the middleware that populates the req.body object with the user's data, as follows: app.use(bodyParser()); Process the submitted username and password as follows: var verifyUser = function(req, res, next) { if(req.session.loggedIn) { next(); } else { var username = "admin", password = "admin"; if(req.body.username === username && req.body.password === password) { req.session.loggedIn = true; res.redirect('/'); } else { res.render("login", {title: "Please log in."}); } } } The valid credentials are set to admin/admin. In a real application, we may need to access a database or get this information from another place. It's not really a good idea to place the username and password in the code; however, for our little experiment, it is fine. The previous code checks whether the passed data matches our predefined values. If everything is correct, it sets the session, after which the user is forwarded to the home page. Once you log in, you should be able to log out. Let's add a link for that just after the content on the index page (views/index.jade ): a(href='/logout') logout Once users clicks on this link, they will be forward to a new page. We just need to create a handler for the new route, remove the session, and forward them to the index page where the login form is reflected. Here is what our logging out handler looks like: // in app.js var logout = function(req, res, next) { req.session.loggedIn = false; res.redirect('/'); } app.all('/logout', logout); Setting loggedIn to false is enough to make the session invalid. The redirect sends users to the same content page they came from. However, this time, the content is hidden and the login form pops up. Summary In this article, we learned about one of most widely used Node.js frameworks, Express. 
We discussed its fundamentals, how to set it up, and its main characteristics. The middleware architecture, which we mentioned in the previous chapter, is the base of the library and gives us the power to write complex but, at the same time, flexible applications. The example we used was a simple one. We required a valid session to provide page access. However, it illustrates the usage of the body parser middleware and the process of registering the new routes. We also updated the Jade templates and saw the results in the browser. For more information on Node.js Refer to the following URLs: https://www.packtpub.com/web-development/instant-nodejs-starter-instant https://www.packtpub.com/web-development/learning-nodejs-net-developers https://www.packtpub.com/web-development/nodejs-essentials Resources for Article: Further resources on this subject: Writing a Blog Application with Node.js and AngularJS [article] Testing in Node and Hapi [article] Learning Node.js for Mobile Application Development [article]

article-image-asking-if-developers-can-be-great-entrepreneurs-is-like-asking-if-moms-can-excel-at-work-and-motherhood
Aarthi Kumaraswamy
05 Jun 2018
13 min read
Save for later

Asking if good developers can be great entrepreneurs is like asking if moms can excel at work and motherhood

Aarthi Kumaraswamy
05 Jun 2018
13 min read
You heard me right. Asking, ‘can developers succeed at entrepreneurship?’ is like asking ‘if women should choose between work and having children’. ‘Can you have a successful career and be a good mom?’ is a question that many well-meaning acquaintances still ask me. You see I am a new mom, (not a very new one, my son is 2.5 years old now). I’m also the managing editor of this site since its inception last year when I rejoined work after a maternity break. In some ways, the Packt Hub feels like running a startup too. :) Now how did we even get to this question? It all started with the results of this year's skill up survey. Every year we conduct a survey amongst developers to know the pulse of the industry and to understand their needs, aspirations, and apprehensions better so that we can help them better to skill up. One of the questions we asked them this year was: ‘Where do you see yourself in the next 5 years?’ To this, an overwhelming one third responded stating they want to start a business. Surveys conducted by leading consultancies, time and again, show that only 2 or 3 in 10 startups survive. Naturally, a question that kept cropping up in our editorial discussions after going through the above response was: Can developers also succeed as entrepreneurs? To answer this, first let’s go back to the question: Can you have a successful career and be a good mom? The short answer is, Yes, it is possible to be both, but it will be hard work. The long answer is, This path is not for everyone. As a working mom, you need to work twice as hard, befriend uncertainty and nurture a steady support system that you trust both at work and home to truly flourish. At times, when you see your peers move ahead in their careers or watch stay-at-home moms with their kids, you might feel envy, inadequacy or other unsavory emotions in between. You need superhuman levels of mental, emotional and physical stamina to be the best version of yourself to move past such times with grace, and to truly appreciate the big picture: you have a job you love and a kid who loves you. But what has my experience as a working mom got to do anything with developers who want to start their own business? You’d be surprised to see the similarities. Starting a business is, in many ways, like having a baby. There is a long incubation period, then the painful launch into the world followed by sleepless nights of watching over the business so it doesn’t choke on itself while you were catching a nap. And you are doing this for the first time too, even if you have people giving you advice. Then there are those sustenance issues to look at and getting the right people in place to help the startup learn to turn over, sit up, stand up, crawl and then finally run and jump around till it makes you go mad with joy and apprehension at the same time thinking about what’s next in store now. How do I scale my business? Does my business even need me? To me, being a successful developer, writer, editor or any other profession, for that matter, is about being good at what you do (write code, write stories, spot raw talent, and bring out the best in others etc) and enjoying doing it immensely. It requires discipline, skill, expertise, and will. To see if the similarities continue, let’s try rewriting my statement for a developer turned entrepreneur. Can you be a good developer and a great entrepreneur? This path is not for everyone. 
As a developer-turned-entrepreneur, you need to work twice as hard as your friends who have a full-time job, to make it through the day wearing many hats and putting out workplace fires that have got nothing to do with your product development. You need a steady support system both at work and home to truly flourish. At times, when you see others move ahead in their careers or listen to entrepreneurs who have sold their business to larger companies or just got VC funded, you might feel envy, selfishness, inadequacy or any other emotion in between. You need superhuman levels of maturity to move past such times, and to truly appreciate the big picture: you built something incredible and now you are changing the world, even if it is just one customer at a time with your business. Now that we sail on the same boat, here are the 5 things I learned over the last year as a working mom that I hope you as a developer-entrepreneur will find useful. I’d love to hear your take on them in the comments below. [divider style="normal" top="20" bottom="20"] #1. Become a productivity master. Compartmentalize your roles and responsibilities into chunks spread across the day. Ruthlessly edit out clutter from your life. [divider style="normal" top="20" bottom="20"] Your life changed forever when your child (business) was born. What worked for you as a developer may not work as an entrepreneur. The sooner you accept this, the faster you will see the differences and adapt accordingly. For example, I used to work quite late into the night and wake up late in the morning before my son was born. I also used to binge watch TV shows during weekends. Now I wake up as early as 4.30 AM so that I can finish off the household chores for the day and get my son ready for play school by 9 AM. I don’t think about work at all at this time. I must accommodate my son during lunch break and have a super short lunch myself. But apart from that I am solely focused on the work at hand during office hours and don’t think about anything else. My day ends at 11 PM. Now I am more choosy about what I watch as I have very less time for leisure. I instead prefer to ‘do’ things like learning something new, spending time bonding with my son, or even catching up on sleep, on weekends. This spring-cleaning and compartmentalization took a while to get into habit, but it is worth the effort. Truth be told, I still occasionally fallback to binging, like I am doing right now with this article, writing it at 12 AM on a Saturday morning because I’m in a flow. As a developer-entrepreneur, you might be tempted to spend most of your day doing what you love, i.e., developing/creating something because it is a familiar territory and you excel at that. But doing so at the cost of your business will cost you your business, sooner than you think. Resist the urge and instead, organize your day and week such that you spend adequate time on key aspects of your business including product development. Make a note of everything you do for a few days and then decide what’s not worth doing and what else can you do instead in its place. Have room only for things that matter to you which enrich your business goals and quality of life. [divider style="normal" top="20" bottom="20"] #2. Don’t aim to create the best, aim for good enough. Ship the minimum viable product (MVP). [divider style="normal" top="20" bottom="20"] All new parents want the best for their kids from the diaper they use to the food they eat and the toys they play with. 
They can get carried away buying stuff that they think their child needs only to have a storeroom full of unused baby items. What I’ve realized is that babies actually need very less stuff. They just need basic food, regular baths, clean diapers (you could even get away without those if you are up for the challenge!), sleep, play and lots of love (read mommy and daddy time, not gifts). This is also true for developers who’ve just started up. They want to build a unique product that the world has never seen before and they want to ship the perfect version with great features and excellent customer reviews. But the truth is, your first product is, in fact, a prototype built on your assumption of what your customer wants. For all you know, you may be wrong about not just your customer’s needs but even who your customer is. This is why a proper market study is key to product development. Even then, the goal should be to identify the key features that will make your product viable. Ship your MVP, seek quick feedback, iterate and build a better product in the next version. This way you haven’t unnecessarily sunk upfront costs, you’re ahead of your competitors and are better at estimating customer needs as well. Simplicity is the highest form of sophistication. You need to find just the right elements to keep in your product and your business model. As Michelangelo would put it, “chip away the rest”. [divider style="normal" top="20" bottom="20"] #3. There is no shortcut to success. Don’t compromise on quality or your values. [divider style="normal" top="20" bottom="20"] The advice to ship a good enough product is not a permission to cut corners. Since their very first day, babies watch and learn from us. They are keen observers and great mimics. They will feel, talk and do what we feel, say and do, even if they don’t understand any of the words or gestures. I think they do understand emotions. One more reason for us to be better role models for our children. The same is true in a startup. The culture mimics the leader and it quickly sets in across the organization. As a developer, you may have worked long hours, even into the night to get that piece of code working and felt a huge rush from the accomplishment. But as an entrepreneur remember that you are being watched and emulated by those who work for you. You are what they want to be when they ‘grow up’. Do you want a crash and burn culture at your startup? This is why it is crucial that you clarify your business’s goals, purpose, and values to everyone in your company, even if it just has one other person working there. It is even more important that you practice what you preach. Success is an outcome of right actions taken via the right habits cultivated. [divider style="normal" top="20" bottom="20"] #4. You can’t do everything yourself and you can’t be everywhere. You need to trust others to do their job and appreciate them for a job well done. This also means you hire the right people first. [divider style="normal" top="20" bottom="20"] It takes a village to raise a child, they say. And it is very true, especially in families where both parents work. It would’ve been impossible for me to give my 100% at work, if I had to keep checking in on how my son is doing with his grandparents or if I refused the support my husband offers sharing household chores. Just because you are an exceptional developer, you can’t keep micromanaging product development at your startup. 
As a founder, you have a larger duty towards your entire business and customers. While it is good to check how your product is developing, your pure developer days are behind. Find people better than you at this job and surround yourself with people who are good at what they do, and share your values for key functions of your startup. Products succeed because of developers, businesses succeed because their leader knew when to get out of the way and when to step in. [divider style="normal" top="20" bottom="20"] #5. Customer first, product next, ego last. [divider style="normal" top="20" bottom="20"] This has been the toughest lesson so far. It looks deceptively simple in theory but hard to practice in everyday life. As developers and engineers, we are primed to find solutions. We also see things in binaries: success and failure, right and wrong, good and bad. This is a great quality to possess to build original products. However, it is also the Achilles heel for the developer turned entrepreneur. In business, things are seldom in black and white. Putting product first can be detrimental to knowing your customers. For example, you build a great product, but the customer is not using it as you intended it to be used. Do you train your customers to correct their ways or do you unearth the underlying triggers that contribute to that customer behavior and alter your product’s vision? The answer is, ‘it depends’. And your job as an entrepreneur is to enable your team to find the factors that influence your decision on the subject; it is not to find the answer yourself or make a decision based on your experience. You need to listen more than you talk to truly put your customer first. To do that you need to acknowledge that you don’t know everything. Put your ego last. Make it easy for customers to share their views with you directly. Acknowledge that your product/service exists to serve your customers’ needs. It needs to evolve with your customer. Find yourself a good mentor or two who you respect and want to emulate. They will be your sounding board and the light at the end of the tunnel during your darkest hours (you will have many of those, I guarantee). Be grateful for your support network of family, friends, and colleagues and make sure to let them know how much you appreciate them for playing their part in your success. If you have the right partner to start the business, jump in with both feet. Most tech startups have a higher likelihood of succeeding when they have more than one founder. Probably because the demands of tech and business are better balanced on more than one pair of shoulders. [dropcap]H[/dropcap]ow we frame our questions is a big part of the problem. Reframing them makes us see different perspectives thereby changing our mindsets. Instead of asking ‘can working moms have it all?’, what if we asked ‘what can organizations do to help working moms achieve work-life balance?’, ‘how do women cope with competing demands at work and home?’ Instead of asking ‘can developers be great entrepreneurs?’ better questions to ask are ‘what can developers do to start a successful business?’, ‘what can we learn from those who have built successful companies?’ Keep an open mind; the best ideas may come from the unlikeliest sources! 1 in 3 developers wants to be an entrepreneur. What does it take to make a successful transition? Developers think managers don’t know enough about technology. And that’s hurting business. 96% of developers believe developing soft skills is important  
article-image-building-grid-system-susy
Packt
09 Aug 2016
14 min read
Save for later

Building a Grid System with Susy

Packt
09 Aug 2016
14 min read
In this article by Luke Watts, author of the book Mastering Sass, we will build a responsive grid system using the Susy library and a few custom mixins and functions. We will set a configuration map with our breakpoints which we will then loop over to automatically create our entire grid, using interpolation to create our class names. (For more resources related to this topic, see here.) Detailing the project requirements For this example, we will need bower to download Susy. After Susy has been downloaded we will only need two files. We'll place them all in the same directory for simplicity. These files will be style.scss and _helpers.scss. We'll place the majority of our SCSS code in style.scss. First, we'll import susy and our _helpers.scss at the beginning of this file. After that we will place our variables and finally our code which will create our grid system. Bower and Susy To check if you have bower installed open your command line (Terminal on Unix or CMD on Windows) and run: bower -v If you see a number like "1.7.9" you have bower. If not you will need to install bower using npm, a package manager for NodeJS. If you don't already have NodeJS installed, you can download it from: https://nodejs.org/en/. To install bower from your command line using npm you will need to run: npm install -g bower Once bower is installed cd into the root of your project and run: bower install susy This will create a directory called bower_components. Inside that you will find a folder called susy. The full path to file we will be importing in style.scss is bower_components/susy/sass/_susy.scss. However we can leave off the underscore (_) and also the extension (.scss). Sass will still load import the file just fine. In style.scss add the following at the beginning of our file: // style.scss @import 'bower_components/susy/sass/susy'; Helpers (mixins and functions) Next, we'll need to import our _helpers.scss file in style.scss. Our _helpers.scss file will contain any custom mixins or functions we'll create to help us in building our grid. In style.scss import _helpers.scss just below where we imported Susy: // style.scss @import 'bower_components/susy/sass/susy'; @import 'helpers'; Mixin: bp (breakpoint) I don't know about you, but writing media queries always seems like bit of a chore to me. I just don't like to write (min-width: 768px) all the time. So for that reason I'm going to include the bp mixin, which means instead of writing: @media(min-width: 768px) { // ... } We can simply use: @include bp(md) { // ... } First we are going to create a map of our breakpoints. Add the $breakpoints map to style.scss just below our imports: // style.scss @import 'bower_components/susy/sass/susy'; @import 'helpers'; $breakpoints: ( sm: 480px, md: 768px, lg: 980px ); Then, inside _helpers.scss we're going to create our bp mixin which will handle creating our media queries from the $breakpoints map. Here's the breakpoint (bp) mixin: @mixin bp($size: md) { @media (min-width: map-get($breakpoints, $size)) { @content; } } Here we are setting the default breakpoint to be md (768px). We then use the built in Sass function map-get to get the relevant value using the key ($size). Inside our @media rule we use the @content directive which will allows us pass any Sass or CSS directly into our bp mixin to our @media rule. The container mixin The container mixin sets the max-width of the containing element, which will be the .container element for now. 
However, it is best to use the container mixin to semantically restrict certain parts of the design to your max width instead of using presentational classes like container or row. The container mixin takes a width argument, which will be the max-width. It also automatically applies the micro-clearfix hack. This prevents the containers height from collapsing when the elements inside it are floated. I prefer the overflow: hidden method myself, but they do the same thing essentially. By default, the container will be set to max-width: 100%. However, you can set it to be any valid unit of dimension, such as 60em, 1160px, 50%, 90vw, or whatever. As long as it's a valid CSS unit it will work. In style.scss let's create our .container element using the container mixin: // style.scss .container { @include container(1160px); } The preceding code will give the following CSS output: .container { max-width: 1160px; margin-left: auto; margin-right: auto; } .container:after { content: " "; display: block; clear: both; } Due to the fact the container uses a max-width we don't need to specify different dimensions for various screen sizes. It will be 100% until the screen is above 1160px and then the max-width value will kick in. The .container:after rule is the micro-clearfix hack. The span mixin To create columns in Susy we use the span mixin. The span mixin sets the width of that element and applies a padding or margin depending on how Susy is set up. By default, Susy will apply a margin to the right of each column, but you can set it to be on the left, or to be padding on the left or right or padding or margin on both sides. Susy will do the necessary work to make everything work behind the scenes. To create a half width column in a 12 column grid you would use: .col-6 { @include span(6 of 12); } The of 12 let's Susy know this is a 12 column grid. When we define our $susy map later we can tell Susy how many columns we are using via the columns property. This means we can drop the of 12 part and simply use span(6) instead. Susy will then know we are using 12 columns unless we explicitly pass another value. The preceding SCSS will output: .col-6 { width: 49.15254%; float: left; margin-right: 1.69492%; } Notice the width and margin together would actually be 50.84746%, not 50% as you might expect. Therefor two of these column would actually be 101.69492%. That will cause the last column to wrap to the next row. To prevent this, you would need to remove the margin from the last column. The last keyword To address this, Susy uses the last keyword. When you pass this to the span mixin it lets Susy know this is the last column in a row. This removes the margin right and also floats the element in question to the right to ensure it's at the very end of the row. Let's take the previous example where we would have two col-6 elements. We could create a class of col-6-last and apply the last keyword to that span mixin: .col-6 { @include span(6 of 12); &-last { @include span(last 6 of 12) } } The preceding SCSS will output: .col-6 { width: 49.15254%; float: left; margin-right: 1.69492%; } .col-6-last { width: 49.15254%; float: right; margin-right: 0; } You can also place the last keyword at the end. This will also work: .col-6 { @include span(6 of 12); &-last { @include span(6 of 12 last) } } The $susy configuration map Susy allows for a lot of configuration through its configuration map which is defined as $susy. 
The settings in the $susy map allow us to set how wide the container should be, how many columns our grid should have, how wide the gutters are, whether those gutters should be margins or padding, and whether the gutters should be on the left, right or both sides of each column. Actually, there are even more settings available depending what type of grid you'd like to build. Let's, define our $susy map with the container set to 1160px just after our $breakpoints map: // style.scss $susy: ( container: 1160px, columns: 12, gutters: 1/3 ); Here we've set our containers max-width to be 1160px. This is used when we use the container mixin without entering a value. We've also set our grid to be 12 columns with the gutters, (padding or margin) to be 1/3 the width of a column. That's about all we need to set for our purposes, however, Susy has a lot more to offer. In fact, to cover everything in Susy would need an entirely book of its own. If you want to explore more of what Susy can do you should read the documentation at http://susydocs.oddbird.net/en/latest/. Setting up a grid system We've all used a 12 column grid which has various sizes (small, medium, large) or a set breakpoint (or breakpoints). These are the most popular methods for two reasons...it works, and it's easy to understand. Furthermore, with the help of Susy we can achieve this with less than 30 lines of Sass! Don't believe me? Let's begin. The concept of our grid system Our grid system will be similar to that of Foundation and Bootstrap. It will have 3 breakpoints and will be mobile-first. It will have a container, which will act as both .container and .row, therefore removing the need for a .row class. The breakpoints Earlier we defined three sizes in our $breakpoints map. These were: $breakpoints: ( sm: 480px, md: 768px, lg: 980px ); So our grid will have small, medium and large breakpoints. The columns naming convention Our columns will use a similar naming convention to that of Bootstrap. There will be four available sets of columns. The first will start from 0px up to the 399px (example: .col-12) The next will start from 480px up to 767px (example: .col-12-sm) The medium will start from 768px up to 979px (example: .col-12-md) The large will start from 980px (example: .col-12-lg) Having four options will give us the most flexibility. Building the grid From here we can use an @for loop and our bp mixin to create our four sets of classes. Each will go from 1 through 12 (or whatever our Susy columns property is set to) and will use the breakpoints we defined for small (sm), medium (md) and large (lg). In style.scss add the following: // style.scss @for $i from 1 through map-get($susy, columns) { .col-#{$i} { @include span($i); &-last { @include span($i last); } } } These 9 lines of code are responsible for our mobile-first set of column classes. This loops from one through 12 (which is currently the value of the $susy columns property) and creates a class for each. It also adds a class which handles removing the final columns right margin so our last column doesn't wrap onto a new line. Having control of when this happens will give us the most control. The preceding code would create: .col-1 { width: 6.38298%; float: left; margin-right: 2.12766%; } .col-1-last { width: 6.38298%; float: right; margin-right: 0; } /* 2, 3, 4, and so on up to col-12 */ That means our loop which is only 9 lines of Sass will generate 144 lines of CSS! Now let's create our 3 breakpoints. 
We'll use an @each loop to get the sizes from our $breakpoints map. This will mean if we add another breakpoint, such as extra-large (xl) it will automatically create the correct set of classes for that size. @each $size, $value in $breakpoints { // Breakpoint will go here and will use $size } Here we're looping over the $breakpoints map and setting a $size variable and a $value variable. The $value variable will not be used, however the $size variable will be set to small, medium and large for each respective loop. We can then use that to set our bp mixin accordingly: @each $size, $value in $breakpoints { @include bp($size) { // The @for loop will go here similar to the above @for loop... } } Now, each loop will set a breakpoint for small, medium and large, and any additional sizes we might add in the future will be generated automatically. Now we can use the same @for loop inside the bp mixin with one small change, we'll add a size to the class name: @each $size, $value in $breakpoints { @include bp($size) { @for $i from 1 through map-get($susy, columns) { .col-#{$i}-#{$size} { @include span($i); &-last { @include span($i last); } } } } } That's everything we need for our grid system. Here's the full stye.scss file: / /style.scss @import 'bower_components/susy/sass/susy'; @import 'helpers'; $breakpoints: ( sm: 480px, md: 768px, lg: 980px ); $susy: ( container: 1160px, columns: 12, gutters: 1/3 ); .container { @include container; } @for $i from 1 through map-get($susy, columns) { .col-#{$i} { @include span($i); &-last { @include span($i last); } } } @each $size, $value in $breakpoints { @include bp($size) { @for $i from 1 through map-get($susy, columns) { .col-#{$i}-#{$size} { @include span($i); &-last { @include span($i last); } } } } } With our bp mixin that's 45 lines of SCSS. And how many lines of CSS does that generate? Nearly 600 lines of CSS! Also, like I've said, if we wanted to create another breakpoint it would only require a change to the $breakpoint map. Then, if we wanted to have 16 columns instead we would only need to the $susy columns property. The above code would then automatically loop over each and create the correct amount of columns for each breakpoint. Testing our grid Next we need to check our grid works. We mainly want to check a few column sizes for each breakpoint and we want to be sure our last keyword is doing what we expect. I've created a simple piece of HTML to do this. I've also add a small bit of CSS to the file to correct box-sizing issues which will happen because of the additional 1px border. I've also restricted the height so text which wraps to a second line won't affect the heights. This is simply so everything remains in line so it's easy to see our widths are working. I don't recommend setting heights on elements. EVER. Instead using padding or line-height if you can to give an element more height and let the content dictate the size of the element. 
Create a file called index.html in the root of the project and inside add the following: <!doctype html> <html lang="en-GB"> <head> <meta charset="UTF-8"> <title>Susy Grid Test</title> <link rel="stylesheet" type="text/css" href="style.css" /> <style type="text/css"> *, *::before, *::after { box-sizing: border-box; } [class^="col"] { height: 1.5em; background-color: grey; border: 1px solid black; } </style> </head> <body> <div class="container"> <h1>Grid</h1> <div class="col-12 col-10-sm col-2-md col-10-lg">.col-sm-10.col-2-md.col-10-lg</div> <div class="col-12 col-2-sm-last col-10-md-last col-2-lg-last">.col-sm-2-last.col-10-md-last.col-2-lg-last</div> <div class="col-12 col-9-sm col-3-md col-9-lg">.col-sm-9.col-3-md.col-9-lg</div> <div class="col-12 col-3-sm-last col-9-md-last col-3-lg-last">.col-sm-3-last.col-9-md-last.col-3-lg-last</div> <div class="col-12 col-8-sm col-4-md col-8-lg">.col-sm-8.col-4-md.col-8-lg</div> <div class="col-12 col-4-sm-last col-8-md-last col-4-lg-last">.col-sm-4-last.col-8-md-last.col-4-lg-last</div> <div class="col-12 col-7-sm col-md-5 col-7-lg">.col-sm-7.col-md-5.col-7-lg</div> <div class="col-12 col-5-sm-last col-7-md-last col-5-lg-last">.col-sm-5-last.col-7-md-last.col-5-lg-last</div> <div class="col-12 col-6-sm col-6-md col-6-lg">.col-sm-6.col-6-md.col-6-lg</div> <div class="col-12 col-6-sm-last col-6-md-last col-6-lg-last">.col-sm-6-last.col-6-md-last.col-6-lg-last</div> </div> </body> </html> Use your dev tools responsive tools or simply resize the browser from full size down to around 320px and you'll see our grid works as expected. Summary In this article we used Susy grids as well as a simple breakpoint mixin (bp) to create a solid, flexible grid system. With just under 50 lines of Sass we generated our grid system which consists of almost 600 lines of CSS.  Resources for Article: Further resources on this subject: Implementation of SASS [article] Use of Stylesheets for Report Designing using BIRT [article] CSS Grids for RWD [article]

article-image-configuring-manage-out-directaccess-clients
Packt
14 Oct 2013
14 min read
Save for later

Configuring Manage Out to DirectAccess Clients

Packt
14 Oct 2013
14 min read
In this article by Jordan Krause, the author of the book Microsoft DirectAccess Best Practices and Troubleshooting, we will have a look at how Manage Out is configured to DirectAccess clients. DirectAccess is obviously a wonderful technology from the user's perspective. There is literally nothing that they have to do to connect to company resources; it just happens automatically whenever they have Internet access. What isn't talked about nearly as often is the fact that DirectAccess is possibly of even greater benefit to the IT department. Because DirectAccess is so seamless and automatic, your Group Policy settings, patches, scripts, and everything that you want to use to manage and manipulate those client machines is always able to run. You no longer have to wait for the user to launch a VPN or come into the office for their computer to be secured with the latest policies. You no longer have to worry about laptops being off the network for weeks at a time, and coming back into the network after having been connected to dozens of public hotspots while someone was on a vacation with it. While many of these management functions work right out of the box with a standard DirectAccess configuration, there are some functions that will need a couple of extra steps to get them working properly. That is our topic of discussion for this article. We are going to cover the following topics: Pulls versus pushes What does Manage Out have to do with IPv6 Creating a selective ISATAP environment Setting up client-side firewall rules RDP to a DirectAccess client No ISATAP with multisite DirectAccess (For more resources related to this topic, see here.) Pulls versus pushes Often when thinking about management functions, we think of them as the software or settings that are being pushed out to the client computers. This is actually not true in many cases. A lot of management tools are initiated on the client side, and so their method of distributing these settings and patches are actually client pulls. A pull is a request that has been initiated by the client, and in this case, the server is simply responding to that request. In the DirectAccess world, this kind of request is handled very differently than an actual push, which would be any case where the internal server or resource is creating the initial outbound communication with the client, a true outbound initiation of packets. Pulls typically work just fine over DirectAccess right away. For example, Group Policy processing is initiated by the client. When a laptop decides that it's time for a Group Policy refresh, it reaches out to Active Directory and says "Hey AD, give me my latest stuff". The Domain Controllers then simply reply to that request, and the settings are pulled down successfully. This works all day, every day over DirectAccess. Pushes, on the other hand, require some special considerations. This scenario is what we commonly refer to as DirectAccess Manage Out, and this does not work by default in a stock DirectAccess implementation. What does Manage Out have to do with IPv6? IPv6 is essentially the reason why Manage Out does not work until we make some additions to your network. "But DirectAccess in Server 2012 handles all of my IPv6 to IPv4 translations so that my internal network can be all IPv4, right?" The answer to that question is yes, but those translations only work in one direction. 
For example, in our previous Group Policy processing scenario, the client computer calls out for Active Directory, and those packets traverse the DirectAccess tunnels using IPv6. When the packets get to the DirectAccess server, it sees that the Domain Controller is IPv4, so it translates those IPv6 packets into IPv4 and sends them on their merry way. Domain Controller receives the said IPv4 packets, and responds in kind back to the DirectAccess server. Since there is now an active thread and translation running on the DirectAccess server for that communication, it knows that it has to take those IPv4 packets coming back (as a response) from the Domain Controller and spin them back into IPv6 before sending across the IPsec tunnels back to the client. This all works wonderfully and there is absolutely no configuration that you need to do to accomplish this behavior. However, what if you wanted to send packets out to a DirectAccess client computer from inside the network? One of the best examples of this is a Helpdesk computer trying to RDP into a DirectAccess-connected computer. To accomplish this, the Helpdesk computer needs to have IPv6 routability to the DirectAccess client computer. Let's walk through the flow of packets to see why this is necessary. First of all, if you have been using DirectAccess for any duration of time, you might have realized by now that the client computers register themselves in DNS with their DirectAccess IP addresses when they connect. This is a normal behavior, but what may not look "normal" to you is that those records they are registering are AAAA (IPv6) records. Remember, all DirectAccess traffic across the internet is IPv6, using one of these three transition technologies to carry the packets: 6to4, Teredo, or IP-HTTPS. Therefore, when the clients connect, whatever transition tunnel is established has an IPv6 address on the adapter (you can see it inside ipconfig /all on the client), and those addresses will register themselves in DNS, assuming your DNS is configured to allow it. When the Helpdesk personnel types CLIENT1 in their RDP client software and clicks on connect, it is going to reach out to DNS and ask, "What is the IP address of CLIENT1?" One of two things is going to happen. If that Helpdesk computer is connected to an IPv4-only network, it is obviously only capable of transmitting IPv4 packets, and DNS will hand them the DirectAccess client computer's A (IPv4) record, from the last time the client was connected inside the office. Routing will fail, of course, because CLIENT1 is no longer sitting on the physical network. The following screenshot is an example of pinging a DirectAccess connected client computer from an IPv4 network: How to resolve this behavior? We need to give that Helpdesk computer some form of IPv6 connectivity on the network. If you have a real, native IPv6 network running already, you can simply tap into it. Each of your internal machines that need this outbound routing, as well as the DirectAccess server or servers, all need to be connected to this network for it to work. However, I find out in the field that almost nobody is running any kind of IPv6 on their internal networks, and they really aren't interested in starting now. This is where the Intra-Site Automatic Tunnel Addressing Protocol, more commonly referred to as ISATAP, comes into play. You can think of ISATAP as a virtual IPv6 cloud that runs on top of your existing IPv4 network. 
It enables computers inside your network, like that Helpdesk machine, to be able to establish an IPv6 connection with an ISATAP router. When this happens, that Helpdesk machine will get a new network adapter, visible via ipconfig / all, named ISATAP, and it will have an IPv6 address. Yes, this does mean that the Helpdesk computer, or any machine that needs outbound communications to the DirectAccess clients, has to be capable of talking IPv6, so this typically means that those machines must be Windows 7, Windows 8, Server 2008, or Server 2012. What if your switches and routers are not capable of IPv6? No problem. Similar to the way that 6to4, Teredo, and IP-HTTPS take IPv6 packets and wrap them inside IPv4 so they can make their way across the IPv4 internet, ISATAP also takes IPv6 packets and encapsulates them inside IPv4 before sending them across the physical network. This means that you can establish this ISATAP IPv6 network on top of your IPv4 network, without needing to make any infrastructure changes at all. So, now I need to go purchase an ISATAP router to make this happen? No, this is the best part. Your DirectAccess server is already an ISATAP router; you simply need to point those internal machines at it. Creating a selective ISATAP environment All of the Windows operating systems over the past few years have ISATAP client functionality built right in. This has been the case since Vista, I believe, but I have yet to encounter anyone using Vista in a corporate environment, so for the sake of our discussion, we are generally talking about Windows 7, Windows 8, Server 2008, and Server 2012. For any of these operating systems, out of the box all you have to do is give it somewhere to resolve the name ISATAP, and it will go ahead and set itself up with a connection to that ISATAP router. So, if you wanted to immediately enable all of your internal machines that were ISATAP capable to suddenly be ISATAP connected, all you would have to do is create a single host record in DNS named ISATAP and point it at the internal IP address of your DirectAccess server. To get that to work properly, you would also have to tweak DNS so that ISATAP was no longer part of the global query block list, but I'm not even going to detail that process because my emphasis here is that you should not set up your environment this way. Unfortunately, some of the step-by-step guides that are available on the web for setting up DirectAccess include this step. Even more unfortunately, if you have ever worked with UAG DirectAccess, you'll remember on the IP address configuration screen that the GUI actually told you to go ahead and set ISATAP up this way. Please do not create a DNS host record named ISATAP! If you have already done so, please consider this article to be a guide on your way out of danger. The primary reason why you should stay away from doing this is because Windows prefers IPv6 over IPv4. Once a computer is setup with connection to an ISATAP router, it receives an IPv6 address which registers itself in DNS, and from that point onward whenever two ISATAP machines communicate with each other, they are using IPv6 over the ISATAP tunnel. This is potentially problematic for a couple of reasons. First, all ISATAP traffic default routes through the ISATAP router for which it is configured, so your DirectAccess server is now essentially the default gateway for all of these internal computers. This can cause performance problems and even network flooding. 
The second reason is that because these packets are now IPv6, even though they are wrapped up inside IPv4, the tools you have inside the network that you might be using to monitor internal traffic are not going to be able to see this traffic, at least not in the same capacity as it would do for normal IPv4 traffic. It is in your best interests that you do not follow this global approach for implementing ISATAP, and instead take the slightly longer road and create what I call a "Selective ISATAP environment", where you have complete control over which machines are connected to the ISATAP network, and which ones are not. Many DirectAccess installs don't require ISATAP at all. Remember, this is only used for those instances where you need true outbound reach to the DirectAccess clients. I recommend installing DirectAccess without ISATAP first, and test all of your management tools. If they work without ISATAP, great! If they don't, then you can create your selective ISATAP environment. To set ourselves up for success, we need to create a simple Active Directory security group, and a GPO. The combination of these things is going to allow us to decisively grant or deny access to the ISATAP routing features on the DirectAccess server. Creating a security group and DNS record First, let's create a new security group in Active Directory. This is just a normal group like any other, typically a global or universal, whichever you prefer. This group is going to contain the computer accounts of the internal computers to which we want to give that outbound reaching capability. So, typically the computer accounts that we will eventually add into this group are Helpdesk computers, SCCM servers, maybe a management "jump box" terminal server, that kind of thing. To keep things intuitive, let's name the group DirectAccess – ISATAP computers or something similar. We will also need a DNS host record created. For obvious reasons we don't want to call this ISATAP, but perhaps something such as Contoso_ISATAP, swapping your name in, of course. This is just a regular DNS A record, and you want to point it at the internal IPv4 address of the DirectAccess server. If you are running a clustered array of DirectAccess servers that are configured for load balancing, then you will need multiple DNS records. All of the records have the same name,Contoso_ISATAP, and you point them at each internal IP address being used by the cluster. So, one gets pointed at the internal Virtual IP (VIP), and one gets pointed at each of the internal Dedicated IPs (DIP). In a two-node cluster, you will have three DNS records for Contoso_ISATAP. Creating the GPO Now go ahead and follow these steps to create a new GPO that is going to contain the ISATAP connection settings: Create a new GPO, name it something such as DirectAccess – ISATAP settings. Place the GPO wherever it makes sense to you, keeping in mind that these settings should only apply to the ISATAP group that we created. One way to manage the distribution of these settings is a strategically placed link. Probably the better way to handle distribution of these settings is to reflect the same approach that the DirectAccess GPOs themselves take. This would mean linking this new GPO at a high level, like the top of the domain, and then using security filtering inside the GPO settings so that it only applies to the group that we just created. This way you ensure that in the end the only computers to receive these settings are the ones that you add to your ISATAP group. 
I typically take the Security Filtering approach, because it closely reflects what DirectAccess itself does with GPO filtering. So, create and link the GPO at a high level, and then inside the GPO properties, go ahead and add the group (and only the group, remove everything else) to the Security Filtering section, like what is shown in the following screenshot: Then move over to the Details tab and set the GPO Status to User configuration settings disabled. Configuring the GPO Now that we have a GPO which is being applied only to our special ISATAP group that we created, let's give it some settings to apply. What we are doing with this GPO is configuring those computers which we want to be ISATAP-connected with the ISATAP server address with which they need to communicate , which is the DNS name that we created for ISATAP. First, edit the GPO and set the ISATAP Router Name by configuring the following setting: Computer Configuration | Policies | Administrative Templates | Network | TCPIP Settings | IPv6 Transition Technologies | ISATAP Router Name = Enabled (and populate your DNS record). Second, in the same location within the GPO, we want to enable your ISATAP state with the following configuration: Computer Configuration | Policies | Administrative Templates | Network | TCPIP Settings | IPv6 Transition Technologies | ISATAP State = Enabled State. Adding machines to the group All of our settings are now squared away, time to test! Start by taking a single computer account of a computer inside your network from which you want to be able to reach out to DirectAccess client computers, and add the computer account to the group that we created. Perhaps pause for a few minutes to ensure Active Directory has a chance to replicate, and then simply restart that computer. The next time it boots, it'll grab those settings from the GPO, and reach out to the DirectAccess server acting as the ISATAP router, and have an ISATAP IPv6 connection. If you generate an ipconfig /all on that internal machine, you will see the ISATAP adapter and address now listed as shown in the following screenshot: And now if you try to ping the name of a client computer that is connected via DirectAccess, you will see that it now resolves to the IPv6 record. Perfect! But wait, why are my pings timing out? Let's explore that next.

article-image-writing-reddit-reader-rxphp
Packt
09 Jan 2017
9 min read
Save for later

Writing a Reddit Reader with RxPHP

Packt
09 Jan 2017
9 min read
In this article by Martin Sikora, author of the book, PHP Reactive Programming, we will cover writing a CLI Reddit reader app using RxPHP, and we will see how Disposables are used in the default classes that come with RxPHP, and how these are going to be useful for unsubscribing from Observables in our app. (For more resources related to this topic, see here.) Examining RxPHP's internals As we know, Disposables as a means for releasing resources used by Observers, Observables, Subjects, and so on. In practice, a Disposable is returned, for example, when subscribing to an Observable. Consider the following code from the default RxObservable::subscribe() method: function subscribe(ObserverI $observer, $scheduler = null) { $this->observers[] = $observer; $this->started = true; return new CallbackDisposable(function () use ($observer) { $this->removeObserver($observer); }); } This method first adds the Observer to the array of all subscribed Observers. It then marks this Observable as started and, at the end, it returns a new instance of the CallbackDisposable class, which takes a Closure as an argument and invokes it when it's disposed. This is probably the most common use case for Disposables. This Disposable just removes the Observer from the array of subscribers and therefore, it receives no more events emitted from this Observable. A closer look at subscribing to Observables It should be obvious that Observables need to work in such way that all subscribed Observables iterate. Then, also unsubscribing via a Disposable will need to remove one particular Observer from the array of all subscribed Observables. However, if we have a look at how most of the default Observables work, we find out that they always override the Observable::subscribe() method and usually completely omit the part where it should hold an array of subscribers. Instead, they just emit all available values to the subscribed Observer and finish with the onComplete() signal immediately after that. For example, we can have a look at the actual source code of the subscribe() method of the RxReturnObservable class: function subscribe(ObserverI $obs, SchedulerI $sched = null) { $value = $this->value; $scheduler = $scheduler ?: new ImmediateScheduler(); $disp = new CompositeDisposable(); $disp->add($scheduler->schedule(function() use ($obs, $val) { $obs->onNext($val); })); $disp->add($scheduler->schedule(function() use ($obs) { $obs->onCompleted(); })); return $disp; } The ReturnObservable class takes a single value in its constructor and emits this value to every Observer as they subscribe. The following is a nice example of how the lifecycle of an Observable might look: When an Observer subscribes, it checks whether a Scheduler was also passed as an argument. Usually, it's not, so it creates an instance of ImmediateScheduler. Then, an instance of CompositeDisposable is created, which is going to keep an array of all Disposables used by this method. When calling CompositeDisposable::dispose(), it iterates all disposables it contains and calls their respective dispose() methods. Right after that we start populating our CompositeDisposable with the following: $disposable->add($scheduler->schedule(function() { ... })); This is something we'll see very often. SchedulerInterface::schedule() returns a DisposableInterface, which is responsible for unsubscribing and releasing resources. 
In this case, when we're using ImmediateScheduler, which has no other logic, it just evaluates the Closure immediately:

function () use ($observer, $value) {
    $observer->onNext($value);
}

Since ImmediateScheduler::schedule() doesn't need to release any resources (it didn't use any), it just returns an instance of Rx\Disposable\EmptyDisposable that does literally nothing. Then the Disposable is returned, and could be used to unsubscribe from this Observable. However, as we saw in the preceding source code, this Observable doesn't let you unsubscribe, and if we think about it, it doesn't even make sense, because the ReturnObservable class's value is emitted immediately on subscription.

The same applies to other similar Observables, such as IteratorObservable, RangeObservable, or ArrayObservable. These just contain recursive calls with Schedulers, but the principle is the same.

A good question is, why on Earth is this so complicated? Everything the preceding code does could be stripped down to the following three lines (assuming we're not interested in using Schedulers):

function subscribe(ObserverInterface $observer) {
    $observer->onNext($this->value);
    $observer->onCompleted();
}

Well, for ReturnObservable this might be true, but in real applications, we very rarely use any of these primitive Observables. It's true that we usually don't even need to deal with Schedulers. However, the ability to unsubscribe from Observables or clean up any resources when unsubscribing is very important, and we'll use it in a few moments.
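Before we move on, here is a minimal, self-contained sketch of that subscribe/dispose contract in plain PHP. These are simplified stand-ins with made-up names, not the real RxPHP classes, but the mechanics mirror what Rx\Observable::subscribe() does: the Observable-like class keeps an array of subscribers, and the Disposable it returns does nothing but remove one of them again.

// A simplified illustration of the subscribe/dispose contract (not RxPHP code).
class SimpleDisposable
{
    private $onDispose;

    public function __construct(callable $onDispose)
    {
        $this->onDispose = $onDispose;
    }

    public function dispose()
    {
        call_user_func($this->onDispose);
    }
}

class SimpleObservable
{
    private $observers = [];

    public function subscribe(callable $observer)
    {
        $this->observers[] = $observer;

        // The returned Disposable's only job is to remove this Observer again.
        return new SimpleDisposable(function () use ($observer) {
            $key = array_search($observer, $this->observers, true);
            if ($key !== false) {
                unset($this->observers[$key]);
            }
        });
    }

    public function emit($value)
    {
        foreach ($this->observers as $observer) {
            $observer($value);
        }
    }
}

$observable = new SimpleObservable();
$disposable = $observable->subscribe(function ($value) {
    echo "Received: $value\n";
});

$observable->emit(1);   // prints "Received: 1"
$disposable->dispose(); // unsubscribes the Observer
$observable->emit(2);   // prints nothing

This is exactly the behavior we will rely on later when our Reddit reader needs to stop listening to an Observable that is still producing values.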
A closer look at Operator chains

Before we start writing our Reddit reader, we should talk briefly about an interesting situation that might occur, so it doesn't catch us unprepared later. We're also going to introduce a new type of Observable, called ConnectableObservable. Consider this simple Operator chain with two subscribers:

// rxphp_filters_observables.php
use Rx\Observable\RangeObservable;
use Rx\Observable\ConnectableObservable;

$connObservable = new ConnectableObservable(new RangeObservable(0, 6));

$filteredObservable = $connObservable
    ->map(function($val) {
        return $val ** 2;
    })
    ->filter(function($val) {
        return $val % 2;
    });

$disposable1 = $filteredObservable->subscribeCallback(function($val) {
    echo "S1: ${val}\n";
});
$disposable2 = $filteredObservable->subscribeCallback(function($val) {
    echo "S2: ${val}\n";
});

$connObservable->connect();

The ConnectableObservable class is a special type of Observable that behaves similarly to Subject (in fact, internally, it really uses an instance of the Subject class). Any other Observable emits all available values right after you subscribe to it. However, ConnectableObservable takes another Observable as an argument and lets you subscribe Observers to it without emitting anything. When you call ConnectableObservable::connect(), it connects Observers with the source Observable, and all values go one by one to all subscribers.

Internally, it contains an instance of the Subject class, and when we called subscribe(), it just subscribed this Observer to its internal Subject. Then, when we called the connect() method, it subscribed the internal Subject to the source Observable.

In the $filteredObservable variable, we keep a reference to the last Observable returned from the filter() call, which is an instance of AnonymousObservable where, on the next few lines, we subscribe both Observers. Now, let's see what this Operator chain prints:

$ php rxphp_filters_observables.php
S1: 1
S2: 1
S1: 9
S2: 9
S1: 25
S2: 25

As we can see, each value went through both Observers in the order they were emitted.

Just out of curiosity, we can also have a look at what would happen if we didn't use ConnectableObservable, and used just the RangeObservable instead:

$ php rxphp_filters_observables.php
S1: 1
S1: 9
S1: 25
S2: 1
S2: 9
S2: 25

This time, RangeObservable emitted all values to the first Observer and then, again, all values to the second Observer. Right now, we can tell that the Observable had to generate all the values twice, which is inefficient, and with a large dataset, this might cause a performance bottleneck.

Let's go back to the first example with ConnectableObservable, and modify the filter() call so it prints all the values that go through:

$filteredObservable = $connObservable
    ->map(function($val) {
        return $val ** 2;
    })
    ->filter(function($val) {
        echo "Filter: $val\n";
        return $val % 2;
    });

Now we run the code again and see what happens:

$ php rxphp_filters_observables.php
Filter: 0
Filter: 0
Filter: 1
S1: 1
Filter: 1
S2: 1
Filter: 4
Filter: 4
Filter: 9
S1: 9
Filter: 9
S2: 9
Filter: 16
Filter: 16
Filter: 25
S1: 25
Filter: 25
S2: 25

Well, this is unexpected! Each value is printed twice. This doesn't mean that the Observable had to generate all the values twice, however. It's not obvious at first sight what happened, but the problem is that we subscribed to the Observable at the end of the Operator chain.

As stated previously, $filteredObservable is an instance of AnonymousObservable that holds many nested Closures. By calling its subscribe() method, it runs a Closure that's created by its predecessor, and so on. This leads to the fact that every call to subscribe() has to invoke the entire chain. While this might not be an issue in many use cases, there are situations where we might want to do some special operation inside one of the filters. Also, note that calls to the subscribe() method might be out of our control, performed by another developer who wanted to use an Observable we created for them. It's good to know that such a situation might occur and could lead to unwanted behavior.

It's sometimes hard to see what's going on inside Observables. It's very easy to get lost, especially when we have to deal with multiple Closures. Schedulers are prime examples. Feel free to experiment with the examples shown here and use a debugger to examine step by step what code gets executed and in what order.

So, let's figure out how to fix this. We don't want to subscribe at the end of the chain multiple times, so we can create an instance of the Subject class, where we'll subscribe both Observers, and the Subject class itself will subscribe to the AnonymousObservable as discussed a moment ago:

// ...
use Rx\Subject\Subject;

$subject = new Subject();
$connObservable = new ConnectableObservable(new RangeObservable(0, 6));

$filteredObservable = $connObservable
    ->map(function($val) {
        return $val ** 2;
    })
    ->filter(function($val) {
        echo "Filter: $val\n";
        return $val % 2;
    })
    ->subscribe($subject);

$disposable1 = $subject->subscribeCallback(function($val) {
    echo "S1: ${val}\n";
});
$disposable2 = $subject->subscribeCallback(function($val) {
    echo "S2: ${val}\n";
});

$connObservable->connect();

Now we can run the script again and see that it does what we wanted it to do:

$ php rxphp_filters_observables.php
Filter: 0
Filter: 1
S1: 1
S2: 1
Filter: 4
Filter: 9
S1: 9
S2: 9
Filter: 16
Filter: 25
S1: 25
S2: 25

This might look like an edge case, but soon we'll see that this issue, left unhandled, could lead to some very unpredictable behavior.
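Since both subscriptions in this last example hand back Disposables, this is also a natural place for the CompositeDisposable class we met earlier. A short sketch (assuming it lives in the Rx\Disposable namespace, like the other Disposable classes mentioned in this article; add() and dispose() behave as described above):

use Rx\Disposable\CompositeDisposable;

// Group both subscriptions so they can be cancelled together.
$subscriptions = new CompositeDisposable();
$subscriptions->add($disposable1);
$subscriptions->add($disposable2);

// A single call later removes both Observers from the Subject at once.
$subscriptions->dispose();

Grouping related subscriptions like this will come in handy once our Reddit reader has several Observers that all need to be torn down at the same time.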
We'll bring out both these issues (proper usage of Disposables and Operator chains) when we start writing our Reddit reader. Summary In this article, we looked in more depth at how to use Disposables and Operators, how these work internally, and what it means for us. We also looked at a couple of new classes from RxPHP, such as ConnectableObservable, and CompositeDisposable. Resources for Article: Further resources on this subject: Working with JSON in PHP jQuery [article] Working with Simple Associations using CakePHP [article] Searching Data using phpMyAdmin and MySQL [article]

Quick start – using Haml

Packt
24 Sep 2013
8 min read
(For more resources related to this topic, see here.)

Step 1 – integrating with Rails and creating a simple view file

Let's create a simple Rails application and change one of the view files from ERB to Haml:

Create a new Rails application named blog:

rails new blog

Add the Haml gem to this application's Gemfile and install it:

bundle install

When the gem has been added, generate some views that we can convert to Haml and learn about its basic features. Run the Rails scaffold generator to create a scaffold named post:

rails g scaffold post

After this, you need to run the migrations to create the database tables:

rake db:migrate

You should get an output as shown in the following screenshot:

Our application is not yet generating Haml views automatically. We will switch it to this mode in the next steps. The index.html.erb file that has been generated and is located in app/views/posts/index.html.erb looks as follows:

<h1>Listing posts</h1>
<table>
  <tr>
    <th></th>
    <th></th>
    <th></th>
  </tr>
<% @posts.each do |post| %>
  <tr>
    <td><%= link_to 'Show', post %></td>
    <td><%= link_to 'Edit', edit_post_path(post) %></td>
    <td><%= link_to 'Destroy', post, method: :delete, data: { confirm: 'Are you sure?' } %></td>
  </tr>
<% end %>
</table>
<br>
<%= link_to 'New Post', new_post_path %>

Let's convert it to an Haml view step by step. First, let's understand the basic features of Haml:

Any HTML tags are written with a percent sign and then the name of the tag
Whitespace (tabs) is used to create nested tags
Any part of an Haml line that is not interpreted as something else is taken to be plain text
Closing tags as well as end statements are omitted for Ruby blocks

Knowing the previous features, we can write the first lines of our example view in Haml. Open the index.html.erb file in an editor and replace <h1>, <table>, <th>, and <tr> as follows:

<h1>Listing posts</h1> can be written as %h1 Listing posts
<table> can be written as %table
<tr> becomes %tr
<th> becomes %th

After those first replacements, our view file should look like this:

%h1 Listing posts
%table
  %tr
    %th
    %th
    %th
<% @posts.each do |post| %>
  <tr>
    <td><%= link_to 'Show', post %></td>
    <td><%= link_to 'Edit', edit_post_path(post) %></td>
    <td><%= link_to 'Destroy', post, method: :delete, data: { confirm: 'Are you sure?' } %></td>
  </tr>
<% end %>
<br>
<%= link_to 'New Post', new_post_path %>

Please notice how %tr is nested within the %table tag using a tab, and also how %th is nested within %tr using a tab. Next, let's convert the Ruby parts of this view. Ruby is evaluated and its output is inserted into the view when using the equals character. In ERB we had to use <%=, whereas in Haml, this is shortened to just a =. The following examples illustrate this:

<%= link_to 'Show', post %> becomes = link_to 'Show', post, and all the other <%= parts are changed accordingly
The equals sign can be used at the end of the tag to insert the Ruby code within that tag
Empty (void) tags, such as <br>, are created by adding a forward slash at the end of the tag

Please note that you have to leave a space after the equals sign. After the changes are incorporated, our view file will look like this:

%h1 Listing posts
%table
  %tr
    %th
    %th
    %th
<% @posts.each do |post| %>
  %tr
    %td= link_to 'Show', post
    %td= link_to 'Edit', edit_post_path(post)
    %td= link_to 'Destroy', post, method: :delete, data: { confirm: 'Are you sure?' }
<% end %>
%br/
= link_to 'New Post', new_post_path

The only thing left to do now is to convert the Ruby block part: <% @posts.each do |post| %>.
Code that needs to be run, but does not generate any output, is written using a hyphen character. Here is how this conversion works:

Ruby blocks do not need to be closed; they end when the indentation decreases
HTML tags and the Ruby code that is nested within the block are indented by one tab more than the block
<% @posts.each do |post| %> becomes - @posts.each do |post|
Remember to leave a space after the hyphen character

After we replace the remaining part in our view file according to the previous rules, it should look as follows:

%h1 Listing posts
%table
  %tr
    %th
    %th
    %th
- @posts.each do |post|
  %tr
    %td= link_to 'Show', post
    %td= link_to 'Edit', edit_post_path(post)
    %td= link_to 'Destroy', post, method: :delete, data: { confirm: 'Are you sure?' }
%br/
= link_to 'New Post', new_post_path

Save the view file and change its name to index.html.haml. This is now an Haml-based template. Start our example Rails application and visit http://localhost:3000/posts to see the view being rendered by Rails, as shown in the following screenshot:

Step 2 – switching Rails application to use Haml as the templating engine

In the previous step, we enabled Haml in the test application. However, if you generate new view files using any of the Rails built-in generators, it will still use ERB. Let's switch the application to use Haml as the templating engine. Edit the blog application Gemfile and add a gem named haml-rails to it. You can add it to the :development group because the generators are only used during development and this functionality is not needed in production or test environments. Our application Gemfile now looks as shown in the following code:

source 'https://rubygems.org'

gem 'rails', '3.2.13'
gem 'sqlite3'
gem 'haml'
gem 'haml-rails', :group => :development

group :assets do
  gem 'sass-rails', '~> 3.2.3'
  gem 'coffee-rails', '~> 3.2.1'
  gem 'uglifier', '>= 1.0.3'
end

gem 'jquery-rails'

Then run the following bundle command to install the gem:

bundle install

Let's say the posts in our application need to have categories. Run the scaffold generator to create some views for categories. This generator will create views using Haml, as shown in the following screenshot:

Please note that the new views have a .html.haml extension and are using Haml. For example, the _form.html.haml view for the form looks as follows:

= form_for @category do |f|
  - if @category.errors.any?
    #error_explanation
      %h2= "#{pluralize(@category.errors.count, "error")} prohibited this category from being saved:"
      %ul
        - @category.errors.full_messages.each do |msg|
          %li= msg
  .actions
    = f.submit 'Save'

There are two very useful shorthand notations for creating a <div> tag with a class or a <div> tag with an ID. To create a div with an ID, use the hash symbol followed by the name of the ID. For example, #error_explanation will result in <div id="error_explanation">. To create a <div> tag with a class attribute, use a dot followed by the name of the class. For example, .actions will create <div class="actions">.
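These two shorthands can also be combined, and attached to an explicit tag name. A quick generic illustration (not taken from the generated scaffold):

%div#error_explanation.alert

renders a <div> with the ID error_explanation and the class alert, and because <div> is the default tag, the same element can be written even shorter as:

#error_explanation.alert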
Step 3 – converting existing view templates to Haml

Our example blog app still has some leftover templates which are using ERB, as well as an application.html.erb layout file. We would like to convert those to Haml. There is no need to do it all individually, because there is a handy gem which will automatically convert all the ERB files to Haml, shown as follows:

Let's install the html2haml gem:

gem install html2haml

Using the cd command, change the current working directory to the app directory of our example application and run the following bash command to convert all the ERB files to Haml (to run this command, you need a bash shell; on Windows, you can use the embedded bash shell which ships with GitHub for Windows, Cygwin bash, MinGW bash, or the MSYS bash shell which is bundled with Git for Windows):

for file in $(find . -type f -name \*.html.erb); do html2haml -e ${file} "$(dirname ${file})/$(basename ${file} .erb).haml"; done

Then, to remove the ERB files, run this command:

find ./ -name '*erb' | while read line; do rm $line; done

Those two Bash snippets will first convert all the ERB files recursively in the app directory of our application and then remove the remaining ERB view templates.

Summary

This article covered integrating with Rails and creating a simple view file, switching the Rails application to use Haml as the templating engine, and converting existing view templates to Haml.

Resources for Article:

Further resources on this subject:
Building HTML5 Pages from Scratch [Article]
URL Shorteners – Designing the TinyURL Clone with Ruby [Article]
Building tiny Web-applications in Ruby using Sinatra [Article]

Angular 2 Components: What You Need to Know

David Meza
08 Jun 2016
10 min read
From the 7th to the 13th of November you can save up to 80% on some of our very best Angular content - along with our hottest React eBooks and video courses. If you're curious about the cutting-edge of modern web development we think you should click here and invest in your skills... The Angular team introduced quite a few changes in version 2 of the framework, and components are one of the important ones. If you are familiar with Angular 1 applications, components are actually a form of directives that are extended with template-oriented features. In addition, components are optimized for better performance and simpler configuration than a directive as Angular doesn’t support all its features. Also, while a component is technically a directive, it is so distinctive and central to Angular 2 applications that you’ll find that it is often separated as a different ingredient for the architecture of an application. So, what is a component? In simple words, a component is a building block of an application that controls a part of your screen real estate or your “view”. It does one thing, and it does it well. For example, you may have a component to display a list of active chats in a messaging app (which, in turn, may have child components to display the details of the chat or the actual conversation). Or you may have an input field that uses Angular’s two-way data binding to keep your markup in sync with your JavaScript code. Or, at the most elementary level, you may have a component that substitutes an HTML template with no special functionality just because you wanted to break down something complex into smaller, more manageable parts. Now, I don’t believe too much in learning something by only reading about it, so let’s get your hands dirty and write your own component to see some sample usage. I will assume that you already have Typescript installed and have done the initial configuration required for any Angular 2 app. If you haven’t, you can check out how to do so by clicking on this link. You may have already seen a component at its most basic level: import {Component} from 'angular2/core'; @Component({ selector: 'my-app', template: '<h1>{{ title }}</h1>' }) export class AppComponent { title = 'Hello World!'; } That’s it! That’s all you really need to have a component. Three things are happening here: You are importing the Component class from the Angular 2 core package. You are using a Typescript decorator to attach some metadata to your AppComponent class. If you don’t know what a decorator is, it is simply a function that extends your class with Angular code so that it becomes an Angular component. Otherwise, it would just be a plain class with no relation to the Angular framework. In the options, you defined a selector, which is the tag name used in the HTML code so that Angular can find where to insert your component, and a template, which is applied to the inner contents of the selector tag. You may notice that we also used interpolation to bind the component data and display the value of the public variable in the template. You are exporting your AppComponent class so that you can import it elsewhere (in this case, you would import it in your main script so that you can bootstrap your application). That’s a good start, but let’s get into a more complex example that showcases other powerful features of Angular and Typescript/ES2015. In the following example, I've decided to stuff everything into one component. 
However, if you'd like to stick to best practices and divide the code into different components and services or if you get lost at any point, you can check out the finished/refactored example here. Without any further ado, let’s make a quick page that displays a list of products. Let’s start with the index: <html> <head> <title>Products</title> <meta name="viewport" content="width=device-width, initial-scale=1"> <script src="node_modules/es6-shim/es6-shim.min.js"></script> <script src="node_modules/systemjs/dist/system-polyfills.js"></script> <script src="node_modules/angular2/bundles/angular2-polyfills.js"></script> <script src="node_modules/systemjs/dist/system.src.js"></script> <script src="node_modules/rxjs/bundles/Rx.js"></script> <script src="node_modules/angular2/bundles/angular2.dev.js"></script> <link rel="stylesheet" href="styles.css"> <script> System.config({ packages: { app: { format: 'register', defaultExtension: 'js' } } }); System.import('app/main') .then(null, console.error.bind(console)); </script> </head> <body> <my-app>Loading...</my-app> </body> </html> There’s nothing out of the ordinary going on here. You are just importing all of the necessary scripts for your application to work as demonstrated in the quick-start. The app/main.ts file should already look somewhat similar to this: import {bootstrap} from ‘angular2/platform/browser’ import {AppComponent} from ‘./app.component’ bootstrap(AppComponent); Here, we imported the bootstrap function from the Angular 2 package and an AppComponent class from the local directory. Then, we initialized the application. First, create a product class that defines the constructor and type definition of any products made. Then, create app/product.ts, as follows: export class Product { id: number; price: number; name: string; } Next, you will create an app.component.ts file, which is where the magic happens. I've decided to stuff everything in here for demonstration purposes, but ideally, you would want to extract the products array into its own service, the HTML template into its own file, and the product details into its own component. 
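The refactored version shown at the end of this post injects a ProductService whose getProducts() method returns a Promise; the finished example linked above contains the real implementation. Purely as an illustration, a minimal sketch of such a service might look like the following (the names and the hardcoded mock data mirror this article; treat it as an assumption, not the official sample code):

import {Injectable} from 'angular2/core';
import {Product} from './product';

const PRODUCTS: Product[] = [
  { "id": 1, "price": 45.12, "name": "TV Stand" },
  { "id": 2, "price": 25.12, "name": "BBQ Grill" }
  // ...the rest of the mock products from the component below
];

@Injectable()
export class ProductService {
  // Wrapping the mock array in a Promise mimics an asynchronous backend call,
  // which is why the component later consumes it with .then(...).
  getProducts(): Promise<Product[]> {
    return Promise.resolve(PRODUCTS);
  }
}

With that aside, let's look at the single, all-in-one component.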
This is how the component will look: import {Component} from 'angular2/core'; import {Product} from './product' @Component({ selector: 'my-app', template: ` <h1>{{title}}</h1> <ul class="products"> <li *ngFor="#product of products" [class.selected]="product === selectedProduct" (click)="onSelect(product)"> <span class="badge">{{product.id}}</span> {{product.name}} </li> </ul> <div *ngIf="selectedProduct"> <h2>{{selectedProduct.name}} details!</h2> <div><label>id: </label>{{selectedProduct.id}}</div> <div><label>Price: </label>{{selectedProduct.price | currency: 'USD': true }}</div> <div> <label>name: </label> <input [(ngModel)]="selectedProduct.name" placeholder="name"/> </div> </div> `, styleUrls: ['app/app.component.css'] }) export class AppComponent { title = 'My Products'; products = PRODUCTS; selectedProduct: Product; onSelect(product: Product) { this.selectedProduct = product; } } const PRODUCTS: Product[] = [ { "id": 1, "price": 45.12, "name": "TV Stand" }, { "id": 2, "price": 25.12, "name": "BBQ Grill" }, { "id": 3, "price": 43.12, "name": "Magic Carpet" }, { "id": 4, "price": 12.12, "name": "Instant liquidifier" }, { "id": 5, "price": 9.12, "name": "Box of puppies" }, { "id": 6, "price": 7.34, "name": "Laptop Desk" }, { "id": 7, "price": 5.34, "name": "Water Heater" }, { "id": 8, "price": 4.34, "name": "Smart Microwave" }, { "id": 9, "price": 93.34, "name": "Circus Elephant" }, { "id": 10, "price": 87.34, "name": "Tinted Window" } ]; The app/app.component.css file will look something similar to this: .selected { background-color: #CFD8DC !important; color: white; } .products { margin: 0 0 2em 0; list-style-type: none; padding: 0; width: 15em; } .products li { position: relative; min-height: 2em; cursor: pointer; position: relative; left: 0; background-color: #EEE; margin: .5em; padding: .3em 0; border-radius: 4px; font-size: 16px; overflow: hidden; white-space: nowrap; text-overflow: ellipsis; color: #3F51B5; display: block; width: 100%; -webkit-transition: all 0.3s ease; -moz-transition: all 0.3s ease; -o-transition: all 0.3s ease; -ms-transition: all 0.3s ease; transition: all 0.3s ease; } .products li.selected:hover { background-color: #BBD8DC !important; color: white; } .products li:hover { color: #607D8B; background-color: #DDD; left: .1em; color: #3F51B5; text-decoration: none; font-size: 1.2em; background-color: rgba(0,0,0,0.01); } .products .text { position: relative; top: -3px; } .products .badge { display: inline-block; font-size: small; color: white; padding: 0.8em 0.7em 0 0.7em; background-color: #607D8B; line-height: 1em; position: relative; left: -1px; top: 0; height: 2em; margin-right: .8em; border-radius: 4px 0 0 4px; } I'll explain what is happening: We imported from Component so that we can decorate your new component and imported Product so that we can create an array of products and have access to Typescript type infererences. We decorated our component with a “my-app” selector property, which finds <my-app></my-app> tags and inserts our component there. I decided to define the template in this file instead of using a URL so that I can demonstrate how handy the ES2015 template string syntax is (no more long strings or plus-separated strings). Finally, the styleUrls property uses an absolute file path, and any styles applied will only affect the template in this scope. The actual component only has a few properties outside of the decorator configuration. 
It has a title that you can bind to the template, a products array that will iterate in the markup, a selectedProduct variable that is a scope variable that will initialize as undefined and an onSelect method that will be run every time you click on a list item. Finally, define a constant (const because I've hardcoded it in and it won't change in runtime) PRODUCTS array to mock an object that is usually returned by a service after an external request. Also worth noting are the following: As you are using Typescript, you can make inferences about what type of data your variables will hold. For example, you may have noticed that I defined the Product type whenever I knew that this the only kind of object I want to allow for this variable or to be passed to a function. Angular 2 has different property prefixes, and if you would like to learn when to use each one, you can check out this Stack Overflow question. That's it! You now have a bit more complex component that has a particular functionality. As I previously mentioned, this could be refactored, and that would look something similar to this: import {Component, OnInit} from 'angular2/core'; import {Product} from './product'; import {ProductDetailComponent} from './product-detail.component'; import {ProductService} from './product.service'; @Component({ selector: 'my-app', templateUrl: 'app/app.component.html', styleUrls: ['app/app.component.css'], directives: [ProductDetailComponent], providers: [ProductService] }) export class AppComponent implements OnInit { title = 'Products'; products: Product[]; selectedProduct: Product; constructor(private _productService: ProductService) { } getProducts() { this._productService.getProducts().then(products => this.products = products); } ngOnInit() { this.getProducts(); } onSelect(product: Product) { this.selectedProduct = product; } } In this example, you get your product data from a service and separate the product detail template into a child component, which is much more modular. I hope you've enjoyed reading this post. About this author David Meza is an AngularJS developer at the City of Raleigh. He is passionate about software engineering and learning new programming languages and frameworks. He is most familiar working with Ruby, Rails, and PostgreSQL in the backend and HTML5, CSS3, JavaScript, and AngularJS in the frontend. He can be found at here.

Debugging

Packt
14 Jan 2016
11 min read
In this article by Brett Ohland and Jayant Varma, the authors of Xcode 7 Essentials (Second Edition), we learn about the debugging process, as the Grace Hopper had a remarkable career. She not only became a Rear Admiral in the U.S. Navy but also contributed to the field of computer science in many important ways during her lifetime. She was one of the first programmers on an early computer called Mark I during World War II. She invented the first compiler and was the head of a group that created the FLOW-MATIC language, which would later be extended to create the COBOL programming language. How does this relate to debugging? Well, in 1947, she was leading a team of engineers who were creating the successor of the Mark I computer (called Mark II) when the computer stopped working correctly. The culprit turned out to be a moth that had managed to fly into, and block the operation of, an important relay inside of the computer; she remarked that they had successfully debugged the system and went on to popularize the term. The moth and the log book page that it is attached to are on display in The Smithsonian Museum of American History in Washington D.C. While physical bugs in your systems are an astronomically rare occurrence, software bugs are extremely common. No developer sets out to write a piece of code that doesn't act as expected and crash, but bugs are inevitable. Luckily, Xcode has an assortment of tools and utilities to help you become a great detective and exterminator. In this article, we will cover the following topics: Breakpoints The LLDB console Debugging the view hierarchy Tooltips and a Quick Look (For more resources related to this topic, see here.) Breakpoints The typical development cycle of writing code, compiling, and then running your app on a device or in the simulator doesn't give you much insight into the internal state of the program. Clicking the stop button will, as the name suggests, stop the program and remove it from the memory. If your app crashes, or if you notice that a string is formatted incorrectly or the wrong photo is loaded into a view, you have no idea what code caused the issue. Surely, you could use the print statement in your code to output values on the console, but as your application involves more and more moving parts, this becomes unmanageable. A better option is to create breakpoints. A breakpoint is simply a point in your source code at which the execution of your program will pause and put your IDE into a debugging mode. While in this mode, you can inspect any objects that are in the memory, see a trace of the program's execution flow, and even step the program into or over instructions in your source code. Creating a breakpoint is simple. In the standard editor, there is a gutter that is shown to the immediate left of your code. Most often, there are line numbers in this area, and clicking on a line number will add a blue arrow to that line. That is a breakpoint. If you don't see the line numbers in your gutter, simply open Xcode's settings by going to Xcode | Settings (or pressing the Cmd + , keyboard shortcut) and toggling the line numbers option in the editor section. Clicking on the breakpoint again will dim the arrow and disable the breakpoint. You can remove the breakpoint completely by clicking, dragging, and releasing the arrow indicator well outside of the gutter. A dust ball animation will let you know that it has been removed. You can also delete breakpoints by right-clicking on the arrow indicator and selecting Delete. 
The Standard Editor showing a breakpoint on line 14 Listing breakpoints You can see a list of all active or inactive breakpoints in your app by opening the breakpoint navigator in the sidebar to the left (Cmd + 7). The list will show the file and line number of each breakpoint. Selecting any of them will quickly take the source editor to that location. Using the + icon at the bottom of the sidebar will let you set many more types of advanced breakpoints, such as the Exception, Symbolic, Open GL Errors, and Test Failure breakpoints. More information about these types can be found on Apple's Developer site at https://developer.apple.com. The debug area When Xcode reaches a breakpoint, its debugging mode will become active. The debug navigator will appear in the sidebar on the left, and if you've printed any information onto the console, the debug area will automatically appear below the editor area: The buttons in the top bar of the debug area are as follows (from left to right): Hide/Show debug area: Toggles the visibility of the debug area Activate/Deactivate breakpoints: This icon activates or deactivates all breakpoints Step Over: Execute the current line or function and pause at the next line Step Into: This executes the current line and jumps to any functions that are called Step Out: This exits the current function and places you at the line of code that called the current function Debug view hierarchy: This shows you a stacked representation of the view hierarchy Location: Since the simulator doesn't have GPS, you can set its location here (Apple HQ, London, and City Bike Ride are some of them) Stack Frame selector: This lets you choose a frame in which the current breakpoint is running The main window of the debug area can be split into two panes: the variables view and the LLDB console. The variables view The variables view shows all the variables and constants that are in the memory and within the current scope of your code. If the instance is a value type, it will show the value, and if it's an object type, you'll see a memory address, as shown in the following screenshot: This view shows all the variables and constants that are in the memory and within the current scope of your code. If the instance is a value type, it will show the value, and if it's an object type, you'll see a memory address. For collection types (arrays and dictionaries), you have the ability to drill down to the contents of the collection by toggling the arrow indicator on the left-hand side of each instance. In the bottom toolbar, there are three sections: Scope selector: This let's you toggle between showing the current scope, showing the global scope, or setting it to auto to select for you Information icons: The Quick Look icon will show quick look information in a popup, while the print information button will print the instance's description on the console Filter area: You can filter the list of values using this standard filter input box The console area The console area is the area where the system will place all system-generated messages as well as any messages that are printed using the print or NSLog statements. These messages will be displayed only while the application is running. While you are in debug mode, the console area becomes an interactive console in the LLDB debugger. This interactive mode lets you print the values of instances in memory, run debugger commands, inspect code, evaluate code, step through, and even skip code. 
As Xcode has matured over the years, more and more of the advanced information available for you in the console has become accessible in the GUI. Two important and useful commands for use within the console are p and po: po: Prints the description of the object p: Prints the value of an object Depending on the type of variable or constant, p and po may give different information. As an example, let's take a UITableViewCell that was created in our showcase app, now place a breakpoint in the tableView:cellForRowAtIndexPath method in the ExamleTableView class: (lldb) p cell (UITableViewCell) $R0 = 0x7a8f0000 {   UIView = {     UIResponder = {       NSObject = {         isa = 0x7a8f0000       }       _hasAlternateNextResponder = ' '       _hasInputAssistantItem = ' '     }     ... removed 100 lines  (lldb) po cell <UITableViewCell: 0x7a8f0000; frame = (0 0; 320 44); text = 'Helium'; clipsToBounds = YES; autoresize = W; layer = <CALayer: 0x7a6a02c0>> The p has printed out a lot of detail about the object while po has displayed a curated list of information. It's good to know and use each of these commands; each one displays different information. The debug navigator To open the debug navigator, you can click on its icon in the navigator sidebar or use the Cmd + 6 keyboard shortcut. This navigator will show data only while you are in debug mode: The navigator has several groups of information: The gauges area: At a glance, this shows information about how your application is using the CPU, Memory, Disk, Network, and iCloud (only when your app is using it) resources. Clicking on each gauge will show more details in the standard editor area. The processes area: This lists all currently active threads and their processes. This is important information for advanced debugging. The information in the gauges area used to be accessible only if you ran your application in and attached the running process to a separate instruments application. Because of the extra step in running our app in this separate application, Apple started including this information inside of Xcode starting from Xcode 6. This information is invaluable for spotting issues in your application, such as memory leaks, or spotting inefficient code by watching the CPU usage. The preceding screenshot shows the Memory Report screen. Because this is running in the simulator, the amount of RAM available for your application is the amount available on your system. The gauge on the left-hand side shows the current usage as a percentage of the available RAM. The Usage Comparison area shows how much RAM your application is using compared to other processes and the available free space. Finally, the bottom area shows a graph of memory usage over time. Each of these pieces of information is useful for the debugging of your application for crashes and performance. Quick Look There, we were able to see from within the standard editor the contents of variables and constants. While Xcode is in debug mode, we have this very ability. Simply hover the mouse over most instance variables and a Quick Look popup will appear to show you a graphical representation, like this: Quick Look knows how to handle built-in classes, such as CGRect or UIImage, but what about custom classes that you create? Let's say that you create a class representation of an Employee object. What information would be the best way to visualize an instance? You could show the employee's photo, their name, their employee number, and so on. 
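One more note on the po command before moving on: for your own Swift types, you can control what po prints by conforming to the CustomDebugStringConvertible protocol. This is standard Swift behavior rather than something specific to this article's example, and the Employee type below is purely illustrative:

class Employee: CustomDebugStringConvertible {
    let name: String
    let employeeNumber: Int

    init(name: String, employeeNumber: Int) {
        self.name = name
        self.employeeNumber = employeeNumber
    }

    // The po command will print this string for instances of Employee.
    var debugDescription: String {
        return "Employee #\(employeeNumber): \(name)"
    }
}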
Since the system can't judge what information is important, it will rely on the developer to decide. The debugQuickLookObject() method can be added to any class, and any object or value that is returned will show up within the Quick Look popup: func debugQuickLookObject() -> AnyObject {     return "This is the custom preview text" } Suppose we were to add that code to the ViewController class and put a breakpoint on the super.viewDidLoad() line: import UIKit class ViewController: UIViewController {       override func viewDidLoad() {         super.viewDidLoad()         // Do any additional setup after loading the view, typically from a nib.     }       override func didReceiveMemoryWarning() {         super.didReceiveMemoryWarning()         // Dispose of any resources that can be recreated.     }       func debugQuickLookObject() -> AnyObject {         return "I am a quick look value"     }   } Now, in the variables list in the debug area, you can click on the Quick Look icon, and the following screen will appear: Summary Debugging is an art as well as a science. Because of this, it easily can—and does—take entire books to cover the subject. The best way to learn techniques and tools is by experimenting on your own code. Resources for Article:   Further resources on this subject: Introduction to GameMaker: Studio [article] Exploring Swift [article] Using Protocols and Protocol Extensions [article]

Audio Processing and Generation in Max/MSP

Packt
25 Nov 2014
19 min read
In this article, by Patrik Lechner, the author of Multimedia Programming Using Max/MSP and TouchDesigner, focuses on the audio-specific examples. We will take a look at the following audio processing and generation techniques: Additive synthesis Subtractive synthesis Sampling Wave shaping Nearly every example provided here might be understood very intuitively or taken apart in hours of math and calculation. It's up to you how deep you want to go, but in order to develop some intuition; we'll have to be using some amount of Digital Signal Processing (DSP) theory. We will briefly cover the DSP theory, but it is highly recommended that you study its fundamentals deeper to clearly understand this scientific topic in case you are not familiar with it already. (For more resources related to this topic, see here.) Basic audio principles We already saw and stated that it's important to know, see, and hear what's happening along a signal way. If we work in the realm of audio, there are four most important ways to measure a signal, which are conceptually partly very different and offer a very broad perspective on audio signals if we always have all of them in the back of our heads. These are the following important ways: Numbers (actual sample values) Levels (such as RMS, LUFS, and dB FS) Transversal waves (waveform displays, so oscilloscopes) Spectra (an analysis of frequency components) There are many more ways to think about audio or signals in general, but these are the most common and important ones. Let's use them inside Max right away to observe their different behavior. We'll feed some very basic signals into them: DC offset, a sinusoid, and noise. The one that might surprise you the most and get you thinking is the constant signal or DC offset (if it's digital-analog converted). In the following screenshot, you can see how the different displays react: In general, one might think, we don't want any constant signals at all; we don't want any DC offset. However, we will use audio signals a lot to control things later, say, an LFO or sequencers that should run with great timing accuracy. Also, sometimes, we just add a DC offset to our audio streams by accident. You can see in the preceding screenshot, that a very slowly moving or constant signal can be observed best by looking at its value directly, for example, using the [number~] object. In a level display, the [meter~] or [levelmeter~] objects will seem to imply that the incoming signal is very loud, in fact, it should be at -6 dB Full Scale (FS). As it is very loud, we just can't hear anything since the frequency is infinitely low. This is reflected by the spectrum display too; we see a very low frequency at -6 dB. In theory, we should just see an infinitely thin spike at 0 Hz, so everything else can be considered an (inevitable but reducible) measuring error. Audio synthesis Awareness of these possibilities of viewing a signal and their constraints, and knowing how they actually work, will greatly increase our productivity. So let's get to actually synthesizing some waveforms. A good example of different views of a signal operation is Amplitude Modulation (AM); we will also try to formulate some other general principles using the example of AM. Amplitude modulation Amplitude modulation means the multiplication of a signal with an oscillator. This provides a method of generating sidebands, which is partial in a very easy, intuitive, and CPU-efficient way. 
Amplitude modulation seems like a word that has a very broad meaning and can be used as soon as we change a signal's amplitude by another signal. While this might be true, in the context of audio synthesis, it very specifically means the multiplication of two (most often sine) oscillators. Moreover, there is a distinction between AM and Ring Modulation. But before we get to this distinction, let's look at the following simple multiplication of two sine waves, and we are first going to look at the result in an oscilloscope as a wave: So in the preceding screenshot, we can see the two sine waves and their product. If we imagine every pair of samples being multiplied, the operation seems pretty intuitive as the result is what we would expect. But what does this resulting wave really mean besides looking like a product of two sine waves? What does it sound like? The wave seems to have stayed in there certainly, right? Well, viewing the product as a wave and looking at the whole process in the time domain rather than the frequency domain is helpful but slightly misleading. So let's jump over to the following frequency domain and look what's happening with the spectrum: So we can observe here that if we multiply a sine wave a with a sine wave b, a having a frequency of 1000 Hz and b a frequency of 100 Hz, we end up with two sine waves, one at 900 Hz and another at 1100 Hz. The original sine waves have disappeared. In general, we can say that the result of multiplying a and b is equal to adding and subtracting the frequencies. This is shown in the Equivalence to Sum and difference subpatcher (in the following screenshot, the two inlets to the spectrum display overlap completely, which might be hard to see): So in the preceding screenshot, you see a basic AM patcher that produces sidebands that we can predict quite easily. Multiplication is commutative; you will say, 1000 + 100 = 1100, 1000 - 100 = 900; that's alright, but what about 100 - 1000 and 100 + 1000? We get -900 and 1100 once again? It still works out, and the fact that it does has to do with negative frequencies, or the symmetry of a real frequency spectrum around 0. So you can see that the two ways of looking at our signal and thinking about AM lend different opportunities and pitfalls. Here is another way to think about AM: it's the convolution of the two spectra. We didn't talk about convolution yet; we will at a later point. But keep it in mind or do a little research on your own; this aspect of AM is yet another interesting one. Ring modulation versus amplitude modulation The difference between ring modulation and what we call AM in this context is that the former one uses a bipolar modulator and the latter one uses a unipolar one. So actually, this is just about scaling and offsetting one of the factors. The difference in the outcome is yet a big one; if we keep one oscillator unipolar, the other one will be present in the outcome. If we do so, it starts making sense to call one oscillator on the carrier and the other (unipolar) on the modulator. Also, it therefore introduces a modulation depth that controls the amplitude of the sidebands. In the following screenshot, you can see the resulting spectrum; we have the original signal, so the carrier plus two sidebands, which are the original signals, are shifted up and down: Therefore, you can see that AM has a possibility to roughen up our spectrum, which means we can use it to let through an original spectrum and add sidebands. 
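The difference between the two variants can also be written out directly. With a unipolar modulator of depth m, standard trigonometry (not a formula taken from the book's figures) gives:

\[
\sin(2\pi f_c t)\,\bigl[1 + m\sin(2\pi f_m t)\bigr]
= \sin(2\pi f_c t) + \tfrac{m}{2}\bigl[\cos\bigl(2\pi(f_c - f_m)t\bigr) - \cos\bigl(2\pi(f_c + f_m)t\bigr)\bigr]
\]

The carrier survives at full amplitude, while the two sidebands at f_c ± f_m are scaled by the modulation depth m, which is exactly the spectrum described above.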
Tremolo

Tremolo (from the Latin word tremare, to shake or tremble) is a musical term which means to change a sound's amplitude in regular short intervals. Many people confuse it with vibrato, which means modulating pitch at regular intervals. AM is tremolo and FM is vibrato, and as a simple reminder, think that the V of vibrato is closer to the F of FM than to the A of AM.

So multiplying the two oscillators results in a different spectrum. But of course, we can also use multiplication to scale a signal and to change its amplitude. If we wanted to have a sine wave that has a tremolo, that is, an oscillating variation in amplitude with, say, a frequency of 1 Hertz, we would again multiply two sine waves, one with 1000 Hz, for example, and another with a frequency of 0.5 Hz. Why 0.5 Hz? Think about a sine wave; it has two peaks per cycle, a positive one and a negative one.

We can visualize all that very well if we think about it in the time domain, looking at the result in an oscilloscope. But what about our view of the frequency domain? Well, let's go through it; when we multiply a sine with 1000 Hz and one with 0.5 Hz, we actually get two sine waves, one with 999.5 Hz and one with 1000.5 Hz. Frequencies that close create beatings, since once in a while, their positive and negative peaks overlap, canceling out each other. In general, the frequency of the beating is defined by the difference in frequency, which is 1 Hz in this case. So we see, if we look at it this way, we come to the same result again of course, but this time, we actually think of two frequencies instead of one being attenuated.

Lastly, we could have looked up trigonometric identities to anticipate what happens if we multiply two sine waves. We find the following:

sin(φ) · sin(θ) = ½ [cos(φ − θ) − cos(φ + θ)]

Here, φ and θ are the two angular frequencies multiplied by the time in seconds, for example:

φ = 2π · 1000 · t

This is the equation for the 1000 Hz sine wave.

Feedback

Feedback always brings the complexity of a system to the next level. It can be used to stabilize a system, but it can also make a given system unstable easily. In a strict sense, in the context of DSP, stability means that for a finite input to a system, we get a finite output. Obviously, feedback can give us infinite output for a finite input. We can use attenuated feedback, for example, not only to make our AM patches recursive, adding more and more sidebands, but also to achieve some surprising results, as we will see in a minute. Before we look at this application, let's quickly talk about feedback in general.

In the digital domain, feedback always demands some amount of delay. This is because the evaluation of the chain of operations would otherwise resemble an infinite amount of operations on one sample. This is true for both the Max message domain (we get a stack overflow error if we use feedback without delaying or breaking the chain of events) and the MSP domain; audio will just stop working if we try it. So the minimum network for a feedback chain as a block diagram looks something like this:
For example, if you think of a block diagram that consists only of multiplication with a constant, it would make a lot of sense to think of its output signal as a scaled version of the input signal. We wouldn't think of the network's processing or its output sample by sample. However, as soon as feedback is involved, without calculation or testing, this is the way we should think about the network. Before we look at the Max version of things, let's look at the difference equation of the network to get a better feeling of the notation. Try to find it yourself before looking at it too closely! In Max, or rather in MSP, we can introduce feedback as soon as we use a [tapin~] [tapout~] pair that introduces a delay. The minimum delay possible is the signal vector size. Another way is to simply use a [send~] and [receive~] pair in our loop. The [send~] and [receive~] pair will automatically introduce this minimum amount of delay if needed, so the delay will be introduced only if there is a feedback loop. If we need shorter delays and feedback, we have to go into the wonderful world of gen~. Here, our shortest delay time is one sample, and can be introduced via the [history] object. In the Fbdiagram.maxpat patcher, you can find a Max version, an MSP version, and a [gen~] version of our diagram. For the time being, let's just pretend that the gen domain is just another subpatcher/abstraction system that allows shorter delays with feedback and has a more limited set of objects that more or less work the same as the MSP ones. In the following screenshot, you can see the difference between the output of the MSP and the [gen~] domain. Obviously, the length of the delay time has quite an impact on the output. Also, don't forget that the MSP version's output will vary greatly depending on our vector size settings. Let's return to AM now. Feedback can, for example, be used to duplicate and shift our spectrum again and again. In the following screenshot, you can see a 1000 Hz sine wave that has been processed by a recursive AM to be duplicated and shifted up and down with a 100 Hz spacing: In the maybe surprising result, we can achieve with this technique is this: if we the modulating oscillator and the carrier have the same frequency, we end up with something that almost sounds like a sawtooth wave. Frequency modulation Frequency modulation or FM is a technique that allows us to create a lot of frequency components out of just two oscillators, which is why it was used a lot back in the days when oscillators were a rare, expensive good, or CPU performance was low. Still, especially when dealing with real-time synthesis, efficiency is a crucial factor, and the huge variety of sounds that can be achieved with just two oscillators and very few parameters can be very useful for live performance and so on. The idea of FM is of course to modulate an oscillator's frequency. The basic, admittedly useless form is depicted in the following screenshot: While trying to visualize what happens with the output in the time domain, we can imagine it as shown in the following screenshot. In the preceding screenshot, you can see the signal that is controlling the frequency. It is a sine wave with a frequency of 50 Hz, scaled and offset to range from -1000 to 5000, so the center or carrier frequency is 2000 Hz, which is modulated to an amount of 3000 Hz. 
You can see the output of the modulated oscillator in the following screenshot: If we extend the upper patch slightly, we end up with this: Although you can't see it in the screenshot, the sidebands are appearing with a 100 Hz spacing here, that is, with a spacing equal to the modulator's frequency. Pretty similar to AM right? But depending on the modulation amount, we get more and more sidebands. Controlling FM If the ratio between F(c) and F(m) is an integer, we end up with a harmonic spectrum, therefore, it may be more useful to rather control F(m) indirectly via a ratio parameter as it's done inside the SimpleRatioAndIndex subpatcher. Also, an Index parameter is typically introduced to make an FM patch even more controllable. The modulation index is defined as follows: Here, I is the index, Am is the amplitude of the modulation, what we called amount before, and fm is the modulator's frequency. So finally, after adding these two controls, we might arrive here: FM offers a wide range of possibilities, for example, the fact that we have a simple control for how harmonic/inharmonic our spectrum is can be useful to synthesize the mostly noisy attack phase of many instruments if we drive the ratio and index with an envelope as it's done in the SimpleEnvelopeDriven subpatcher. However, it's also very easy to synthesize very artificial, strange sounds. This basically has the following two reasons: Firstly, the partials appearing have amplitudes governed by Bessel functions that may seem quite unpredictable; the partials sometimes seem to have random amplitudes. Secondly, negative frequencies and fold back. If we generate partials with frequencies below 0 Hz, it is equivalent to creating the same positive frequency. For frequencies greater than the sample rate/2 (sample rate/2 is what's called the Nyquist rate), the frequencies reflect back into the spectrum that can be described by our sampling rate (this is an effect also called aliasing). So at a sampling rate of 44,100 Hz, a partial with a frequency of -100 Hz will appear at 100 Hz, and a partial with a frequency of 43100 kHz will appear at 1000 Hz, as shown in the following screenshot: So, for frequencies between the Nyquist frequency and the sampling frequency, what we hear is described by this: Here, fs is the sampling rate, f0 is the frequency we hear, and fi is the frequency we are trying to synthesize. Since FM leads to many partials, this effect can easily come up, and can both be used in an artistically interesting manner or sometimes appear as an unwanted error. In theory, an FM signal's partials extend to even infinity, but the amplitudes become negligibly small. If we want to reduce this behavior, the [poly~] object can be used to oversample the process, generating a bit more headroom for high frequencies. The phenomenon of aliasing can be understood by thinking of a real (in contrast to imaginary) digital signal as having a symmetrical and periodical spectrum; let's not go into too much detail here and look at it in the time domain: In the previous screenshot, we again tried to synthesize a sine wave with 43100 Hz (the dotted line) at a sampling rate of 44100 Hz. What we actually get is the straight black line, a sine with 1000 Hz. Each big black dot represents an actual sample, and there is only one single band-limited signal connecting them: the 1000 Hz wave that is only partly visible here (about half its wavelength). Feedback It is very common to use feedback with FM. 
Feedback
It is very common to use feedback with FM. We can even frequency modulate one oscillator with itself, making the algorithm even cheaper since we have only one table lookup. The idea of feedback FM quickly leads us to the idea of making networks of oscillators that can be modulated by each other, including feedback paths, but let's keep it simple for now. One might think that modulating one oscillator with itself should produce chaos, and that, FM being a technique that is not the easiest to control, playing around with single-operator feedback FM wouldn't be worthwhile. But the opposite is the case. Single-operator feedback FM yields very predictable partials, as shown in the following screenshot and in the Single OP FBFM subpatcher: Again, we are using a gen~ patch, since we want to create a feedback loop and are heading for a short delay in the loop. Note that we are using the [param] object to pass a message into the gen~ object. What should catch your attention is that although the carrier frequency has been adjusted to 1000 Hz, the fundamental frequency in the spectrum is around 600 Hz. What can help us here is switching to phase modulation.
Phase modulation
If you look at the gen~ patch in the previous screenshot, you see that we are driving our sine oscillator with a phasor. The cycle object's phase inlet assumes an input that ranges from 0 to 1 instead of from 0 to 2π, as one might think. To drive a sine wave through one full cycle in math, we can use a variable ranging from 0 to 2π, so in the following formula, you can imagine t being provided by a phasor, which is the running phase (the 2π multiplication isn't necessary in Max since, if we are using [cycle~], we are actually reading out a wavetable instead of really computing the sine or cosine of the input): y(t) = sin(2π f0 t + φ) This is the most common form of denoting a running sinusoid with frequency f0 and phase φ. Try to come up with a formula that describes frequency modulation! Simplifying by setting the phase to zero, we can denote FM as follows: y(t) = sin(2π ∫ (f0 + A sin(2π fm t)) dt) This can be shown to be nearly identical to the following formula: y(t) = sin(2π f0 t + A sin(2π fm t)) Here, f0 is the frequency of the carrier, fm is the frequency of the modulator, and A is the modulation amount. Welcome to phase modulation. If you compare the two, the second formula actually just inserts a scaled sine wave where the phase φ used to be. So phase modulation is nearly identical to frequency modulation. Phase modulation has some advantages though, such as providing us with an easy method of synchronizing multiple oscillators. But let's go back to the Max side of things and look at a feedback phase modulation patch right away (ignoring simple phase modulation, since it really is so similar to FM): This gen~ patcher resides inside the One OP FBPM subpatcher and implements phase modulation using one oscillator and feedback. Interestingly, the spectrum is very similar to that of a sawtooth wave, with the feedback amount having an effect similar to a low-pass filter, controlling the number of partials.
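The core of such a feedback phase modulation loop is small enough to sketch numerically. This is only an illustration of the idea in plain JavaScript, not a substitute for the gen~ patcher, and the name beta for the feedback amount is our own choice:
// fbpm.js: one-operator feedback phase modulation, sample by sample.
// The previous output sample, scaled by beta, is added to the running phase.
const sampleRate = 44100;
const freq = 1000;      // carrier frequency in Hz
const beta = 1.2;       // feedback amount; more feedback means more partials

let phase = 0;          // running phase, 0..1, like a phasor~
let prev = 0;           // previous output sample: the one-sample delay in the loop
const out = new Float32Array(sampleRate);
for (let n = 0; n < sampleRate; n++) {
  out[n] = Math.sin(2 * Math.PI * phase + beta * prev);
  prev = out[n];
  phase = (phase + freq / sampleRate) % 1;
}
Real implementations often average the last two output samples before feeding them back, which keeps the loop better behaved at high feedback amounts.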
If you take a look at the subpatcher, you'll find the following three sound sources:
Our feedback FM gen~ patcher
A [saw~] object for comparison
A poly~ object
We have already mentioned the problem of aliasing, and the [poly~] object has already been proposed to treat the problem. However, it also allows us to define the quality of parts of patches in general, so let's talk about the object a bit since we will make great use of it. Before moving on, note that you can double-click on it to see what is loaded inside; you will see that the subpatcher we just discussed contains a [poly~] object that in turn contains yet another version of our gen~ patcher.
Summary
In this article, we've finally gotten around to talking about audio. We've introduced some very common techniques and thought about refining them and getting things done properly and efficiently (think about poly~). By now, you should feel quite comfortable building synths that mix techniques such as FM, subtractive synthesis, and feature modulation, as well as using matrices for routing both audio and modulation signals where you need them.
Further resources on this subject:
Moodle for Online Communities [Article]
Techniques for Creating a Multimedia Database [Article]
Moodle 2.0 Multimedia: Working with 2D and 3D Maps [Article]

Writing Modules

Packt
14 Aug 2017
15 min read
In this article by David Mark Clements, the author of the book Node.js Cookbook, we will be covering the following points to introduce you to writing and managing Node.js modules:
Node's module system
Initializing a module
Writing a module
Tooling around modules
Publishing modules
Setting up a private module repository
Best practices
(For more resources related to this topic, see here.)
In idiomatic Node, the module is the fundamental unit of logic. Any typical application or system consists of generic code and application code. As a best practice, generic shareable code should be held in discrete modules, which can be composed together at the application level with minimal amounts of domain-specific logic. In this article, we'll learn how Node's module system works, how to create modules for various scenarios, and how we can reuse and share our code.
Scaffolding a module
Let's begin our exploration by setting up a typical file and directory structure for a Node module. At the same time, we'll be learning how to automatically generate a package.json file (we refer to this throughout as initializing a folder as a package) and how to configure npm (Node's package managing tool) with some defaults, which can then be used as part of the package generation process. In this recipe, we'll create the initial scaffolding for a full Node module.
Getting ready
Installing Node
If we don't already have Node installed, we can go to https://nodejs.org to pick up the latest version for our operating system. If Node is on our system, then so is the npm executable; npm is the default package manager for Node. It's useful for creating, managing, installing, and publishing modules. Before we run any commands, let's tweak the npm configuration a little:
npm config set init.author.name "<name here>"
This will speed up module creation and ensure that each package we create has a consistent author name, thus avoiding typos and variations of our name.
npm stands for...
Contrary to popular belief, npm is not an acronym for Node Package Manager; in fact, it stands for npm is Not An Acronym, which is why it's not called NINAA.
How to do it…
Let's say we want to create a module that converts HSL (hue, saturation, luminosity) values into a hex-based RGB representation, such as would be used in CSS (for example, #fb4a45). The name hsl-to-hex seems good, so let's make a new folder for our module and cd into it:
mkdir hsl-to-hex
cd hsl-to-hex
Every Node module must have a package.json file, which holds metadata about the module. Instead of manually creating a package.json file, we can simply execute the following command in our newly created module folder:
npm init
This will ask a series of questions. We can hit enter for every question without supplying an answer. Note how the default module name corresponds to the current working directory, and the default author is the init.author.name value we set earlier. An npm init should look like this: Upon completion, we should have a package.json file that looks something like the following:
{
  "name": "hsl-to-hex",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "David Mark Clements",
  "license": "MIT"
}
How it works…
When Node is installed on our system, npm comes bundled with it. The npm executable is written in JavaScript and runs on Node. The npm config command can be used to permanently alter settings.
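As a quick sanity check, not a step from the recipe itself, we can read the generated metadata back from Node; the filename read-pkg.js is just an arbitrary name for this sketch:
// read-pkg.js: print the name, version, and author that npm init just wrote
const fs = require('fs');

const pkg = JSON.parse(fs.readFileSync('./package.json', 'utf8'));
console.log(`${pkg.name}@${pkg.version} by ${pkg.author}`);
Running node read-pkg.js from the hsl-to-hex folder should print hsl-to-hex@1.0.0 along with the configured author name.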
In our case, we changed the init.author.name setting so that npm init would reference it for the default during a module's initialization. We can list all the current configuration settings with npm config ls.
Config Docs
Refer to https://docs.npmjs.com/misc/config for all possible npm configuration settings.
When we run npm init, the answers to prompts are stored in an object, serialized as JSON, and then saved to a newly created package.json file in the current directory.
There's more…
Let's find out some more ways to automatically manage the content of the package.json file via the npm command.
Reinitializing
Sometimes additional metadata can be available after we've created a module. A typical scenario can arise when we initialize our module as a git repository and add a remote endpoint after creating the module.
Git and GitHub
If we've not used the git tool and GitHub before, we can refer to http://help.github.com to get started. If we don't have a GitHub account, we can head to http://github.com to get a free account.
To demonstrate, let's create a GitHub repository for our module. Head to GitHub and click on the plus symbol in the top-right, then select New repository. Specify the name as hsl-to-hex and click on Create Repository. Back in the Terminal, inside our module folder, we can now run this:
echo -e "node_modules\n*.log" > .gitignore
git init
git add .
git commit -m '1st'
git remote add origin http://github.com/<username>/hsl-to-hex
git push -u origin master
Now here comes the magic part; let's initialize again (simply press enter for every question):
npm init
This time the Git remote we just added was detected and became the default answer for the git repository question. Accepting this default answer meant that the repository, bugs, and homepage fields were added to package.json. A repository field in package.json is an important addition when it comes to publishing open source modules since it will be rendered as a link on the module's information page at http://npmjs.com. A repository link enables potential users to peruse the code prior to installation. Modules that can't be viewed before use are far less likely to be considered viable.
Versioning
The npm tool supplies other functionalities to help with the module creation and management workflow. For instance, the npm version command allows us to manage our module's version number according to SemVer semantics.
SemVer
SemVer is a versioning standard. A version consists of three numbers separated by a dot, for example, 2.4.16. The position of a number denotes specific information about the version in comparison to the other versions. The three positions are known as MAJOR.MINOR.PATCH. The PATCH number is increased when changes have been made that don't break the existing functionality or add any new functionality. For instance, a bug fix will be considered a patch. The MINOR number should be increased when new backward-compatible functionality is added, for instance, the addition of a method. The MAJOR number increases when backwards-incompatible changes are made. Refer to http://semver.org/ for more information.
If we were to fix a bug, we would want to increase the PATCH number. We can either manually edit the version field in package.json, setting it to 1.0.1, or we can execute the following:
npm version patch
This will increase the version field in one command.
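Under the hood, the bump itself is simple arithmetic on the three positions; purely as an illustration (npm version does this for real, plus more, as we'll see next), a toy version bumper might look like this:
// bump.js: a toy illustration of SemVer bumping; not a replacement for npm version
function bump (version, level) {
  const [major, minor, patch] = version.split('.').map(Number);
  if (level === 'major') return `${major + 1}.0.0`;
  if (level === 'minor') return `${major}.${minor + 1}.0`;
  return `${major}.${minor}.${patch + 1}`; // default: patch
}

console.log(bump('1.0.0', 'patch')); // 1.0.1
console.log(bump('1.0.1', 'minor')); // 1.1.0
console.log(bump('1.1.0', 'major')); // 2.0.0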
Additionally, if our module is a Git repository, it will add a commit based on the version (in our case, v1.0.1), which we can then immediately push. When we ran the command, npm output the new version number. However, we can double-check the version number of our module without opening package.json:
npm version
This will output something similar to the following:
{ 'hsl-to-hex': '1.0.1',
  npm: '2.14.17',
  ares: '1.10.1-DEV',
  http_parser: '2.6.2',
  icu: '56.1',
  modules: '47',
  node: '5.7.0',
  openssl: '1.0.2f',
  uv: '1.8.0',
  v8: '4.6.85.31',
  zlib: '1.2.8' }
The first field is our module along with its version number. If we added new backward-compatible functionality, we can run this:
npm version minor
Now our version is 1.1.0. Finally, we can run the following for a major version bump:
npm version major
This sets our module's version to 2.0.0. Since we're just experimenting and didn't make any changes, we should set our version back to 1.0.0. We can do this via the npm command as well:
npm version 1.0.0
See also
Refer to the following recipes: Writing module code, Publishing a module
Installing dependencies
In most cases, it's wisest to compose a module out of other modules. In this recipe, we will install a dependency.
Getting ready
For this recipe, all we need is Command Prompt open in the hsl-to-hex folder from the Scaffolding a module recipe.
How to do it…
Our hsl-to-hex module can be implemented in two steps:
Convert the hue degrees, saturation percentage, and luminosity percentage to corresponding red, green, and blue numbers between 0 and 255.
Convert the RGB values to HEX.
Before we tear into writing an HSL-to-RGB algorithm, we should check whether this problem has already been solved. The easiest way to check is to head to http://npmjs.com and perform a search: Oh, look! Somebody already solved this. After some research, we decide that the hsl-to-rgb-for-reals module is the best fit. Ensuring that we are in the hsl-to-hex folder, we can now install our dependency with the following:
npm install --save hsl-to-rgb-for-reals
Now let's take a look at the bottom of package.json:
tail package.json # linux/osx
type package.json # windows
Tail output should give us this:
"bugs": {
  "url": "https://github.com/davidmarkclements/hsl-to-hex/issues"
},
"homepage": "https://github.com/davidmarkclements/hsl-to-hex#readme",
"description": "",
"dependencies": {
  "hsl-to-rgb-for-reals": "^1.1.0"
}
}
We can see that the dependency we installed has been added to a dependencies object in the package.json file.
How it works…
The top two results of the npm search are hsl-to-rgb and hsl-to-rgb-for-reals. The first result is unusable because the author of the package forgot to export it and is unresponsive about fixing it. The hsl-to-rgb-for-reals module is a fixed version of hsl-to-rgb. This situation serves to illustrate the nature of the npm ecosystem. On the one hand, there are over 200,000 modules and counting; on the other, many of these modules are of low value. Nevertheless, the system is also self-healing in that if a module is broken and not fixed by the original maintainer, a second developer often assumes responsibility and publishes a fixed version of the module. When we run npm install in a folder with a package.json file, a node_modules folder is created (if it doesn't already exist). Then, the package is downloaded from the npm registry and saved into a subdirectory of node_modules (for example, node_modules/hsl-to-rgb-for-reals).
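Although writing the module code is the subject of a later recipe, a rough sketch of how index.js might eventually put this dependency to work could look like the following. The exact signature of hsl-to-rgb-for-reals (a function taking hue, saturation, and luminosity and returning an [r, g, b] array) is assumed here:
// index.js: a rough sketch only; the Writing module code recipe develops this properly
const toRgb = require('hsl-to-rgb-for-reals'); // assumed to return [r, g, b]

function toHexComponent (n) {
  const hex = n.toString(16);
  return hex.length === 1 ? '0' + hex : hex; // pad single digits, e.g. 0xf -> "0f"
}

module.exports = function hslToHex (hue, saturation, luminosity) {
  const [r, g, b] = toRgb(hue, saturation, luminosity);
  return '#' + toHexComponent(r) + toHexComponent(g) + toHexComponent(b);
};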
npm 2 vs npm 3
Our installed module doesn't have any dependencies of its own. However, if it did, the sub-dependencies would be installed differently depending on whether we're using version 2 or version 3 of npm. Essentially, npm 2 installs dependencies in a tree structure, for instance, node_modules/dep/node_modules/sub-dep-of-dep/node_modules/sub-dep-of-sub-dep. Conversely, npm 3 follows a maximally flat strategy where sub-dependencies are installed in the top-level node_modules folder when possible, for example, node_modules/dep, node_modules/sub-dep-of-dep, and node_modules/sub-dep-of-sub-dep. This results in fewer downloads and less disk space usage; npm 3 resorts to a tree structure in cases where there are two versions of a sub-dependency, which is why it's called a maximally flat strategy. Typically, if we've installed Node 4 or above, we'll be using npm version 3.
There's more…
Let's explore development dependencies, creating module management scripts, and installing global modules without requiring root access.
Installing development dependencies
We usually need some tooling to assist with the development and maintenance of a module or application. The ecosystem is full of programming support modules, from linting to testing to browser bundling to transpilation. In general, we don't want consumers of our module to download dependencies they don't need. Similarly, if we're deploying a system built in Node, we don't want to burden the continuous integration and deployment processes with superfluous, pointless work. So, we separate our dependencies into production and development categories. When we use npm install --save <dep>, we're installing a production module. To install a development dependency, we use --save-dev. Let's go ahead and install a linter.
JavaScript Standard Style
standard is a JavaScript linter that enforces an unconfigurable ruleset. The premise of this approach is that we should stop using up precious time bikeshedding about syntax. All the code in this article uses the standard linter, so we'll install that:
npm install --save-dev standard
semistandard
If the absence of semicolons is abhorrent, we can choose to install semistandard instead of standard at this point. The lint rules match those of standard, with the obvious exception of requiring semicolons. Further, any code written using standard can be reformatted to semistandard using the semistandard-format command-line tool. Simply run npm -g i semistandard-format to get started with it.
Now, let's take a look at the package.json file:
{
  "name": "hsl-to-hex",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "David Mark Clements",
  "license": "MIT",
  "repository": {
    "type": "git",
    "url": "git+ssh://git@github.com/davidmarkclements/hsl-to-hex.git"
  },
  "bugs": {
    "url": "https://github.com/davidmarkclements/hsl-to-hex/issues"
  },
  "homepage": "https://github.com/davidmarkclements/hsl-to-hex#readme",
  "description": "",
  "dependencies": {
    "hsl-to-rgb-for-reals": "^1.1.0"
  },
  "devDependencies": {
    "standard": "^6.0.8"
  }
}
We now have a devDependencies field alongside the dependencies field. When our module is installed as a sub-dependency of another package, only the hsl-to-rgb-for-reals module will be installed, while the standard module will be ignored since it's irrelevant to our module's actual implementation.
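As a small taste of what standard expects, and to give the test script in the next section something to run eventually, a minimal smoke test might look like this; it assumes the index.js sketched earlier and that saturation and luminosity are given as percentages, and it is written without semicolons, as standard prefers:
// test.js: a minimal, standard-style smoke test for the sketched index.js
const assert = require('assert')
const hslToHex = require('./index.js')

assert.strictEqual(typeof hslToHex, 'function')
assert.ok(/^#[0-9a-f]{6}$/.test(hslToHex(0, 100, 50))) // pure red should come out as a hex colour
console.log('ok')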
If this package.json file represented a production system, we could run the install step with the --production flag, as shown:
npm install --production
Alternatively, this can be set in the production environment with the following command:
npm config set production true
Currently, we can run our linter using the executable installed in the node_modules/.bin folder. Consider this example:
./node_modules/.bin/standard
This is ugly and not at all ideal. Refer to Using npm run scripts for a more elegant approach.
Using npm run scripts
Our package.json file currently has a scripts property that looks like this:
"scripts": {
  "test": "echo \"Error: no test specified\" && exit 1"
},
Let's edit the package.json file and add another field, called lint, as follows:
"scripts": {
  "test": "echo \"Error: no test specified\" && exit 1",
  "lint": "standard"
},
Now, as long as we have standard installed as a development dependency of our module (refer to Installing development dependencies), we can run the following command to run a lint check on our code:
npm run-script lint
This can be shortened to the following:
npm run lint
When we run an npm script, the current directory's node_modules/.bin folder is appended to the execution context's PATH environment variable. This means that even if we don't have the standard executable in our usual system PATH, we can reference it in an npm script as if it was in our PATH. Some consider lint checks to be a precursor to tests. Let's alter the scripts.test field, as illustrated:
"scripts": {
  "test": "npm run lint",
  "lint": "standard"
},
Chaining commands
Later, we can append other commands to the test script using the double ampersand (&&) to run a chain of checks. For instance, "test": "npm run lint && tap test".
Now, let's run the test script:
npm run test
Since the test script is special, we can simply run this:
npm test
Eliminating the need for sudo
The npm executable can install both local and global modules. Global modules are mostly installed so as to allow command-line utilities to be used system-wide. On OS X and Linux, the default npm setup requires sudo access to install a module. For example, the following will fail on a typical OS X or Linux system with the default npm setup:
npm -g install cute-stack # <-- uh oh, needs sudo
This is unsuitable for several reasons: forgetting to use sudo becomes frustrating, we're trusting npm with root access, and accidentally using sudo for a local install causes permission problems (particularly with the npm local cache). The prefix setting stores the location for globally installed modules; we can view this with the following:
npm config get prefix
Usually, the output will be /usr/local. To avoid the use of sudo, all we have to do is set ownership permissions on any subfolders in /usr/local used by npm:
sudo chown -R $(whoami) $(npm config get prefix)/{lib/node_modules,bin,share}
Now we can install global modules without root access:
npm -g install cute-stack # <-- now works without sudo
If changing ownership of system folders isn't feasible, we can use a second approach, which involves changing the prefix setting to a folder in our home path:
mkdir ~/npm-global
npm config set prefix ~/npm-global
We'll also need to set our PATH:
export PATH=$PATH:~/npm-global/bin
source ~/.profile
The source command essentially refreshes the Terminal environment to reflect the changes we've made.
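Pulling the script changes from this section together, the scripts field of our package.json might eventually end up looking something like the following; the tap test command assumes we later add the tap test runner as a development dependency:
"scripts": {
  "lint": "standard",
  "test": "npm run lint && tap test"
},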
See also Scaffolding a module Writing module code Publishing a module Resources for Article: Further resources on this subject: Understanding and Developing Node Modules [article] Working with Pluginlib, Nodelets, and Gazebo Plugins [article] Basic Website using Node.js and MySQL database [article]

WWW turns 30: Tim Berners-Lee, its inventor, shares his plan to save the Web from its current dysfunctions

Bhagyashree R
13 Mar 2019
6 min read
The World Wide Web turned 30 years old yesterday. As part of the celebration, its creator, Tim Berners-Lee, published an open letter on Monday, sharing his vision for the future of the web. In this year's letter, he also expressed his concerns about the direction in which the web is heading and how we can make it the one he envisioned. To celebrate #Web30, Tim Berners-Lee is on a 30-hour trip, and his first stop was the birthplace of the WWW, the European Organization for Nuclear Research, CERN. https://twitter.com/timberners_lee/status/1105400740112203777 Back in 1989, Tim Berners-Lee, then a research fellow at CERN, wrote a proposal to his boss titled Information Management: A Proposal. This proposal was for building an information system that would allow researchers to share general information about accelerators and experiments. Initially, he named the project "The Mesh", which combined hypertext with the internet's TCP and domain name system. The project did not go that well at first, but Berners-Lee's boss, Mike Sendall, did remark that the idea was "vague but exciting". Later, in 1990, he actually started coding for the project, and this time he named it what we know today as the World Wide Web. Fast forward to now: the simple, innocent system that he built has grown enormous, connecting millions and millions of people across the globe. If you are curious to know how the WWW looked back then, check out its revived version by a CERN team: https://twitter.com/CERN/status/1105457772626358273
The three dysfunctions the Web is now facing
The World Wide Web has come a long way. It has opened up various opportunities, given voice to marginalized groups, and made our daily lives much more convenient. At the same time, it has also given opportunities to scammers, provided a platform for hate speech, and made it extremely easy to commit crimes from behind a computer screen. Berners-Lee listed three sources of problems that are affecting today's web and also suggested a few ways we can minimize or prevent them:
"Deliberate, malicious intent, such as state-sponsored hacking and attacks, criminal behavior, and online harassment." Though it is not really possible to completely eliminate this dysfunction, policymakers can come up with laws, and developers can take responsibility for writing code that will help minimize this behavior.
"System design that creates perverse incentives where user value is sacrificed, such as ad-based revenue models that commercially reward clickbait and the viral spread of misinformation." These types of systems introduce the wrong kind of rewards, encouraging others to sacrifice users' interests. To prevent this problem, developers need to rethink the incentives and redesign the systems accordingly so that they do not promote these wrong behaviors.
"Unintended negative consequences of benevolent design, such as the outraged and polarised tone and quality of online discourse." These are systems that are created thoughtfully and with good intent but still result in negative outcomes. The problem is that it is really difficult to foresee all the outcomes of the system you are building. Berners-Lee, in an interview with The Guardian, said, "Given there are more web pages than there are neurons in your brain, it's a complicated thing. You build Reddit, and people on it behave in a particular way. For a while, they all behave in a very positive, constructive way.
And then you find a subreddit in which they behave in a nasty way." This problem could be minimized by researching and understanding existing systems. Based on this research, we can then model possible new systems or enhance those we already have.
Contract for the Web
Berners-Lee further explained that we can't really just put the blame on a government or a social network for all the loopholes and dysfunctions that are affecting the Web. He said, "You can't generalise. You can't say, you know, social networks tend to be bad, tend to be nasty." We need to find the root causes, and to do exactly that, we all need to come together as a global web community. "As the web reshapes, we are responsible for making sure that it is seen as a human right and is built for the public good", he wrote in the open letter. To address these problems, Berners-Lee has a radical solution. Back in November last year at the Web Summit, he, together with The Web Foundation, introduced the Contract for the Web. The contract aims to bring together governments, companies, and citizens who believe that there is a need for setting clear norms, laws, and standards that underpin the web. "Governments, companies, and citizens are all contributing, and we aim to have a result later this year," he shared. In theory, the contract defines people's online rights and lists the key principles and duties governments, companies, and citizens should follow. In Berners-Lee's mind, it will restore some degree of equilibrium and transparency to the digital realm. The contract is part of a broader project that Berners-Lee believes is essential if we are to 'save' the web from its current problems. First, we need to create an open web for the users who are already connected and give them the power to fix the issues we have with the existing web. Second, we need to bring online the other half of the world, which is not yet connected to the web. Many people agree with the points Berners-Lee discussed in the open letter. Here is what some Twitter users are saying: https://twitter.com/girlygeekdom/status/1105375206829256704 https://twitter.com/solutionpoint/status/1105366111678279681 The Contract for the Web, as Berners-Lee says, is about "going back to the values". His idea of bringing together governments, companies, and citizens to make the Web safer and more accessible to everyone looks pretty solid. Read the full open letter by Tim Berners-Lee on the Web Foundation's website.
Web Summit 2018: day 2 highlights
Tim Berners-Lee is on a mission to save the web he invented
UN on Web Summit 2018: How we can create a safe and beneficial digital future for all