
How-To Tutorials


Installing Your First Application

Packt
23 Dec 2016
9 min read
In this article by Greg Moss, author of the book Working with Odoo 9 - Second Edition, we will learn about the various applications that Odoo has to offer and how you can install Odoo on your own system. Before the release of Odoo 8, most users were focused on ERP and financial-related applications. Now, Odoo has added several important applications that allow companies to use Odoo in a much greater scope than ever before. For example, the website builder can be installed to quickly launch a simple website for your business, a task that would typically have been accomplished with a content management system such as WordPress.

Despite all the increasing options available in Odoo, the overall process is the same. We begin by looking at the overall business requirements and decide on the first set of applications that we wish to implement. After understanding our basic objectives, we will create an Odoo database and configure the required company information. Next, we begin exploring the Odoo interface for creating and viewing information. We will see just how easy Odoo is to use by completing an entire sales order workflow.

In this article, we will cover the following topics:

- Gathering requirements
- Creating a new database in Odoo
- Knowing the basic Odoo interface

Gathering requirements

Setting up an Odoo system is no easy task. Many companies get into trouble believing that they can just install the software and throw in some data. Inevitably, the scope of the project grows, and what was supposed to be a simple system ends up a confusing mess. Fortunately, Odoo's modular design will allow you to take a systematic approach to implementing Odoo for your business.

Implementing Odoo using a modular approach

The bare-bones installation of Odoo simply provides you with a limited messaging system. To manage your Odoo implementation, you must begin by planning the modules you will work with first. Odoo allows you to install just what you need now and then install additional Odoo modules as you better define your requirements. It can be valuable to take this approach when you are considering how you will implement Odoo for your own business. Don't try to install all the modules and get everything running all at once. Instead, break down the implementation into smaller phases.

Introducing Silkworm – our real-world case study

To best understand how to work with Odoo, we will build our exercises around a real-world case study. Silkworm is a mid-sized screen printer that manufactures and sells t-shirts, along with a variety of other printing projects. Using Odoo's modular design, we will begin by implementing the Sales Order module to set up the selling of basic products. In this specific case, we will be selling t-shirts. As we proceed through this book, we will continue to expand the system by installing additional modules.

When implementing Odoo for your organization, you will also want to create a basic requirements document. This information is important for the configuration of the company settings in Odoo, and should be considered essential documentation when implementing an ERP system.

Creating a new database in Odoo

If you have installed Odoo on your own server, you will need to first create a database. As you add additional applications to Odoo, the necessary tables and fields will be added to the database you specify.
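If you are working from a source install, a minimal sketch of bringing the server up so that the database-creation prompt appears might look like the following; the checkout path is a placeholder, and 8069 is Odoo's default HTTP port:

```bash
# Start the Odoo 9 server from a source checkout (path is illustrative)
cd ~/odoo-9.0
./odoo.py --addons-path=addons

# Then browse to http://localhost:8069 to reach the Create Database form
```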
Odoo Online: If you are using Odoo Online, you will not have access to create a new database; instead, you will use Odoo's one-click application installer to manage your Odoo installation. Skip to XXX if you are using an online Odoo installation.

If you have just installed a fresh copy of Odoo, you will be prompted automatically to create a new Odoo database. In the preceding screenshot, you can see the Odoo Create Database form. Odoo provides basic instructions for creating your database. Let us quickly review the fields and how they are used.

Selecting a database name

When selecting a database name, choose a name that describes the system and makes the purpose of the database clear. There are a few rules for creating an Odoo database:

- Your database name cannot contain spaces and must start with a number or letter
- Avoid commas, periods, and quotes
- Underscores and hyphens are allowed as long as they are not the first character in the name

It can also be a good idea to specify in the name whether the database is for development, testing, or production purposes. For the purposes of our real-world case study, we will use the database name SILKWORM-DEV. We have chosen the -DEV suffix as we will consider this a development database that will not be used for production or even for testing.

Take the time to consider what you will name your databases. It can be useful to have standard prefixes or suffixes depending on the purpose of your database. For example, you may use -PROD for your production database or -TEST for the database that you are using for testing.

Loading demonstration data

Notice the box reading Check this box to evaluate Odoo. If you mark this checkbox when you create a database, Odoo will preload your tables with a host of sample data for each module that is installed. This may include fake customers, suppliers, sales orders, invoices, inbox messages, stock moves, and products. The purpose of the demonstration data is to allow you to put modules through their paces without having to key in a ton of test data. For the purposes of our real-world case study in this book, do not load demonstration data.

Specifying our default language

Odoo offers a variety of language translation features, with support for more than twenty languages. All of the examples in this book will use the English (US) language option. Be aware that, depending on the language you select in Odoo, you may need to have that language installed in your base operating system as well.

Choosing a password

Each Odoo database is created with an administrator account named admin, also known as the superuser account. The password you choose during the creation of the database will be the password for the admin account. Choose any password you wish and click on Create Database to create the SILKWORM-DEV database.

Managing databases in Odoo

The database management interface allows you to perform basic database management tasks such as backing up or restoring a database. Often with Odoo, it is possible to manage your databases without ever having to go directly into the PostgreSQL database server. It is also possible to set up multiple databases under the same installation of Odoo. For instance, you may later want to create another database that does load demonstration data, to be used for installing modules purely for testing purposes.
If you have trouble getting to the interface for managing databases, you can access the database management interface directly by going to the /web/database/manager path.

Installing the Sales Management module

After clicking on Create Database, it can take a little time, depending on your system, before you are shown a page that lists the available applications. This screen lets you select from a list of the most common Odoo modules to install. There is very little you can do with an Odoo database that has no modules installed. Now we will install the Sales Management module so we can begin setting up our business selling t-shirts. Click on the Install button to install the Sales Management module.

During the installation of modules and other long operations, you will often see a Loading icon at the top center of your screen. Unlike previous versions of Odoo, which prompted for accounting and other setup information, Odoo now completes the installation unattended.

Knowing the basic Odoo interface

After the installation of the sales order application, Odoo takes you directly to the Sales dashboard. As we have just installed the application, there is very little to see in the dashboard, but we can see the available menu options along the left edge of the interface. The menus along the top allow you to change between the major applications and settings within Odoo, while the menus down the left side outline all your available choices. In the following screenshot, we are in the main Sales menu.

Let's look at one of the main master files that we will be using in many Odoo applications: Customers. Click the Customers menu on the left. Let's take a moment to look at the screen elements that will appear consistently throughout Odoo. In the top left of the main form, you can clearly see that we are in the Customers section.

Using the search box

In the top right corner of our form, we have a search box. The search box allows you to quickly search for records in the Odoo application. If you are in the Customers section, naturally the search will look for customer records. Likewise, if you are looking at the product view, the search box will allow you to search the product records that you have entered into the system.

Picking different views

Odoo also offers a standard interface to switch between a list view, a form view, or other views such as Kanban or graph views. You can see the icon selections under the search box in the right corner of the form. The currently selected view is highlighted in a darker shade. If you move the mouse over an icon, you will get a tooltip that shows you the description of the view. As we have no records in our system currently, let us add a record so we can further explore the Odoo interface.

Summary

In this article, we started by creating an Odoo database. We then installed the Sales Management module and explored the basic Odoo interface.

Resources for Article:

Further resources on this subject:

- Web Server Development [article]
- Getting Started with Odoo Development [article]
- Introduction to Odoo [article]


Implementing RethinkDB Query Language

Packt
23 Dec 2016
5 min read
In this article by Shahid Shaikh, the author of the book Mastering RethinkDB, we will cover how to perform aggregation and geospatial queries (such as finding all the documents with locations within 5 km of a given point).

Performing MapReduce operations

MapReduce is a programming model for performing operations (mainly aggregation) on distributed sets of data across various clusters on different servers. The concept was coined by Google, used initially in the Google File System, and later adopted in the open source Hadoop project. MapReduce works by processing the data on each server and then combining the results to form a result set. It divides the work into two operations, namely map and reduce:

- Map: Performs the transformation of the elements in a group or an individual sequence
- Reduce: Performs the aggregation and combines the results from Map into a meaningful result set

In RethinkDB, MapReduce queries operate in three steps:

- Group operation: Processes the data into groups. This step is optional
- Map operation: Transforms the data, or groups of data, into a sequence
- Reduce operation: Aggregates the sequence data to form the result set

So it is essentially a Group Map Reduce (GMR) operation. RethinkDB spreads the MapReduce query across its clusters in order to improve efficiency. There are specific commands to perform the GMR operation; however, RethinkDB has already integrated them internally into some aggregate functions in order to simplify the process. Let us perform some aggregation operations in RethinkDB.

Grouping the data

To group the data on the basis of a field, we can use the group() ReQL function. Here is a sample query on our users table to group the data on the basis of name:

```javascript
rethinkdb.table("users").group("name").run(connection, function (err, cursor) {
  if (err) {
    throw new Error(err);
  }
  cursor.toArray(function (err, data) {
    console.log(JSON.stringify(data));
  });
});
```

Here is the output for the same:

```json
[
  {
    "group": "John",
    "reduction": [
      {
        "age": 24,
        "id": "664fced5-c7d3-4f75-8086-7d6b6171dedb",
        "name": "John"
      },
      {
        "address": {
          "address1": "suite 300",
          "address2": "Broadway",
          "map": {
            "latitude": "116.4194W",
            "longitude": "38.8026N"
          },
          "state": "Navada",
          "street": "51/A"
        },
        "age": 24,
        "id": "f6f1f0ce-32dd-4bc6-885d-97fe07310845",
        "name": "John"
      }
    ]
  },
  {
    "group": "Mary",
    "reduction": [
      {
        "age": 32,
        "id": "c8e12a8c-a717-4d3a-a057-dc90caa7cfcb",
        "name": "Mary"
      }
    ]
  },
  {
    "group": "Michael",
    "reduction": [
      {
        "age": 28,
        "id": "4228f95d-8ee4-4cbd-a4a7-a503648d2170",
        "name": "Michael"
      }
    ]
  }
]
```

If you observe the query response, the data is grouped by name and each group is associated with its matching documents. Every matching document for a group resides under the reduction array. In order to work on each reduction array, you can use the ungroup() ReQL function, which takes a grouped stream of data and converts it into an array of objects. It is useful for performing operations, such as sorting, on grouped values.
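As an illustration of ungroup(), here is a minimal sketch (not present in the original, assuming the same users table and connection as above) that sorts each group's documents by age, oldest first:

```javascript
rethinkdb.table("users").group("name").ungroup()
  .map(function (group) {
    // Keep the group key, but order each reduction array by age, descending.
    return group.merge({
      reduction: group("reduction").orderBy(rethinkdb.desc("age"))
    });
  })
  .run(connection, function (err, result) {
    if (err) {
      throw new Error(err);
    }
    console.log(JSON.stringify(result));
  });
```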
Counting the data

We can count the number of documents present in the table, or in a sub-document of a document, using the count() method. Here is a simple example:

```javascript
rethinkdb.table("users").count().run(connection, function (err, data) {
  if (err) {
    throw new Error(err);
  }
  console.log(data);
});
```

It should return the number of documents present in the table. You can also use it to count sub-documents by nesting the fields and running the count() function at the end.

Sum

We can perform the addition of a sequence of data. If the value is passed as an expression, it sums it up; otherwise, it searches for the field provided in the query. For example, to find the sum of the ages of all users:

```javascript
rethinkdb.table("users")("age").sum().run(connection, function (err, data) {
  if (err) {
    throw new Error(err);
  }
  console.log(data);
});
```

You can of course use an expression to perform a math operation like this:

```javascript
rethinkdb.expr([1, 3, 4, 8]).sum().run(connection, function (err, data) {
  if (err) {
    throw new Error(err);
  }
  console.log(data);
});
```

This should return 16.

Avg

Computes the average of the given numbers, or searches for the value provided as a field in the query. For example:

```javascript
rethinkdb.expr([1, 3, 4, 8]).avg().run(connection, function (err, data) {
  if (err) {
    throw new Error(err);
  }
  console.log(data);
});
```

Min and Max

These find the maximum and minimum number provided as an expression or as a field. For example, to find the oldest user in the database:

```javascript
rethinkdb.table("users")("age").max().run(connection, function (err, data) {
  if (err) {
    throw new Error(err);
  }
  console.log(data);
});
```

In the same way, to find the youngest user:

```javascript
rethinkdb.table("users")("age").min().run(connection, function (err, data) {
  if (err) {
    throw new Error(err);
  }
  console.log(data);
});
```

Distinct

distinct() finds and removes duplicate elements from the sequence, just like its SQL counterpart. For example, to get the unique user names:

```javascript
rethinkdb.table("users")("name").distinct().run(connection, function (err, data) {
  if (err) {
    throw new Error(err);
  }
  console.log(data);
});
```

It should return an array containing the names:

```
[ 'John', 'Mary', 'Michael' ]
```

Contains

contains() looks for a value in the field and returns a Boolean response: true if it contains the value, false otherwise. For example, to check whether any user's name contains John:

```javascript
rethinkdb.table("users")("name").contains("John").run(connection, function (err, data) {
  if (err) {
    throw new Error(err);
  }
  console.log(data);
});
```

This should return true.

Map and reduce

Aggregate functions such as count() and sum() already make use of map and reduce internally, and of group() too where required. You can of course use them explicitly in order to perform various operations.

Summary

In this article, we covered various parts of the RethinkDB query language, which should help readers get to know the basic concepts of RethinkDB much better.

Resources for Article:

Further resources on this subject:

- Introducing RethinkDB [article]
- Amazon DynamoDB - Modelling relationships, Error handling [article]
- Oracle 12c SQL and PL/SQL New Features [article]


Introducing PowerShell Remoting

Packt
21 Dec 2016
9 min read
In this article by Sherif Talaat, the author of the book PowerShell 5.0 Advanced Administration Handbook, we will see how PowerShell v2 introduced a powerful new technology, PowerShell remoting, which was refined and expanded upon in later versions of PowerShell. PowerShell remoting is based primarily upon standardized protocols and techniques; it is possibly one of the most important aspects of Windows PowerShell. Today, a lot of Microsoft products rely upon it almost entirely for administrative communications across the network.

The most important and exciting characteristic of PowerShell is its remote management capability. PowerShell remoting can control the target remote computer via the network. It uses Windows Remote Management (WinRM), which is based on Microsoft's WS-Management protocol. Using PowerShell remoting, the administrator can execute various management operations on dozens of target computers across the network.

In this article, we will cover the following topics:

- PowerShell remoting system requirements
- Enabling/disabling remoting
- Executing remote commands
- Interactive remoting
- Sessions
- Saving remote sessions to disk
- Understanding session configuration

Windows PowerShell remoting

It's very simple: Windows PowerShell remoting was developed to help you ease your administration tasks. The idea is to use the PowerShell console on your local machine to manage and control remote computers in different locations, whether these locations are on a local network, in a branch, or even in the cloud. Windows PowerShell remoting relies on Windows Remote Management (WinRM) to connect those computers together even if they're not physically connected. Sounds cool and exciting, huh?!

Windows Remote Management (WinRM) is a Microsoft implementation of the WS-Management protocol. WS-Management is a standard protocol based on the Simple Object Access Protocol (SOAP) that allows hardware and operating systems from different vendors to interoperate and communicate with one another in order to access and exchange management information across the entire infrastructure.

In order to be able to execute a PowerShell script on remote computers using PowerShell remoting, the user performing the remote execution must meet one of the following conditions:

- Be a member of the administrators' group on the remote machine, whether as a domain administrator or a local administrator
- Provide admin-privileged credentials at the time of execution, either while establishing the remote session or via a -ComputerName parameter
- Have access to the PowerShell session configuration on the remote computer

Now that we understand what PowerShell remoting is, let's jump to the interesting stuff and start playing with it.

Enable/Disable PowerShell remoting

Before using Windows PowerShell remoting, we need to first ensure that it's already enabled on the computers we want to connect to and manage. You can validate whether PowerShell remoting is enabled on a computer using the Test-WSMan cmdlet:
```powershell
# Verify WinRM service status
Test-WSMan -ComputerName Server02
```

If PowerShell remoting is enabled on the remote computer (which means that the WinRM service is running), you will get an acknowledgement message similar to the one shown in the following screenshot. However, if WinRM is not responding, either because it's not enabled or because the computer is unreachable, you will get an error message similar to the one shown in the following screenshot.

Okay, at this stage, we know which computers have remoting enabled and which need to be configured. In order to enable PowerShell remoting on a computer, we use the Enable-PSRemoting cmdlet. The Enable-PSRemoting cmdlet will prompt you with a message to inform you about the changes to be applied on the target computer and ask for your confirmation, as shown in the following screenshot. You can skip this prompt by using the -Force parameter:

```powershell
# Enable PowerShell remoting
Enable-PSRemoting -Force
```

In client OS versions of Windows, such as Windows 7, 8/8.1, and 10, the network connection type must be set either to domain or to private. If it's set to public, you will get a message as shown in the following screenshot. This is the Enable-PSRemoting cmdlet's default behavior: it stops you from enabling PowerShell remoting on a public network, which might put your computer at risk. You can skip the network profile check using the -SkipNetworkProfileCheck parameter, or simply change the network profile as shown later in this article:

```powershell
# Enable PowerShell remoting on a public network
Enable-PSRemoting -Force -SkipNetworkProfileCheck
```

If, for any reason, you want to temporarily disable a session configuration in order to prevent users from connecting to a local computer using that session configuration, you can use the Disable-PSSessionConfiguration cmdlet along with the -Name parameter to specify which session configuration you want to disable. If we don't specify a configuration name for the -Name parameter, the default session configuration, Microsoft.PowerShell, will be disabled. Later on, if you want to re-enable the session configuration, you can use the Enable-PSSessionConfiguration cmdlet with the -Name parameter to specify which session configuration you need to enable, similar to the Disable-PSSessionConfiguration cmdlet.

Delete a session configuration

When you disable a session configuration, PowerShell just denies access to this session configuration by assigning deny all to the defined security descriptors. It doesn't remove it, which is why you can re-enable it. If you want to permanently remove a session configuration, use the Unregister-PSSessionConfiguration cmdlet.
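Before moving on, here is a quick, minimal sketch of executing remote commands, one of the topics listed at the start of this article; Server02 is a placeholder for a machine in your own environment:

```powershell
# Run a single command on a remote computer over PowerShell remoting
Invoke-Command -ComputerName Server02 -ScriptBlock { Get-Service WinRM }

# Or open an interactive remote session, supplying explicit credentials
Enter-PSSession -ComputerName Server02 -Credential (Get-Credential)
```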
Windows PowerShell Web Access (PSWA)

Windows PowerShell Web Access (PSWA) was introduced for the first time as a new feature in Windows PowerShell 3.0. Yes, it is what you are guessing it is! PowerShell Web Access is a web-based version of the PowerShell console that allows you to run and execute PowerShell cmdlets and scripts from any web browser on any desktop, notebook, smartphone, or tablet that meets the following criteria:

- Allows cookies from the Windows PowerShell Web Access gateway website
- Is capable of opening and reading HTTPS pages
- Opens and runs websites that use JavaScript

PowerShell Web Access allows you to complete your administration tasks smoothly anywhere, anytime, using any device running a web browser, regardless of whether it is Microsoft or non-Microsoft.

Installing and configuring Windows PowerShell Web Access

The following are the steps to install and configure Windows PowerShell Web Access.

Step 1: Installing the Windows PowerShell Web Access Windows feature

In this step, we will install the Windows PowerShell Web Access Windows feature. For the purpose of this task, we will use the Install-WindowsFeature cmdlet:

```powershell
# Install the PSWA feature
Install-WindowsFeature WindowsPowerShellWebAccess -IncludeAllSubFeature -IncludeManagementTools
```

The following screenshot depicts the output of the cmdlet. Now we have the PowerShell Web Access feature installed; the next step is to configure it.

Step 2: Configuring the Windows PowerShell Web Access gateway

To configure the PSWA gateway, we use the Install-PswaWebApplication cmdlet, which will create an IIS web application that runs PowerShell Web Access and configures the SSL certificate. If you don't have an SSL certificate, you can use the -UseTestCertificate flag in order to generate and use a self-signed certificate:

```powershell
# Configure the PSWA gateway
Install-PswaWebApplication -WebSiteName "Default Web Site" -WebApplicationName "PSWA" -UseTestCertificate
```

Use -UseTestCertificate for testing purposes in your private lab only. Never use it in a production environment. In your production environments, use a certificate issued either by your corporate Certificate Authority (CA) or by a trusted certificate publisher.

To verify successful installation and configuration of the gateway, browse the PSWA URL https://<server_name>/PSWA as shown in the following screenshot.

The PSWA web application files are located at %windir%\Web\PowerShellWebAccess\wwwroot.

Step 3: Configuring PowerShell Web Access authorization rules

Now we have PSWA up and running; however, no one will be able to sign in and use it until we create the appropriate authorization rule. Because PSWA could be accessed from anywhere at any time, which increases the security risk, PowerShell restricts any access to your network until you create and assign the right access to the right person. The authorization rule is the access control for your PSWA; it adds an additional security layer to your PSWA, similar to the access lists on firewalls and network devices.

To create a new access authorization rule, we use the Add-PswaAuthorizationRule cmdlet along with the -UserName parameter to specify the name of the user who will get the access, the -ComputerName parameter to specify which computer the user will have access to, and the -ConfigurationName parameter to specify the session configuration available to this user:

```powershell
# Add a PSWA access authorization rule
Add-PswaAuthorizationRule -UserName PSWAAdministrator -ComputerName PSWA -ConfigurationName Microsoft.PowerShell
```

The PSWA authorization rules file is located at %windir%\Web\PowerShellWebAccess\data\AuthorizationRules.xml.

There are four different access authorization rule scenarios that we can enable on PowerShell Web Access.
These scenarios are:

- Enable single user access to a single computer: for this scenario, we use the -UserName parameter to specify the single user, and the -ComputerName parameter to specify the single computer
- Enable single user access to a group of computers: for this scenario, we use the -UserName parameter to specify the single user, and the -ComputerGroupName parameter to specify the name of the Active Directory computer group
- Enable a group of users access to a single computer: for this scenario, we use the -UserGroupName parameter to specify the name of the Active Directory users group, and the -ComputerName parameter to specify the individual computer
- Enable a group of users access to a group of computers: for this scenario, we use the -UserGroupName parameter to specify the name of the Active Directory users group, and the -ComputerGroupName parameter to specify the name of the Active Directory computer group

You can use the Get-PswaAuthorizationRule cmdlet to list all the configured access authorization rules, and the Remove-PswaAuthorizationRule cmdlet to remove them.

Sign in to PowerShell Web Access

Now, let's verify the installation and start using PSWA by signing in to it:

1. Open the Internet browser; you can choose whichever browser you like, bearing in mind the browser requirements mentioned earlier.
2. Enter https://<server_name>/PSWA.
3. Enter User Name, Password, Connection Type, and Computer Name.

Summary

In this article, we learned about one of the most powerful features of PowerShell, which is PowerShell remoting, including how to enable, prepare, and configure your environment to use it. Moreover, we demonstrated some examples of how to use different methods to utilize this remote capability. We learned how to run remote commands on remote computers by using a temporary or persistent connection. Finally, we closed the article with PowerShell Web Access, including how it works and how to configure it.

Resources for Article:

Further resources on this subject:

- Installing/upgrading PowerShell [article]
- DevOps Tools and Technologies [article]
- Bringing DevOps to Network Operations [article]


Say Hi to Tableau

Packt
21 Dec 2016
9 min read
In this article by Shweta Savale, the author of the book Tableau Cookbook - Recipes for Data Visualization, we will cover My Tableau Repository and how to connect to the sample data source.

Introduction to My Tableau Repository and connecting to the sample data source

Tableau is a very versatile tool, and it is used across various industries, businesses, and organizations, such as government and non-profit organizations, the BFSI sector, consulting, construction, education, healthcare, manufacturing, retail, FMCG, software and technology, telecommunications, and many more. The good thing about Tableau is that it is industry and business vertical agnostic; hence, as long as we have data, we can analyze and visualize it.

Tableau can connect to a wide variety of data sources, and many of these are implemented as native connections in Tableau. This ensures that the connections are as robust as possible. In order to view the comprehensive list of data sources that Tableau connects to, we can visit the technical specifications page on the Tableau website at the following link: http://www.tableau.com/products/desktop?qt-product_tableau_desktop=1#qt-product_tableau_desktops.

Getting ready

Tableau provides some sample datasets with the Desktop edition. In this article, we will frequently be using the sample datasets that have been provided by Tableau. We can find these datasets in the Datasources directory in the My Tableau Repository folder, which gets created in our Documents folder when Tableau Desktop is installed on our machine. We can look for these data sources in the repository, or we can quickly download them from the link below and save them in a new folder called Tableau Cookbook data under Documents/My Tableau Repository/Datasources. The link for downloading the sample datasets is as follows: https://1drv.ms/f/s!Av5QCoyLTBpngihFyZaH55JpI5BN

There are two files that have been uploaded. They are as follows:

- Microsoft Excel data called Sample - Superstore.xls
- Microsoft Access data called Sample - Coffee Chain.mdb

In the following section, we will see how to connect to the sample data source. We will be connecting to the Excel data called Sample - Superstore.xls. This Excel file contains transactional data for a retail store. There are three worksheets in this Excel workbook. The first sheet, which is called the Orders sheet, contains the transaction details; the Returns sheet contains the status of returned orders; and the People sheet contains the region names and the names of the managers associated with those regions. Refer to the following screenshot to get a glimpse of how the Excel data is structured.

Now that we have taken a look at the Excel data, let us see how to connect to it in the following recipe. To begin with, we will work on the Orders sheet of the Sample - Superstore.xls data. This worksheet contains the order details in terms of the products purchased, the name of the customer, sales, profits, discounts offered, day of purchase, and order shipment date, among many other transactional details.

How to do it…

1. Let's open Tableau Desktop by double-clicking on the Tableau 10.0 icon on our Desktop. We can also right-click on the icon and select Open.
2. We will see the start page of Tableau, as shown in the following screenshot.
3. We will select the Excel option under the Connect header on the left-hand side of the screen.
4. Once we do that, we will have to browse to the Excel file called Sample - Superstore.xls, which is saved in Documents/My Tableau Repository/Datasources/Tableau Cookbook data. Once we are able to establish a connection to the referred Excel file, we will get a view as shown in the following screenshot. Annotation 1 in the screenshot is the data that we have connected to, and annotation 2 is the list of worksheets/tables/views in our data.
5. Double-click on the Orders sheet, or drag and drop the Orders sheet from the left-hand side section into the blank space that says Drag sheets here. Refer to annotation 3 in the preceding screenshot.
6. Once we have selected the Orders sheet, we will get to see a preview of our data, as highlighted in annotation 1 in the following screenshot. We will see the column headers, their data types (#, Abc, and so on), and the individual rows of data. While connecting to a data source, we can also read data from multiple tables/sheets of that data source; however, this is something that we will explore a little later.
7. Moving ahead, we will need to specify what type of connection we wish to maintain with the data source. Do we wish to connect to our data directly and maintain Live connectivity with it, or do we wish to import the data into Tableau's data engine by creating an Extract? Refer to annotation 2 in the preceding screenshot. We will understand these options in detail in the next section; to begin with, we will select the Live option.
8. Next, in order to get to our Tableau workspace where we can start building our visualizations, we will click on the Go to Worksheet option/Sheet 1. Refer to annotation 3 in the preceding screenshot.

This is how we can connect to data in Tableau. In case we have a database to connect to, we can select the relevant data source from the list and fill in the necessary information in terms of server name, username, password, and so on. Refer to the following screenshot to see what options we get when connecting to Microsoft SQL Server.

How it works…

Before we connect to any data, we need to make sure that our data is clean and in the right format. The Excel file that we connected to was stored in a tabular format, where the first row of the sheet contains all the column headers and every subsequent row is a single transaction in the data. This is the ideal data structure for making the best use of Tableau.

Typically, when we connect to databases, we get columnar/tabular data. However, flat files such as Excel can have data even in cross-tab formats. Although Tableau can read cross-tab data, we may face certain limitations in terms of options for viewing, aggregating, and slicing and dicing our data in Tableau. Having said that, there may be situations where we have to deal with such cross-tab or pre-formatted Excel files. These files will essentially need cleaning up before we pull them into Tableau. Refer to the following article to understand more about how we can clean up these files and make them Tableau-ready: http://onlinehelp.tableau.com/current/pro/desktop/en-us/help.htm#data_tips.html

In case it is a cross-tab file, we will have to pivot it into normalized columns, either at the data level or on the fly at the Tableau level. We can do so by selecting the multiple columns that we wish to pivot and then selecting the Pivot option from the dropdown that appears when we hover over any of the columns.
Refer to the following screenshot. If the format of the data in our Excel file is not suitable for analysis in Tableau, we can turn on the Data Interpreter option, which becomes available when Tableau detects any unique formatting or extra information in our Excel file. For example, the Excel data may include some empty rows and columns, or extra headers and footers. Data Interpreter can remove that extra information to help prepare our Tableau data source for analysis. When we enable the Data Interpreter, the preceding view changes to what is shown in the following screenshot. This is how the Data Interpreter works in Tableau.

Many a time, there may also be situations where our data fields are compounded, or clubbed together, in a single column. Refer to the following screenshot: the highlighted column is a concatenated field that holds the Country, City, and State. For our analysis, we may want to break these up and analyze each geographic level separately. In order to do so, we simply need to use the Split or Custom Split… option in Tableau. Once we do that, our view will be as shown in the following screenshot.

When preparing data for analysis, at times a list of fields may be easier to consume than a preview of our data. The Metadata grid in Tableau allows us to work this way, along with many other quick functions such as renaming fields, hiding columns, changing data types, changing aliases, creating calculations, splitting fields, merging fields, and also pivoting the data. Refer to the following screenshot.

After having established the initial connectivity by pointing to the right data source, we need to specify how we wish to maintain that connectivity. We can choose between the Live option and the Extract option. The Live option helps us connect to our data directly and maintains a live connection with the data source. Using this option allows Tableau to leverage the capabilities of our data source; in this case, the speed of our data source will determine the performance of our analysis.

The Extract option, on the other hand, helps us import the entire data source into Tableau's fast data engine as an extract. This option creates a .tde file, which stands for Tableau Data Extract. In case we wish to extract only a subset of our data, we can select the Edit option, as highlighted in the following screenshot. The Add link in the right corner helps us add filters while fetching the data into Tableau.

A point to remember about an Extract is that it is a snapshot of our data stored in a Tableau proprietary format; as opposed to a Live connection, changes in the original data won't be reflected in our dashboard unless and until the extract is updated. Please note that we will have to decide between Live and Extract on a case-by-case basis. Please refer to the following whitepaper for more clarity: http://www.tableausoftware.com/learn/whitepapers/memory-or-live-data

Summary

This article helped us find My Tableau Repository and connect to the sample data sources, which is very helpful for creating effective dashboards for statistical purposes in a business environment.

Resources for Article:

Further resources on this subject:

- Getting Started with Tableau Public [article]
- Data Modelling Challenges [article]
- Creating your first heat map in R [article]


Finishing the Attack: Report and Withdraw

Packt
21 Dec 2016
11 min read
In this article by Michael McPhee and Jason Beltrame, the authors of the book Penetration Testing with Raspberry Pi - Second Edition, we will look at the final stage of the penetration testing kill chain: reporting and withdrawing. Some may argue the validity and importance of this step, since much of the hard-hitting effort and impact has already taken place. But without properly cleaning up and covering our tracks, we can leave little breadcrumbs which can notify others of where we have been and also what we have done.

This article covers the following topics:

- Covering our tracks
- Masking our network footprint

Covering our tracks

One of the key tasks in which penetration testers as well as criminals tend to fail is cleaning up after they breach a system. Forensic evidence can be anything from the digital network footprint (the IP address, type of network traffic seen on the wire, and so on) to the logs on a compromised endpoint. There is also evidence on the tools that were used, such as a Raspberry Pi used to do something malicious. An example is running more ~/.bash_history on a Raspberry Pi to see the entire history of the commands that were used.

The good news for Raspberry Pi hackers is that they don't have to worry about storage elements such as ROM, since the only storage to consider is the microSD card. This means attackers just need to re-flash the microSD card to erase evidence that the Raspberry Pi was used. Before doing that, let's work our way through the cleanup process, starting from the compromised system and ending with the final step of reimaging our Raspberry Pi.

Wiping logs

The first step we should perform to cover our tracks is cleaning any event logs from the compromised system that we accessed. For Windows systems, we can use a tool within Metasploit called Clearev that does this for us in an automated fashion. Clearev is designed to access a Windows system and wipe the logs. An overzealous administrator might notice the changes when we clean the logs. However, most administrators will never notice the changes. Also, since the logs are wiped, the worst that could happen is that an administrator might identify that their systems have been breached, but the logs containing our access information would have been removed.

Clearev comes with the Metasploit arsenal. To use Clearev once we have breached a Windows system with a Meterpreter, type:

```
meterpreter > clearev
```

There is no further configuration, which means Clearev just wipes the logs upon execution. The following screenshot shows what that will look like, and the one after it shows an example of the logs on a Windows system before they are wiped.

Another way to wipe logs from a compromised Windows system is by installing a Windows log cleaning program. There are many options available to download, such as ClearLogs, found at http://ntsecurity.nu/toolbox/clearlogs/. Programs such as these are simple to use: we can just install and run one on a target once we are finished with our penetration test. We can also delete the logs manually using the following command:

```
C:\> del %WINDIR%\*.log /a/s/q/f
```

This command selects all logs using /a, includes subfolders using /s, disables any queries using /q so we don't get prompted, and forces the deletion using /f. Whichever program you use, make sure to delete the executable file once the log files are removed so that the file isn't identified during a future forensic investigation.

For Linux systems, we need to get access to the /var/log folder to find the log files.
Once we have access to the log files, we can simply open them and remove all entries. The following screenshot shows an example of our Raspberry Pi's log folder. We can just delete the files using the remove command, rm, such as rm FILE.txt, or delete the entire folder; however, this wouldn't be as stealthy as wiping existing files clean of your footprint. Another option is in Bash: we can simply type > /path/to/file to empty the contents of a file without necessarily removing it. This approach has some stealth benefits.

Kali Linux does not have a GUI-based text editor, so one easy-to-use tool that we can install is gedit. We'll use apt-get install gedit to download it. Once installed, we can find gedit under the application dropdown, or just type gedit in the terminal window. As we can see from the following screenshot, it looks like many common text file editors. Let's click on File and select files from the /var/log folder to modify them.

We also need to erase the command history, since the Bash shell saves the last 500 commands. This forensic evidence can be accessed by typing the more ~/.bash_history command. The following screenshot shows the first of the hundreds of commands we recently ran on my Raspberry Pi.

To verify the number of stored commands in the history file, we can type the echo $HISTSIZE command. To erase this history, let's type export HISTSIZE=0. From this point, the shell will not store any command history; that is, if we press the up arrow key, it will not show the last command. These commands can also be placed in a .bashrc file on Linux hosts. The following screenshot shows that we have verified that our last 500 commands are stored, and also what happens after we erase them.

It is a best practice to set this command prior to using any commands on a compromised system, so that nothing is stored upfront. You could also log out and log back in once the export HISTSIZE=0 command is set to clear your history. You should do this on your C&C server as well once you conclude your penetration test, if you have any concerns about being investigated.

A more aggressive and quicker way to remove our history file on a Linux system is to shred it with the shred -zu /root/.bash_history command. This command overwrites the history file with zeros and then deletes the log file. We can verify this using the less /root/.bash_history command to see if there is anything left in the history file, as shown in the following screenshot.

Masking our network footprint

Anonymity is a key ingredient when performing our attacks, unless we don't mind someone being able to trace us back to our location and giving up our position. Because of this, we need a way to hide or mask where we are coming from. This is a perfect job for a proxy, or for a group of proxies if we really want to make sure we don't leave a trail of breadcrumbs. When using a proxy, the source of an attack will look as though it is coming from the proxy instead of the real source. Layering multiple proxies can help provide an onion effect, in which each layer hides the other and makes it very difficult to determine the real source during any forensic investigation.

Proxies come in various types and flavors. There are websites devoted to hiding our source online, and with a quick Google search we can see some of the most popular, like hide.me, Hidestar, NewIPNow, ProxySite, and even AnonyMouse. The following is a screenshot from the NewIPNow website.
Administrators of proxies can see all traffic, as well as identify both the target and the victims that communicate through their proxy. It is highly recommended that you research any proxy prior to using it, as some might use the information they capture without your permission. This includes providing forensic evidence to authorities or selling your sensitive information.

Using ProxyChains

If web-based proxies are not what we are looking for, we can use our Raspberry Pi as a proxy server utilizing the ProxyChains application. ProxyChains is a very easy application to set up and start using. First, we need to install the application. This can be accomplished by running the following command from the CLI:

```
root@kali:~# apt-get install proxychains
```

Once installed, we just need to edit the ProxyChains configuration located at /etc/proxychains.conf, and put in the proxy servers we would like to use:
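The entries at the end of that file follow a simple type host port format. As a point of reference, here is a minimal sketch of what the proxy list might look like; the addresses below are placeholders, with the Tor default of 127.0.0.1:9050 shown as a common first entry:

```
# /etc/proxychains.conf (excerpt -- example entries only)
[ProxyList]
socks4  127.0.0.1    9050
http    203.0.113.10 8080
```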
There are lots of options out there for finding public proxies. We should certainly use them with some caution, as some proxies will use our data without our permission, so we'll be sure to do our research prior to using one. Once we have one picked out and have updated our proxychains.conf file, we can test it out. To use ProxyChains, we just need to follow this syntax:

```
proxychains <command you want tunneled and proxied> <opt args>
```

Based on that syntax, to run an nmap scan, we would use the following command:

```
root@kali:~# proxychains nmap 192.168.245.0/24
ProxyChains-3.1 (http://proxychains.sf.net)
Starting Nmap 7.25BETA1 ( https://nmap.org )
```

Clearing the data off the Raspberry Pi

Now that we have covered our tracks on the network side as well as on the endpoint, all that is left is any equipment we have left behind, including our Raspberry Pi. To reset our Raspberry Pi back to factory defaults, we can refer back to the steps for installing Kali Linux, or for re-installing Kali or the NOOBS software. This will allow us to have a clean image running once again. If we had cloned a golden image, we could just re-image our Raspberry Pi with that image.

If we don't have the option to re-image or reinstall our Raspberry Pi, we do have the option to just destroy the hardware. The most important piece to destroy would be the microSD card (see the following image), as it contains everything that we have done on the Pi. But we may also want to consider destroying any of the interfaces that we may have used (USB WiFi, Ethernet, or Bluetooth adapters), as any of those physical MAC addresses may have been recorded on the target network and could therefore prove that the device was there. If we had used our onboard interfaces, we may even need to destroy the Raspberry Pi itself.

If the Raspberry Pi is in a location that we cannot get to, either to reclaim it or to destroy it, our only option is to remotely corrupt it so that we can remove any clues of our attack on the target. To do this, we can use the rm command within Kali. The rm command is used to remove files from the operating system. As a bonus, rm has some interesting flags that we can use to our advantage: the -r and -f flags. The -r flag performs the operation recursively, so everything in that directory and below will be removed, while the -f flag forces the deletion without asking. So, running the command rm -fr * from any directory will remove all contents within that directory and anything below it. Where this command gets interesting is if we run it from /, a.k.a. the top of the directory structure. Since the command removes everything in that directory and below, running it from the top level will remove all files and hence render the box unusable.

As any data forensics person will tell us, that data is still there, just not being used by the operating system. So we really need to overwrite that data. We can do this by using the dd command, which we used back when we were setting up the Raspberry Pi. We could simply use the following to get the job done:

```
dd if=/dev/urandom of=/dev/sda1    # where sda1 is your microSD card
```

In this command, we are basically writing random characters to the microSD card. Alternatively, we could always just reformat the whole microSD card using the mkfs.ext4 command:

```
mkfs.ext4 /dev/sda1    # where sda1 is your microSD card
```

That is all helpful, but what happens if we don't want to destroy the device until we absolutely need to, say, if we want the ability to send a remote destroy signal? Kali Linux now includes a LUKS Nuke patch with its install. LUKS allows for a unified key to get into the container and, when combined with Logical Volume Manager (LVM), can create an encrypted container that needs a password in order to start the boot process. With the Nuke option, if we specify the Nuke password on boot-up instead of the normal passphrase, all the keys on the system are deleted, rendering the data inaccessible. Here are some great links on how to do this, as well as some more details on how it works:

- https://www.kali.org/tutorials/nuke-kali-linux-luks/
- http://www.zdnet.com/article/developers-mull-adding-data-nuke-to-kali-linux/

Summary

In this article, we saw that reports themselves are what our customer sees as our product. It should come as no surprise that we should take great care to ensure they are well organized, informative, accurate and, most importantly, that they meet the customer's objectives.

Resources for Article:

Further resources on this subject:

- Penetration Testing [article]
- Wireless and Mobile Hacks [article]
- Building Your Application [article]


Using Elm and jQuery Together

Eduard Kyvenko
21 Dec 2016
6 min read
jQuery as a library has proven itself an invaluable asset in the development process by lifting the complexity of the DOM API off the web programmer's shoulders. For a very long time, jQuery plugins were one of the best ways of writing modular and reusable code. In this post, I will explain how to integrate an existing jQuery plugin into your Elm application. At first glance, this might seem like a bad idea, but bear with me; I will show you how you can benefit from this combination. The example relies on Create Elm App to simplify the setup for a minimal implementation of the jQuery plugin integration.

As Richard Feldman mentions in his talk from ReactiveConf 2016, you could, and actually should, take the opportunity to integrate existing JavaScript assets into your Elm application. As an example, I will use the Select2 library, which I have been using for a very long time. It is useful in numerous cases, as the native HTML5 implementation of the <select> element is quite limited.

The view

You are expected to have some experience with Elm and understand the application life cycle. The first essential rule to follow when establishing interoperation with any user interface components written in JavaScript: do not ever mutate the DOM tree that was produced by the Elm application. Introducing mutations to elements defined in the view function will most likely introduce a runtime error. As long as you follow this simple rule, you can enjoy the best parts of both worlds, having complex DOM interactions in jQuery and strict state management in Elm.

Hosting a DOM node for the root element of a jQuery plugin is safe, as long as you don't use the Navigation package for client-side routing. In that case, you will have to define additional logic for detaching the plugin and initializing it again when the route changes. I'll stick to the minimal example and just define a root node for the future Select2 container:

```elm
view : Model -> Html Msg
view model =
    div []
        [ text (toString model)
        , div [ id "select2-container" ] []
        ]
```

You should be familiar with the concept of JavaScript interop with Elm; we will use both an incoming and an outgoing port to establish the communication with Select2.

Modeling the problem

The JavaScript world gives you many more options for wiring the integration, but don't be distracted by the opportunities, because there is an easy solution already. The second rule of robust interop is keeping the state management in Elm. The application should own the data for initializing the plugin and control its behavior to make it predictable. That way, you will have an extra level of security, with almost no effort.

I have defined the options for the future Select2 UI element in my app using a Dictionary. This data structure is very convenient for rendering a <select> node with <option> elements inside:

```elm
options : Dict String String
options =
    Dict.fromList
        [ ( "US", "United States" )
        , ( "UK", "United Kingdom" )
        , ( "UY", "Uruguay" )
        , ( "UZ", "Uzbekistan" )
        ]
```

Upon initialization, the Elm app will send the data for the future options through a port:

```elm
port output : List ( String, String ) -> Cmd msg
```

Let's take a quick look at the initialization process, as you might have a slightly different setup; however, the core idea will always follow the same routine. Include all of the libraries and the application, and embed it into the DOM node:

```javascript
// Import jQuery and Select2.
var $ = require('jquery');
var select2 = require('select2');

// Import Elm application and initialize it.
var Elm = require('./Main.elm');
var root = document.getElementById('root');
var App = Elm.Main.embed(root);
```
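The JavaScript side will later send the selected value back through a port named input. The original post does not show the Elm side of that port, so here is a minimal sketch of what it might look like; the SelectChanged constructor and the subscriptions wiring are assumed names, not taken from the source:

```elm
-- Hypothetical sketch: receiving the selected value from JavaScript.
port input : (String -> msg) -> Sub msg

type Msg
    = SelectChanged String

subscriptions : Model -> Sub Msg
subscriptions model =
    input SelectChanged
```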
Once the embedding code has run, your app is ready to send data to the outgoing ports.

Port communication

You can use the initial state for sending data to JavaScript right after startup, which is probably the best time for initializing all user interface components:

```elm
init : ( Model, Cmd Msg )
init =
    ( Model Nothing, output (Dict.toList options) )
```

To retrieve this data, you have to subscribe to the port in JavaScript land and define some initialization logic for the jQuery plugin. When the output port receives the data for rendering the options, the application is ready for initialization of the plugin. The only implication is that we have to render the <select> element with jQuery or any other templating library. When all the required DOM nodes are rendered, the plugin can be initialized.

I will use the change event on the Select2 instance to notify the Elm application through the input port. To simplify the setup, we can trigger the change event right away, so that the state of the jQuery plugin instance and the Elm application are synchronized:

```javascript
App.ports.output.subscribe(function (options) {
  var $selectContainer = $('#select2-container');

  // Generate a DOM tree with <select> and <option> inside
  // and embed it into the root node.
  var select = $('<select>', {
    html: options.map(function (option) {
      return $('<option>', {
        value: option[0],
        text: option[1]
      });
    })
  }).appendTo($selectContainer);

  // Initialize Select2 when everything is ready.
  var select2 = $(select).select2();

  // Set up the change port subscription.
  select2.on('change', function (event) {
    App.ports.input.send(event.target.value);
  });

  // Trigger the change for the initial value.
  select2.trigger('change');
});
```

The JavaScript part is simplified as much as possible to demonstrate the main idea behind subscribing to an outgoing port and sending data through an incoming port. You can have a look at the running Elm application with Select2; the source code is available on GitHub in halfzebra/elm-examples.

Conclusion

Elm can get along with pretty much any JavaScript framework or library, as long as said framework or library does not mutate the DOM state of the Elm application. The Elm Architecture will help you make vanilla JavaScript components more reliable. Even though ports cannot transport abstract data types, only built-in primitives, which is slightly limiting, Elm has high interop potential with most of the existing JavaScript technology. If done right, you can get the extreme diversity of functionality from a huge JavaScript community under the control of the most reliable state machine on the frontend!

About the author

Eduard Kyvenko is a frontend lead at Dinero. He has been working with Elm for over half a year and has built a tax return and financial statements app for Dinero. You can find him on GitHub at @halfzebra.
Object-Oriented JavaScript

Packt
21 Dec 2016
9 min read
In this article by Ved Antani, author of the book Object-Oriented JavaScript - Third Edition, we will learn what you need to know about object-oriented JavaScript. In this article, we will cover the following topics:

ECMAScript 5
ECMAScript 6
Object-oriented programming

(For more resources related to this topic, see here.)

ECMAScript 5

One of the most important milestones in ECMAScript revisions was ECMAScript 5 (ES5), officially accepted in December 2009. The ECMAScript 5 standard is implemented and supported on all major browsers and server-side technologies. ES5 was a major revision because, apart from several important syntactic changes and additions to the standard libraries, it also introduced several new constructs in the language. For instance, ES5 introduced some new objects and properties, and also the so-called strict mode. Strict mode is a subset of the language that excludes deprecated features. It is opt-in rather than required, meaning that if you want your code to run in strict mode, you declare your intention (once per function, or once for the whole program) using the following string:

"use strict";

This is just a JavaScript string, and it's OK to have strings floating around unassigned to any variable. As a result, older browsers that don't speak ES5 will simply ignore it, so strict mode is backwards compatible and won't break older browsers. For backwards compatibility, all the examples in this book work in ES3, but at the same time, all the code in the book is written so that it will run without warnings in ES5's strict mode.

Strict mode in ES6

While strict mode is optional in ES5, all ES6 modules and classes are strict by default. As you will see soon, most of the code we write in ES6 resides in a module; hence, strict mode is enforced by default. However, it is important to understand that all other constructs do not have implicit strict mode enforced. There were efforts to make newer constructs, such as arrow and generator functions, also enforce strict mode, but it was later decided that doing so would result in very fragmented language rules and code.

ECMAScript 6

The ECMAScript 6 revision took a long time to finish and was finally accepted on 17th June, 2015. ES6 features are slowly becoming part of major browsers and server technologies. It is possible to use transpilers to compile ES6 to ES5 and use the code in environments that do not yet support ES6 completely. ES6 substantially upgrades JavaScript as a language and brings in very exciting syntactical changes and language constructs. Broadly, there are two kinds of fundamental changes in this revision of ECMAScript, which are as follows:

Improved syntax for existing features and additions to the standard library; for example, classes and promises
New language features; for example, generators

ES6 allows you to think differently about your code. New syntax changes let you write code that is cleaner, easier to maintain, and does not require special tricks. The language itself now supports several constructs that required third-party modules earlier. The language changes introduced in ES6 need a serious rethink of the way we have been coding in JavaScript.
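To make the "cleaner syntax" claim concrete, here is a small before-and-after sketch of my own (not from the book), showing a few of the improvements mentioned above: const, arrow functions, and template literals.

// ES5
var greet = function (name) {
  return 'Hello, ' + name + '!';
};

// ES6: the same function with an arrow function and a template literal
const greetEs6 = (name) => `Hello, ${name}!`;

console.log(greet('Bob'));    // Hello, Bob!
console.log(greetEs6('Bob')); // Hello, Bob!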
A note on the nomenclature: ECMAScript 6, ES6, and ECMAScript 2015 refer to the same standard, and the names are used interchangeably.

Browser support for ES6

The majority of browsers and server frameworks are on their way toward implementing ES6 features. You can check what is supported and what is not at http://kangax.github.io/compat-table/es6/. Though ES6 is not fully supported on all browsers and server frameworks, we can start using almost all of its features with the help of transpilers. Transpilers are source-to-source compilers. ES6 transpilers allow you to write code in ES6 syntax and compile/transform it into equivalent ES5 syntax, which can then be run on browsers that do not support the entire range of ES6 features. The de facto ES6 transpiler at the moment is Babel.

Object-oriented programming

Let's take a moment to review what people mean when they say object-oriented, and what the main features of this programming style are. Here's a list of some concepts that are most often used when talking about object-oriented programming (OOP):

Object, method, and property
Class
Encapsulation
Inheritance
Polymorphism

Let's take a closer look into each one of these concepts. If you're new to the object-oriented programming lingo, these concepts might sound too theoretical, and you might have trouble grasping or remembering them from one reading. Don't worry, it does take a few tries, and the subject can be a little dry at a conceptual level.

Objects

As the name object-oriented suggests, objects are important. An object is a representation of a thing (someone or something), and this representation is expressed with the help of a programming language. The thing can be anything: a real-life object, or a more convoluted concept. Taking a common object, a cat, for example, you can see that it has certain characteristics (color, name, weight, and so on) and can perform some actions (meow, sleep, hide, escape, and so on). The characteristics of the object are called properties in OOP-speak, and the actions are called methods.

Classes

In real life, similar objects can be grouped based on some criteria. A hummingbird and an eagle are both birds, so they can be classified as belonging to some made-up Birds class. In OOP, a class is a blueprint or a recipe for an object. Another name for object is instance, so we can say that the eagle is one concrete instance of the general Birds class. You can create different objects using the same class, because a class is just a template, while the objects are concrete instances based on the template. There's a difference between JavaScript and the classic OO languages such as C++ and Java. You should be aware right from the start that in JavaScript, there are no classes; everything is based on objects. JavaScript has the notion of prototypes, which are also objects. In a classic OO language, you'd say something like: create a new object for me called Bob, which is of class Person. In a prototypal OO language, you'd say: I'm going to take this object called Bob's dad that I have lying around (on the couch in front of the TV?) and reuse it as a prototype for a new object that I'll call Bob.
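Here is a small sketch of my own (not from the book) of what that prototypal phrasing looks like in code, using Object.create:

// An existing object, lying around
var bobsDad = {
  talk: function () {
    return 'Hello, my name is ' + this.name;
  }
};

// Reuse it as the prototype for a new object
var bob = Object.create(bobsDad);
bob.name = 'Bob';

console.log(bob.talk()); // "Hello, my name is Bob"; talk is found on the prototype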
Encapsulation

Encapsulation is another OOP-related concept, which illustrates the fact that an object contains (encapsulates) the following:

Data (stored in properties)
The means to do something with the data (using methods)

One other term that goes together with encapsulation is information hiding. This is a rather broad term and can mean different things, but let's see what people usually mean when they use it in the context of OOP. Imagine an object, say, an MP3 player. You, as the user of the object, are given some interface to work with, such as buttons, a display, and so on. You use the interface in order to get the object to do something useful for you, like play a song. How exactly the device works on the inside, you don't know and, most often, don't care. In other words, the implementation of the interface is hidden from you. The same thing happens in OOP when your code uses an object by calling its methods. It doesn't matter if you coded the object yourself or it came from some third-party library; your code doesn't need to know how the methods work internally. In compiled languages, you can't actually read the code that makes an object work. In JavaScript, because it's an interpreted language, you can see the source code, but the concept is still the same: you work with the object's interface without worrying about its implementation. Another aspect of information hiding is the visibility of methods and properties. In some languages, objects can have public, private, and protected methods and properties. This categorization defines the level of access the users of the object have. For example, only the methods of the same object have access to the private methods, while anyone has access to the public ones. In JavaScript, all methods and properties are public, but we'll see that there are ways to protect the data inside an object and achieve privacy.

Inheritance

Inheritance is an elegant way to reuse existing code. For example, you can have a generic object, Person, which has properties such as name and date_of_birth, and which also implements the walk, talk, sleep, and eat functionality. Then, you figure out that you need another object called Programmer. You could reimplement all the methods and properties that a Person object has, but it is smarter to just say that the Programmer object inherits from a Person object, and save yourself some work. The Programmer object only needs to implement more specific functionality, such as the writeCode method, while reusing all of the Person object's functionality. In classical OOP, classes inherit from other classes, but in JavaScript, as there are no classes, objects inherit from other objects. When an object inherits from another object, it usually adds new methods to the inherited ones, thus extending the old object. Often, the following phrases can be used interchangeably: B inherits from A, and B extends A. Also, the object that inherits can pick one or more methods and redefine them, customizing them for its own needs. This way, the interface stays the same and the method name is the same, but when called on the new object, the method behaves differently. This way of redefining how an inherited method works is known as overriding.

Polymorphism

In the preceding example, a Programmer object inherited all of the methods of the parent Person object. This means that both objects provide a talk method, among others. Now imagine that somewhere in your code, there's a variable called Bob, and it just so happens that you don't know if Bob is a Person object or a Programmer object. You can still call the talk method on the Bob object and the code will work. This ability to call the same method on different objects, and have each of them respond in their own way, is called polymorphism.
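A short sketch of my own (not from the book), tying inheritance, overriding, and polymorphism together with plain objects:

var person = {
  talk: function () {
    return 'Hi, I am a person.';
  }
};

// programmer inherits from person...
var programmer = Object.create(person);

// ...and overrides the inherited method: same name, different behavior
programmer.talk = function () {
  return 'Hi, I am a programmer.';
};

// Polymorphism: the same call works on both objects,
// and each responds in its own way.
[person, programmer].forEach(function (bob) {
  console.log(bob.talk());
});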
Summary

In this article, you learned how JavaScript came to be and where it is today. You were also introduced to ECMAScript 5 and ECMAScript 6, and we discussed some of the object-oriented programming concepts.

Resources for Article: Further resources on this subject: Diving into OOP Principles [article] Prototyping JavaScript [article] Developing Wiki Seek Widget Using Javascript [article]

Exploring a New Reality with the Oculus Rift

Packt
21 Dec 2016
7 min read
In this article, Jack Donovan, the author of the book Mastering Oculus Rift Development, explains virtual reality. What made you feel like you were truly immersed in a game world for the first time? Was it graphics that looked impressively realistic, ambient noise that perfectly captured the environment and mood, or the way the game's mechanics just started to feel like a natural reflex? Game developers constantly strive to replicate scenarios that are as real and as emotionally impactful as possible, and they've never been as close as they are now with the advent of virtual reality. Virtual reality has been a niche market since the early 1950s, often failing to evoke the meaningful sense of presence that the concept hinges on. That changed when the first Oculus Rift prototype was designed in 2010 by Oculus founder Palmer Luckey. The Oculus Rift proved that modern rendering and display technology was reaching a point where immersive virtual reality could be achieved, and that's when the new era of VR development began. Today, virtual reality development is as accessible as ever, comprehensively supported in the most popular off-the-shelf game development engines, such as Unreal Engine and Unity 3D. In this article, you'll learn all of the essentials that go into a complete virtual reality experience and master the techniques that will enable you to bring any idea you have into VR. This article will cover everything you need to know to get started with virtual reality, including the following points:

The concept of virtual reality
The importance of intent in VR design
Common limitations of VR games

(For more resources related to this topic, see here.)

The concept of virtual reality

Virtual reality has taken many forms and formats since its inception, but this article will be focused on modern virtual reality experienced with a Head-Mounted Display (HMD). HMDs like the Oculus Rift are typically treated like an extra screen attached to your computer (more on that later), but with some extra components that enable the headset to capture its own orientation (and position, in some cases). This essentially amounts to a screen that sits on your head and knows how it moves, so it can mirror your head movements in the VR experience and enable you to look around. The Oculus developer documentation illustrates how the HMD translates this rotational data into the game world.

Depth perception

Depth perception is another big principle of VR. Because the display of the HMD is always positioned right in front of the user's eyes, the rendered image is typically split into two images: one per eye, with each individual image rendered from the position of that eye. Normal 3D video games are rendered to a computer screen as a single image, created based on the position and direction of a virtual camera in the game world; VR scenes, by contrast, are rendered using a different virtual camera for each eye to create a stereoscopic depth effect.

Common limitations of VR games

While virtual reality provides the ability to immerse a player's senses like never before, it also creates some new, unique problems that must be addressed by responsible VR developers.

Locomotion sickness

Virtual reality headsets are meant to make you feel like you're somewhere else, and it only makes sense that you'd want to be able to explore that somewhere.
Unfortunately, common game mechanics like traditional joystick locomotion are problematic for VR. Our inner ears and muscular system are accustomed to sensing inertia while we move from place to place, so if you were to push a joystick forward to walk in virtual reality, your body would get confused when it sensed that you're still in a chair. Typically, when there's a mismatch between what we're seeing and what we're feeling, our bodies assume that a nefarious poison or illness is at work, and they prepare to rid the body of the culprit; that's the motion sickness you feel when reading in a car, standing on a boat, and, yes, moving in virtual reality. This doesn't mean that we have to prevent users from moving in VR; we just might want to be more clever about it (more on that later). The primary cause of nausea with traditional joystick movement in VR is acceleration; your brain gets confused when picking up speed or slowing down, but not so much when it's moving at a constant rate (think of being stationary in a car that's moving at a constant speed). Rotation is even more complicated, because rotating smoothly, even at a constant speed, causes nausea. Some developers get around this by using hard increments instead of gradual acceleration, such as rotating in 30-degree "snaps" once per second instead of rotating smoothly.

Lack of real-world vision

One of the potentially clumsiest aspects of virtual reality is getting your hands where they need to be without being able to see them. Whether you're using a gamepad, keyboard, or motion controller, you'll likely need to use your hands to interact with VR, and you can't see them with an HMD sitting over your eyes. It's good practice to centralize input around resting positions (that is, the buttons naturally closest to your thumbs on a gamepad, or the home row of a computer keyboard), but shy away from anything that requires complex, precise input, like writing sentences on a keyboard or hitting button combos on a controller. Some VR headsets, such as the HTC Vive, have a forward-facing camera (sometimes called a passthrough camera) that users can choose to view in VR, enabling basic interaction with the real world without taking the headset off. The Oculus Rift doesn't feature a built-in camera, but you could still display the feed from an external camera on any surface in virtual reality. Even before modern VR, developers were creating applications that overlay smart information over what a camera is seeing; that's called augmented reality (AR). Experiences that ride the line between camera output and virtual environments are called mixed reality (MR).

Unnatural head movements

You may not have thought about it before, but looking around in a traditional first-person shooter (FPS) is quite different from looking around using your head. The right analog stick is typically used to direct the camera and make adjustments as necessary, but in VR, players actually move their head instead of using their thumb to move their virtual head. Don't expect players in VR to be able to make the same snappy pivots and 180-degree turns on a dime that are trivial in a regular console game.

Summary

In this article, we approached the topic of virtual reality from a fundamental level. The HMD is the crux of modern VR simulation, and it uses motion tracking components, as well as peripherals like the Constellation tracking system, to create immersive experiences that transport the player into a virtual world.
Now that we've scratched the surface of the hardware, development techniques, and use cases of virtual reality, particularly the Oculus Rift, you're probably beginning to think about what you'd like to create in virtual reality yourself. Resources for Article: Further resources on this subject: Cardboard is Virtual Reality for Everyone [article] Virtually Everything for Everyone [article] Customizing the Player Character [article]

How to build a JavaScript Microservices platform

Andrea Falzetti
20 Dec 2016
6 min read
Microservices is one of the most popular topics in software development these days, as are JavaScript and chat bots. In this post, I share my experience in designing and implementing a platform using a microservices architecture. I will also talk about which tools were picked for this platform and how they work together. Let's start by giving you some context about the project.

The challenge

The startup I am working with had the idea of building a platform to help people with depression by tracking and managing the habits that keep them healthy, positive, and productive. The final product is an iOS app with a conversational UI, similar to a chat bot, and probably not very intelligent in version 1.0, but still with some feelings!

Technology stack

For this project, we decided to use Node.js for building the microservices, React Native for the mobile app, ReactJS for the admin dashboard, and ElasticSearch and Kibana for logging and monitoring the applications. And yes, we do like JavaScript!

Node.js microservices toolbox

There are many definitions of a microservice, but I am assuming we agree on a common statement that describes a microservice as an independent component that performs certain actions within your systems, or, in a nutshell, a part of your software that solves a specific problem that you have. I got interested in microservices this year, especially when I found out there was a Node.js toolkit called Seneca that helps you organize and scale your code. Unfortunately, my enthusiasm didn't last long, as I faced the first issue: the learning curve to approach Seneca was too high for this project. However, even though I ended up not using it, I wanted to include it here because many people are successfully using it, and I think you should be aware of it and at least consider looking at it. Instead, we decided to go a simpler way. We split our project into small Node applications and, using pm2, we deploy our microservices with a pm2 configuration file called ecosystem.json. As explained in the documentation, this is a good way of keeping your deployment simple, organized, and monitored.
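For reference, a minimal ecosystem.json might look roughly like the following sketch (the service names are my own, borrowed from the API paths mentioned later; the real file would list every microservice with its script path and any environment settings):

{
  "apps": [
    { "name": "frontend-api", "script": "./frontend-api/index.js" },
    { "name": "admin-backend", "script": "./admin-backend/index.js" },
    { "name": "chat-bot", "script": "./chat-bot/index.js" }
  ]
}

Running pm2 start ecosystem.json then brings the whole fleet up in one command.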
If you like control dashboards, graphs, and colored progress bars, you should look at pm2 Keymetrics; it offers a nice overview of all your processes. We also found it extremely useful to create a GitHub machine user, which essentially is a normal GitHub account with its own attached SSH key that grants access to the repositories containing the project's code. Additionally, we created a Node user on our virtual machine with that SSH key loaded in. All of the microservices run under this Node user, which has access to the code base through the machine user's SSH key. In this way, we can easily pull the latest code and deploy new versions. We finally attached our own SSH keys to the Node user, so each developer can log in as the Node user via SSH:

ssh node@<IP>
cd ./project/frontend-api
git pull

This works without being prompted for a password or an authorization token from GitHub. Then, using pm2, we restart the microservice:

pm2 restart frontend-api

Another very important element of our microservices architecture is the API gateway. After evaluating different options, including AWS API Gateway, HAProxy, Varnish, and Tyk, we decided to use Kong, an open source API layer that offers modularity via plugins (everybody loves plugins!) and scalability features. We picked Kong because it supports WebSockets as well as HTTP; unfortunately, not all of the alternatives did. Using Kong in combination with nginx, we mapped our microservices under a single domain name, http://api.example.com, and exposed each microservice under a specific path:

api.example.com/chat-bot
api.example.com/admin-backend
api.example.com/frontend-api
…

This allows us to run the microservices on separate servers on different ports, while having one single gateway for the clients consuming our APIs. Finally, the API gateway is responsible for allowing only authenticated requests to pass through, so this is a great way of protecting the microservices, because the gateway is the only public component, and all of the microservices run in a private network.

What does a microservice look like, and how do microservices talk to each other?

We started by creating a microservice-boilerplate package that includes Express to expose some APIs, Passport to allow only authorized clients to use them, winston for logging their activities, and mocha and chai for testing the microservice. We then created an index.js file that initializes the Express app with a default route, /api/ping. This returns a simple JSON containing the message 'pong', which we use to know whether the service is down. One alternative is to get the status of the process using pm2:

pm2 list
pm2 status <microservice-pid>

Whenever we want to create a new microservice, we start from this boilerplate code. It saves a lot of time, especially if the microservices have a similar shape. The main communication channel between microservices is HTTP via API calls. We are also using web sockets to allow faster communication between some parts of the platform. We decided to use socket.io, a very simple and efficient way of implementing web sockets. I recommend creating a Node package that contains the business logic, including the objects, models, prototypes, and common functions, such as read and write methods for the database. Using this approach allows you to include the package in each microservice, with the benefit of having just one place to update if something needs to change.

Conclusions

In this post, I covered the tools used for building a microservice architecture in Node.js. I hope you have found this useful.

About the author

Andrea Falzetti is an enthusiastic full stack developer based in London. He has been designing and developing web applications for over 5 years. He is currently focused on Node.js, React, microservices architecture, serverless, conversational UI, chat bots, and machine learning. He is currently working at Activate Media, where his role is to estimate, design, and lead the development of web and mobile platforms.

Clean Up Your Code

Packt
19 Dec 2016
23 min read
In this article by Michele Bertoli, the author of the book React Design Patterns and Best Practices, we will learn that to use JSX without any problems or unexpected behaviors, it is important to understand how it works under the hood and the reasons why it is a useful tool for building UIs. Our goal is to write clean and maintainable JSX code, and to achieve that, we have to know where it comes from, how it gets translated to JavaScript, and which features it provides. In the first section, we will take a small step back, but please bear with me, because it is crucial to master the basics in order to apply the best practices. In this article, we will see:

What JSX is and why we should use it
What Babel is and how we can use it to write modern JavaScript code
The main features of JSX and the differences between HTML and JSX
The best practices to write JSX in an elegant and maintainable way

(For more resources related to this topic, see here.)

JSX

Let's see how we can declare our elements inside our components. React gives us two ways to define our elements: the first one is by using JavaScript functions, and the second one is by using JSX, an optional XML-like syntax. In the beginning, JSX is one of the main reasons why people fail to approach React, because looking at the examples on the homepage and seeing JavaScript mixed with HTML for the first time does not seem right to most of us. As soon as we get used to it, we realize that it is very convenient, exactly because it is similar to HTML and looks very familiar to anyone who has already created user interfaces on the web. The opening and closing tags make it easier to represent nested trees of elements, something that would have been unreadable and hard to maintain using plain JavaScript.

Babel

In order to use JSX (and es2015) in our code, we have to install Babel. First of all, it is important to understand clearly the problems it can solve for us and why we need to add a step to our process. The reason is that we want to use features of the language that have not been implemented yet in the browser, our target environment. Those advanced features make our code cleaner for the developers, but the browser cannot understand and execute it. So the solution is to write our scripts in JSX and es2015, and when we are ready to ship, we compile the sources into es5, the standard specification that is implemented in the major browsers today. Babel is a popular JavaScript compiler widely adopted within the React community: it can compile es2015 code into es5 JavaScript, as well as compile JSX into JavaScript functions. The process is called transpilation, because it compiles the source into a new source rather than into an executable. Using it is pretty straightforward; we just install it:

npm install --global babel-cli

If you do not like to install it globally (developers usually tend to avoid that), you can install Babel locally in a project and run it through an npm script, but for the purpose of this article a global instance is fine.
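For completeness, the local setup might look roughly like this (my own sketch, not from the book): install babel-cli as a dev dependency and expose the compile command through the scripts section of package.json.

npm install --save-dev babel-cli

{
  "scripts": {
    "build": "babel source.js -o output.js"
  }
}

You would then run it with npm run build, which resolves the locally installed babel binary.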
When the installation is complete, we can run the following command to compile our JavaScript files:

babel source.js -o output.js

One of the reasons why Babel is so powerful is that it is highly configurable. Babel is just a tool to transpile a source file into an output file, but to apply some transformations, we need to configure it. Luckily, there are some very useful presets of configurations, which we can easily install and use:

npm install --global babel-preset-es2015 babel-preset-react

Once the installation is done, we create a configuration file called .babelrc and put the following lines into it to tell Babel to use those presets:

{
  "presets": [
    "es2015",
    "react"
  ]
}

From this point on, we can write es2015 and JSX in our source files and execute the output files in the browser.

Hello, World!

Now that our environment has been set up to support JSX, we can dive into the most basic example: generating a div element. This is how you would create a div with React's createElement function:

React.createElement('div')

React has some shortcut methods for DOM elements, and the following line is equivalent to the one above:

React.DOM.div()

This is the JSX for creating a div element:

<div />

It looks identical to the way we have always created the markup of our HTML pages. The big difference is that we are writing the markup inside a .js file, but it is important to notice that JSX is only syntactic sugar, and it gets transpiled into JavaScript before being executed in the browser. In fact, our <div /> is translated into React.createElement('div') when we run Babel, and that is something we should always keep in mind when we write our templates.

DOM elements and React components

With JSX, we can obviously create both HTML elements and React components; the only difference is whether or not they start with a capital letter. For example, to render an HTML button we use <button />, while to render our Button component we use <Button />. The first button gets transpiled into:

React.createElement('button')

While the second one into:

React.createElement(Button)

The difference here is that in the first call we are passing the type of the DOM element as a string, while in the second one we are passing the component itself, which means that it should exist in the scope to work. As you may have noticed, JSX supports self-closing tags, which are pretty good for keeping the code terse and do not require us to repeat unnecessary tags.

Props

JSX is very convenient when your DOM elements or React components have props; in fact, following XML, it is pretty easy to set attributes on elements:

<img src="https://facebook.github.io/react/img/logo.svg" alt="React.js" />

The equivalent in JavaScript would be:

React.createElement("img", {
  src: "https://facebook.github.io/react/img/logo.svg",
  alt: "React.js"
});

This is way less readable, and even with only a couple of attributes it starts getting hard to read without a bit of reasoning.

Children

JSX allows you to define children to describe the tree of elements and compose complex UIs. A basic example could be a link with a text inside it:

<a href="https://facebook.github.io/react/">Click me!</a>

Which would be transpiled into:

React.createElement(
  "a",
  { href: "https://facebook.github.io/react/" },
  "Click me!"
);

Our link can be enclosed inside a div for some layout requirements, and the JSX snippet to achieve that is the following:

<div>
  <a href="https://facebook.github.io/react/">Click me!</a>
</div>

With the JSX equivalent being:

React.createElement(
  "div",
  null,
  React.createElement(
    "a",
    { href: "https://facebook.github.io/react/" },
    "Click me!"
  )
);

It now becomes clear how the XML-like syntax of JSX makes everything more readable and maintainable, but it is always important to know the JavaScript parallel of our JSX to take control over the creation of elements.
The good part is that we are not limited to having elements as children of elements; we can use JavaScript expressions, like functions or variables. For doing that, we just have to put the expression inside curly braces:

<div>
  Hello, {variable}.
  I'm a {function()}.
</div>

The same applies to non-string attributes:

<a href={this.makeHref()}>Click me!</a>

Differences with HTML

So far, we have seen how JSX is similar to HTML; let's now see the little differences between them and the reasons why they exist.

Attributes

We always have to keep in mind that JSX is not a standard language and it gets transpiled into JavaScript, and because of that, some attributes cannot be used. For example, instead of class we have to use className, and instead of for we have to use htmlFor:

<label className="awesome-label" htmlFor="name" />

The reason is that class and for are reserved words in JavaScript.

Style

A pretty significant difference is the way the style attribute works. The style attribute does not accept a CSS string as the HTML parallel does; it expects a JS object where the style names are camelCased:

<div style={{ backgroundColor: 'red' }} />

Root

One important difference with HTML worth mentioning is that, since JSX elements get translated into JavaScript functions, and you cannot return two functions in JavaScript, whenever you have multiple elements at the same level you are forced to wrap them in a parent. Let's see a simple example:

<div />
<div />

Gives us the following error:

Adjacent JSX elements must be wrapped in an enclosing tag

While this works:

<div>
  <div />
  <div />
</div>

It is pretty annoying having to add unnecessary div tags just for making JSX work, but the React developers are trying to find a solution: https://github.com/reactjs/core-notes/blob/master/2016-07/july-07.md

Spaces

There's one thing that could be a little bit tricky in the beginning, and again it regards the fact that we should always keep in mind that JSX is not HTML, even if it has an XML-like syntax. JSX, in fact, handles the spaces between text and elements differently from HTML, in a way that's counter-intuitive. Consider the following snippet:

<div>
  <span>foo</span>
  bar
  <span>baz</span>
</div>

In the browser, which interprets HTML, this code would give you foo bar baz, which is exactly what we expect it to be. In JSX, instead, the same code would be rendered as foobarbaz, and that is because the three nested lines get transpiled as individual children of the div element, without taking the spaces into account. A common solution is to put a space explicitly between the elements:

<div>
  <span>foo</span>
  {' '}
  bar
  {' '}
  <span>baz</span>
</div>

As you may have noticed, we are using a string containing a single space, wrapped inside a JavaScript expression, to force the compiler to apply a space between the elements.

Boolean attributes

A couple more things worth mentioning, before starting for real, regard the way you define Boolean attributes in JSX. If you set an attribute without a value, JSX assumes that its value is true, following the same behavior as the HTML disabled attribute, for example. That means that if we want to set an attribute to false, we have to declare it explicitly as false:

<button disabled />
React.createElement("button", { disabled: true });

And:

<button disabled={false} />
React.createElement("button", { disabled: false });

This can be confusing in the beginning, because we may think that omitting an attribute would mean false, but it is not like that: with React we should always be explicit to avoid confusion.
Spread attributes

An important feature is the spread attributes operator, which comes from the Rest/Spread Properties for ECMAScript proposal, and is very convenient whenever we want to pass all the attributes of a JavaScript object to an element. A common practice that leads to fewer bugs is not to pass entire JavaScript objects down to children by reference, but to use their primitive values, which can be easily validated, making components more robust and error proof. Let's see how it works:

const foo = { bar: 'baz' }
return <div {...foo} />

That gets transpiled into this:

var foo = { bar: 'baz' };
return React.createElement('div', foo);

JavaScript templating

Last but not least, we started from the point that one of the advantages of moving the templates inside our components, instead of using an external template library, is that we can use the full power of JavaScript, so let's start looking at what that means. The spread attributes operator is obviously an example of that, and another common one is that JavaScript expressions can be used as attribute values by wrapping them in curly braces:

<button disabled={errors.length} />

Now that we know how JSX works and we have mastered it, we are ready to see how to use it in the right way, following some useful conventions and techniques.

Common patterns

Multi-line

Let's start with a very simple one: as we said, one of the main reasons why we should prefer JSX over React's createElement is its XML-like syntax, and the way balanced opening and closing tags are perfect to represent a tree of nodes. Therefore, we should try to use it in the right way and get the most out of it. One example is that, whenever we have nested elements, we should always go multi-line:

<div>
  <Header />
  <div>
    <Main content={...} />
  </div>
</div>

Instead of:

<div><Header /><div><Main content={...} /></div></div>

The exception is if the children are not elements, such as text or variables. In that case, it can make sense to remain on the same line and avoid adding noise to the markup, like:

<div>
  <Alert>{message}</Alert>
  <Button>Close</Button>
</div>

Always remember to wrap your elements inside parentheses when you write them on multiple lines. In fact, JSX always gets replaced by functions, and functions written on a new line can give you an unexpected result. Suppose, for example, that you are returning JSX from your render method, which is how you create UIs in React. The following example works fine, because the div is on the same line as the return:

return <div />

While this is not right:

return
  <div />

Because you would have:

return;
React.createElement("div", null);

That is why you have to wrap the statement in parentheses:

return (
  <div />
)

Multi-properties

A common problem in writing JSX comes when an element has multiple attributes. One solution would be to write all the attributes on the same line, but this would lead to very long lines, which we do not want in our code (see the next section for how to enforce coding style guides). A common solution is to write each attribute on a new line, with one level of indentation, and then put the closing bracket aligned with the opening tag:

<button
  foo="bar"
  veryLongPropertyName="baz"
  onSomething={this.handleSomething}
/>

Conditionals

Things get more interesting when we start working with conditionals, for example, if we want to render some components only when certain conditions are matched.
The fact that we can use JavaScript is obviously a plus, but there are many different ways to express conditions in JSX, and it is important to understand the benefits and the problems of each one of them in order to write code that is readable and maintainable at the same time. Suppose we want to show a logout button only if the user is currently logged in to our application. A simple snippet to start with is the following:

let button
if (isLoggedIn) {
  button = <LogoutButton />
}
return <div>{button}</div>

It works, but it is not very readable, especially if there are multiple components and multiple conditions. What we can do in JSX is use an inline condition:

<div>
  {isLoggedIn && <LogoutButton />}
</div>

This works because if the condition is false, nothing gets rendered, but if the condition is true, the createElement function of the LogoutButton gets called, and the element is returned to compose the resulting tree. If the condition has an alternative (the classic if...else statement) and we want, for example, to show a logout button if the user is logged in and a login button otherwise, we can use JavaScript's if...else:

let button
if (isLoggedIn) {
  button = <LogoutButton />
} else {
  button = <LoginButton />
}
return <div>{button}</div>

Alternatively, and better, we can use a ternary condition, which makes our code more compact:

<div>
  {isLoggedIn ? <LogoutButton /> : <LoginButton />}
</div>

You can find the ternary condition used in popular repositories, like the Redux real world example (https://github.com/reactjs/redux/blob/master/examples/real-world/src/components/List.js), where the ternary is used to show a loading label if the component is fetching the data, or "Load More" inside a button, according to the value of the isFetching variable:

<button [...]>
  {isFetching ? 'Loading...' : 'Load More'}
</button>

Let's now see the best solution for when things get more complicated and, for example, we have to check more than one variable to determine whether to render a component or not:

<div>
  {dataIsReady && (isAdmin || userHasPermissions) && <SecretData />}
</div>

In this case, it is clear that using the inline condition is a good solution, but the readability is strongly impacted, so what we can do instead is create a helper function inside our component and use it in JSX to verify the condition:

canShowSecretData() {
  const { dataIsReady, isAdmin, userHasPermissions } = this.props
  return dataIsReady && (isAdmin || userHasPermissions)
}

<div>
  {this.canShowSecretData() && <SecretData />}
</div>

As you can see, this change makes the code more readable and the condition more explicit. Looking at this code in six months' time, you will still find it clear just by reading the name of the function. If we do not like using functions, we can use object getters, which make the code more elegant. For example, instead of declaring a function, we define a getter:

get canShowSecretData() {
  const { dataIsReady, isAdmin, userHasPermissions } = this.props
  return dataIsReady && (isAdmin || userHasPermissions)
}

<div>
  {this.canShowSecretData && <SecretData />}
</div>

The same applies to computed properties: suppose you have two single properties for currency and value. Instead of creating the price string inside your render method, you can create a class function for that:

getPrice() {
  return `${this.props.currency}${this.props.value}`
}

<div>{this.getPrice()}</div>

This is better because it is isolated, and you can easily test it in case it contains logic.
Alternatively, going a step further and, as we have just seen, using getters:

get price() {
  return `${this.props.currency}${this.props.value}`
}

<div>{this.price}</div>

Going back to conditional statements, there are other solutions that require using external dependencies. A good practice is to avoid external dependencies as much as we can, to keep our bundle smaller, but it may be worth it in this particular case, because improving the readability of our templates is a big win. The first solution is renderIf, which we can install with:

npm install --save render-if

And easily use in our projects like this:

const { dataIsReady, isAdmin, userHasPermissions } = this.props
const canShowSecretData = renderIf(
  dataIsReady && (isAdmin || userHasPermissions)
)

<div>
  {canShowSecretData(<SecretData />)}
</div>

We wrap our conditions inside the renderIf function. The utility function that gets returned can be used as a function that receives the JSX markup to be shown when the condition is true. One goal that we should always keep in mind is never to add too much logic inside our components. Some of them will obviously require a bit of it, but we should try to keep them as simple and dumb as possible, so that we can spot and fix errors easily. At the very least, we should try to keep the renderIf method as clean as possible, and to do that we could use another utility library called React Only If, which lets us write our components as if the condition were always true, by setting the conditional function using a higher-order component. To use the library, we just need to install it:

npm install --save react-only-if

Once it is installed, we can use it in our apps in the following way:

const SecretDataOnlyIf = onlyIf(
  SecretData,
  ({ dataIsReady, isAdmin, userHasPermissions }) => {
    return dataIsReady && (isAdmin || userHasPermissions)
  }
)

<div>
  <SecretDataOnlyIf
    dataIsReady={...}
    isAdmin={...}
    userHasPermissions={...}
  />
</div>

As you can see here, there is no logic at all inside the component itself. We pass the condition as the second parameter of the onlyIf function; when the condition is matched, the component gets rendered. The function that is used to validate the condition receives the props, the state, and the context of the component. In this way, we avoid polluting our component with conditionals, so that it is easier to understand and reason about.

Loops

A very common operation in UI development is displaying lists of items. When it comes to showing lists, we realize that using JavaScript as a template language is a very good idea. If we write a function that returns an array inside our JSX template, each element of the array gets compiled into an element. As we have seen before, we can use any JavaScript expression inside curly braces, and the most obvious way to generate an array of elements, given an array of objects, is using map. Let's dive into a real-world example. Suppose you have a list of users, each one with a name property attached to it. To create an unordered list showing the users, you can do:

<ul>
  {users.map(user => <li>{user.name}</li>)}
</ul>

This snippet is incredibly simple and incredibly powerful at the same time, where the powers of HTML and JavaScript converge.

Control statements

Conditionals and loops are very common operations in UI templates, and you may feel it is wrong to use the JavaScript ternary or the map function to perform them.
JSX has been built in a way that it only abstracts the creation of the elements, leaving the logic parts to real JavaScript, which is great, but sometimes the code can become less clear. In general, we aim to remove all the logic from our components, and especially from our render methods, but sometimes we have to show and hide elements according to the state of the application, and very often we have to loop through collections and arrays. If you feel that using JSX for that kind of operation would make your code more readable, there is a Babel plugin for that: jsx-control-statements. It follows the same philosophy as JSX, and it does not add any real functionality to the language; it is just syntactic sugar that gets compiled into JavaScript. Let's see how it works. First of all, we have to install it:

npm install --save jsx-control-statements

Once it is installed, we have to add it to the list of our Babel plugins in our .babelrc file:

"plugins": ["jsx-control-statements"]

From now on, we can use the syntax provided by the plugin, and Babel will transpile it together with the common JSX syntax. A conditional statement written using the plugin looks like the following snippet:

<If condition={this.canShowSecretData}>
  <SecretData />
</If>

Which gets transpiled into a ternary expression:

{canShowSecretData ? <SecretData /> : null}

The If component is great, but if for some reason you have nested conditions in your render method, it can easily become messy and hard to follow. Here is where the Choose component comes to help:

<Choose>
  <When condition={...}>
    <span>if</span>
  </When>
  <When condition={...}>
    <span>else if</span>
  </When>
  <Otherwise>
    <span>else</span>
  </Otherwise>
</Choose>

Please notice that the code above gets transpiled into multiple ternaries. Last but not least, there is a "component" (always remember that we are not talking about real components, but just syntactic sugar) to manage loops, which is very convenient as well:

<ul>
  <For each="user" of={this.props.users}>
    <li>{user.name}</li>
  </For>
</ul>

The code above gets transpiled into a map function; no magic there. If you are used to using linters, you might wonder why the linter is not complaining about that code. In fact, the user variable doesn't exist before the transpilation, nor is it wrapped in a function. To avoid those linting errors, there's another plugin to install: eslint-plugin-jsx-control-statements. If you did not understand the previous sentence, don't worry: in the next section we will talk about linting.

Sub-render

It is worth stressing that we always want to keep our components very small and our render methods very clean and simple. However, that is not an easy goal, especially when you are creating an application iteratively, and in the first iteration you are not sure exactly how to split the components into smaller ones. So, what should we be doing when the render method becomes too big to keep maintainable? One solution is to split it into smaller functions in a way that lets us keep all the logic in the same component. Let's see an example:

renderUserMenu() {
  // JSX for user menu
}

renderAdminMenu() {
  // JSX for admin menu
}

render() {
  return (
    <div>
      <h1>Welcome back!</h1>
      {this.userExists && this.renderUserMenu()}
      {this.userIsAdmin && this.renderAdminMenu()}
    </div>
  )
}

This is not always considered a best practice, because it seems more obvious to split the component into smaller ones, but sometimes it helps just to keep the render method cleaner.
For example, in the Redux real world examples, a sub-render method is used to render the load more button. Now that we are JSX power users, it is time to move on and see how to follow a style guide within our code to make it consistent.

Summary

In this article, we deeply understood how JSX works and how to use it in the right way in our components. We started from the basics of the syntax to create solid knowledge that will let us master JSX and its features.

Resources for Article: Further resources on this subject: Getting Started with React and Bootstrap [article] Create Your First React Element [article] Getting Started [article]
Random Value Generators in Elm

Eduard Kyvenko
19 Dec 2016
5 min read
The purely functional nature of Elm has certain implications for generating random values. On the other hand, it opens up a completely new dimension for producing values of any desired shape, which is extremely useful in some cases. This article covers the core concepts of working with the Random module. JavaScript offers Math.random as a way of producing random numbers; unlike a traditional pseudorandom number generator (PRNG), it does not expect a seed. Even though Elm is compiled to JavaScript, it does not rely on the native implementation for random number generation. It gives you more control by offering an API for both producing random values without explicitly specifying a seed, and for specifying the seed explicitly and preserving its state. Both ways have tradeoffs and should be used in different situations.
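For readers coming from JavaScript, here is what an explicitly seeded PRNG looks like in plain JavaScript (my own sketch, using the well-known mulberry32 algorithm; it is not part of Elm or this example app). The point is that the same seed always yields the same sequence, which is exactly the property Elm exposes through seeds:

// mulberry32: a tiny 32-bit seeded PRNG
function mulberry32(seed) {
  return function () {
    seed = (seed + 0x6D2B79F5) | 0;
    var t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

var rand = mulberry32(42);
rand(); // always the same first value for seed 42
rand(); // always the same second value, and so on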
Random values without a seed

Before digging deeper, I recommend that you look into the official Elm Guide Effects / Random, where you will find the most basic example of Random.generate. It is the easiest way to put your hands on random values. There are some significant tradeoffs you should be aware of. It relies on Time.now behind the scenes, which means you cannot guarantee efficient randomness if you run this command multiple times consecutively. In other words, there is a risk of getting the same value from running Random.generate multiple times within a short period of time. A good use case for this kind of command is generating the seed for future, more efficient and secure random values. I have written a little seed generator, which can be used for providing a seed for future Random.step calls:

seedGenerator : Generator Seed
seedGenerator =
    Random.int Random.minInt Random.maxInt
        |> Random.map (Random.initialSeed)

The current time serves as a seed for Random.generate, and as you might know already, retrieving the current time from JavaScript is a side effect. Every value will arrive with a message. I will go ahead and define it; the generator will return a value of the Seed type:

type Msg
    = Update Seed

init =
    ( { seed = Nothing }
      -- Initial command to create independent Seed.
    , Random.generate Update seedGenerator
    )

Storing the seed as a Maybe value makes a lot of sense, since it is not present in the model at the very beginning. The initial application state will execute the generator and produce a message with a new seed, which will be accessible inside the update function:

update msg model =
    case msg of
        Update seed ->
            -- Save newly created Seed into state.
            ( { model | seed = Just seed }, Cmd.none )

This concludes the initial setup for using random value generators with a seed. As I have mentioned already, Random.generate is not a statistically reliable source of random values; therefore, you should avoid relying on it too much in situations when you need multiple random values at a time.

Random values with a seed

Using Random.step might be a little hard at the start. The type annotation for this function suggests that you will get a tuple with your newly generated value and the next seed state for future steps:

Generator a -> Seed -> (a, Seed)

This example application will put every new random value on a stack and display it in the DOM. I will extend the model with an additional key for saving random integers:

type alias Model =
    { seed : Maybe Seed
    , stack : List Int
    }

In the new handler for putting random values on the stack, I rely heavily on Maybe.map. It is very convenient when you want to make an impossible state impossible. In this case, I don't want to generate any new values if the seed is missing for some reason:

update msg model =
    case msg of
        Update seed ->
            -- Preserve newly initialized Seed state.
            ( { model | seed = Just seed }, Cmd.none )

        PutRandomNumber ->
            let
                {- If the seed was present, the new model will contain
                   the new value and a new state for the seed.
                -}
                newModel : Model
                newModel =
                    model.seed
                        |> Maybe.map (Random.step (Random.int 0 10))
                        |> Maybe.map
                            (\( number, seed ) ->
                                { model
                                    | seed = Just seed
                                    , stack = number :: model.stack
                                }
                            )
                        |> Maybe.withDefault model
            in
                ( newModel
                , Cmd.none
                )

In short, the new branch will generate a random integer and a new seed, and update the model with those new values if the seed was present. This concludes the basic example of Random.step usage, but there's a lot more to learn.

Generators

You can get pretty far with Generator and define something more complex than just an integer. Let's define a generator for producing random stats for calculating BMI:

type alias Model =
    { seed : Maybe Seed
    , stack : List BMI
    }

type alias BMI =
    { weight : Float
    , height : Float
    , bmi : Float
    }

valueGenerator : Generator BMI
valueGenerator =
    Random.map2 (\w h -> BMI w h (w / (h * h)))
        (Random.float 60 150)
        (Random.float 0.6 1.2)

Random.map allows using values from a passed generator and applying a function to the results, which is very convenient for making simple calculations, such as BMI. You can raise the bar with Random.andThen and produce generators based on random values. This is super useful for making combinations without repeats. Check the source of this example application on GitHub: elm-examples/random-initial-seed

Conclusion

Elm offers a powerful abstraction for the declarative definition of random value generators. Building values of any complex shape becomes quite simple by combining the power of Random.map. However, it might be a little overwhelming after JavaScript or any other imperative language. Give it a chance; maybe you will need a reliable generator for custom values in your next project!

About the author

Eduard Kyvenko is a frontend lead at Dinero. He has been working with Elm for over half a year and has built a tax return and financial statements app for Dinero. You can find him on GitHub at @halfzebra.

Creating a Simple Level Select Screen

Gareth Fouche
16 Dec 2016
6 min read
For many types of games, whether multiplayer FPS or 2D platformer, it is desirable to present the player with a list of levels they can play.

Sonic the Hedgehog © Sega

This tutorial will guide you through creating a simple level select screen in Unity. As a first step, we need to create some simple test levels for the player to select from. From the menu, select File | New Scene to create a new scene with a Main Camera and a Directional Light. Then, from the Hierarchy view, select Create | 3D Object | Plane to place a ground plane. Select Create | 3D Object | Cube to place a cube in the scene. Copy and paste that cube a few times, arranging the cubes on the plane and positioning the camera until you have a basic level layout. Save this scene as "CubeWorld", our first level. Create another scene and repeat the above process, but instead of cubes, place spheres. Save this scene as "SphereWorld", our second game level. We will need preview images of each level for our level select screen. Take a screenshot of each scene, open any image editor, paste your image, and resize/crop it until it is 400 x 180 pixels. Do this for both levels, save them as "CubeWorld.jpg" and "SphereWorld.jpg", and then pull those images into your project. In the import settings, make sure to set the Texture Type for the images to Sprite (2D and UI). Now that we have the game levels, it's time to create the level select scene. As before, create a new empty scene and name it "LevelSelectMenu". This time, select Create | UI | Canvas. This will create the canvas object that is the root of our GUI. In an image editor, create a small 10 x 10 pixel image, fill it with black, and save it as "Background.jpg". Drag it into the project, setting its image settings, as before, to Sprite (2D and UI). Now, from the Create | UI menu, create an Image. Drag "Background.jpg" from the Project pane into the Image component's Source Image field. Set the Width and the Height to 2000 pixels; this should be enough to cover the entire canvas. From the same UI menu, create a Text component. In the Inspector, set the Width and the Height of that Text to 300 x 80 pixels. Under the Text property, enter "Select Level", and then set the Font Size to 50 and the Color to white. Using the transform controls, drag the text to the upper middle area of the screen. If you can't see the Text, make sure it is positioned below the Image under Canvas in the Hierarchy view. Order matters; the topmost child of Canvas will be rendered first, then the second, and so on. So, make sure your background image isn't being drawn over your Text. Next, from the UI menu, create a Button. Make this button 400 pixels wide and 180 pixels high. Drag "CubeWorld.jpg" from the Project pane into the Image component's Source Image field. This will make it the button image. Edit the button Text to say "Cube World" and set the Font Size to 30. Change the Font Color to white. Now, in the Inspector view, reposition the text to the bottom left corner of the button using the transform controls. Update the Button's Color values; these values tint the button image in certain states. Normal is the default, Highlighted is for when the mouse is over the button, Pressed is for when the button is pressed, and Disabled is for when the button is not interactable (the Interactable checkbox is unticked). Now duplicate the first button, but this time, use the "SphereWorld.jpg" image as the Source Image, and set the text to "Sphere World".
Using the transform controls, position the two buttons next to each other under the “Select Level” text on the canvas. It should look like this:

If we run the app now, we’ll see this screen and be able to click on each level button, but nothing will happen. To actually load a level, we need to first create a new script. Right-click in the Project view and select Create | C# Script. Name this script “LevelSelect”. Create a new GameObject in the scene, rename it “LevelSelectManager”, and drag the LevelSelect script onto that GameObject in the Hierarchy. Now, open up the script in an IDE and change the code to be as follows:

using UnityEngine;
using UnityEngine.SceneManagement;

public class LevelSelect : MonoBehaviour
{
    // Called by a button's On Click () event; loads the scene with the given name
    public void LoadLevel(string levelName)
    {
        SceneManager.LoadScene(levelName);
    }
}

What this script does is define a script, LevelSelect, which exposes a single function, LoadLevel(). LoadLevel() takes a string (the level name) and tells Unity to load that level (a Unity scene) by calling SceneManager.LoadScene(). However, we still need to actually call this function when the buttons are pressed.

Back in Unity, go back to the CubeWorld button in the Hierarchy. Under the Button Script in the Inspector, there is an entry for “On Click ()” with a plus sign under it:

Click the plus sign to add the event that will be called when the button is clicked. Once the event is added, we need to fill out the details that tell it which function to call on which scene GameObject. Find where it says “None (Object)” under “On Click ()”. Drag the “LevelSelectManager” GameObject from the Hierarchy view into that field. Then click the “No Function” dropdown, which will display a list of component classes matching the components on our “LevelSelectManager” GameObject. Choose “LevelSelect” (because that’s the script class our function is defined in) and then “LoadLevel (string)” to choose the function we wrote in C# previously. Now we just have to pass the level name string we want to load to that function. To do that, write “CubeWorld” (the name of the scene/level we want to load) in the empty text field. Once you’re done, the “On Click ()” event should look like this:

Now, repeat the process for the SphereWorld button as above, but instead of entering “CubeWorld” as the string to pass to the LoadLevel function, enter “SphereWorld”.

Almost done! Finally, save the “LevelSelectMenu” scene, and then click File | Build Settings. Make sure that all three scenes are loaded into the “Scenes In Build” list. If they aren’t, drag the scenes into the list from the Project pane. Make sure that the “LevelSelectMenu” scene is first so that when the app is run, it is the scene that will be loaded up first:

It’s time to build and run your program! You should be greeted by the level select menu, and, depending on which level you select, it’ll load the appropriate game level, either CubeWorld or SphereWorld. Now you can customize it further, adding more levels, making the level select screen look nicer with better graphical assets and effects, and, of course, adding actual gameplay to your levels. Have fun!

About the author

Gareth Fouche is a game developer. He can be found on Github @GarethNN.

Storing Records and Interface Customization

Packt
16 Dec 2016
18 min read
In this article by Paul Goody, the author of the book Salesforce CRM - The Definitive Admin Handbook - Fourth Edition, we will describe in detail the Salesforce CRM record storage features and the user interface elements that can be customized, such as objects, fields, and page layouts. In addition, we will see an overview of the relationship that exists between the profile and these customizable features that the profile controls. This article looks at the methods to configure and tailor the application to suit the way your company information can be best represented within the Salesforce CRM application. We will look at the mechanisms to store data in Salesforce and the concepts of objects and fields. The features that allow this data to be grouped, arranged, and presented within the application are then considered by looking at apps, tabs, page layouts, and record types. Next, we will take a look at some of the features that allow views of data to be presented and customized by looking in detail at related types, related lists, and list views. Finally, you will be presented with a number of questions about the key features of Salesforce CRM administration in the area of Standard and Custom Objects, which are covered in this article. We will cover the following topics in this article:

Objects
Fields
Object relationships
Apps
Tabs
Renaming labels for standard tabs, standard objects, and standard fields
Creating custom objects
Object limits
Creating custom object relationships
Creating custom fields
Dependent picklists
Building relationship fields
Lookup relationship options
Master-detail relationship options
Lookup filters
Building formulas
Basic formula
Advanced formula
Building formulas: best practices
Building formula text and compiled character size limits
Custom field governance
Page layouts
Feed-based page layouts
Record types
Related lists

(For more resources related to this topic, see here.)

The relationship between a profile and the features that it controls

The following diagram describes the relationship that exists between a profile and the features that it controls: The profile is used to:

Control access to the type of license specified for the user and any login hours or IP address restrictions that are set.
Control access to objects and records using the role and sharing model. If the appropriate object-level permission is not set on the user's profile, then the user will be unable to gain access to the records of that object type in the application.

In this article, we will look at the configurable elements that are set in conjunction with a profile. These are used to control the structure and the user interface for the Salesforce CRM application.

Objects

Objects are a key element in Salesforce CRM as they provide a structure to store data and are incorporated in the interface, allowing users to interact with the data. Similar in nature to a database table, objects have the following properties:

Fields, which are similar in concept to a database column
Records, which are similar in concept to a database row
Relationships with other objects
Optional tabs, which are user-interface components to display the object data

Standard objects

Salesforce provides standard objects in the application when you sign up; these include Account, Contact, Opportunity, and so on. These are the tables that contain the data records in any standard tab, such as Accounts, Contacts, and Opportunities. In addition to the standard objects, you can create custom objects and tabs.
Custom objects

Custom objects are the tables you create to store your data. You can create a custom object to store data specific to your organization. Once you have the custom objects and have created records for these objects, you can also create reports and dashboards based on the record data in your custom objects.

Fields

Fields in Salesforce are similar in concept to a database column: they store the data for the object records. An object record is analogous to a row in a database table.

Standard fields

Standard fields are predefined fields that are included as standard within the Salesforce CRM application. Standard fields cannot be deleted, but non-required standard fields can be removed from page layouts whenever necessary. With standard fields, you can customize the visual elements that are associated with the field, such as field labels and field-level help, as well as certain data definitions, such as picklist values, the formatting of auto-number fields (which are used as unique identifiers for the records), and the setting of field history tracking. Some aspects, however, such as the field name, cannot be customized, and some standard fields, such as Opportunity Probability, do not allow the changing of the field label.

Custom fields

Custom fields are unique to your business needs and can not only be added and amended, but also deleted. Creating custom fields allows you to store the information that is necessary for your organization. Both standard and custom fields can be customized to include custom help text that helps users understand how to use the field, as shown in the following screenshot:

Object relationships

Object relationships can be set on both standard and custom objects and are used to define how records in one object relate to records in another object. Accounts, for example, can have a one-to-many relationship with opportunities; these relationships are presented in the application as related lists.

Apps

An app in Salesforce is a container for all the objects, tabs, processes, and services associated with a business function. There are standard and custom apps that are accessed using the App menu located at the top-right corner of the Salesforce page, as shown in the following screenshot: When users select an app from the App menu, their screen changes to present the objects associated with that app. For example, when switching from an app that contains the Campaign tab to one that does not, the Campaign tab no longer appears. This feature applies to both standard and custom apps.

Standard apps

Salesforce provides standard apps such as Call Center, Community, Content, Marketing, Sales, Salesforce Chatter, and Site.com.

Custom apps

A custom app can optionally include a custom logo. Both standard and custom apps consist of a name, a description, and an ordered list of tabs.

Subtab apps

A subtab app is used to specify the tabs that appear on the Chatter profile page. Subtab apps can include both default and custom tabs that you can set.

Tabs

A tab is a user-interface element that, when clicked, displays the record data on a page specific to that object.

Hiding and showing tabs

To customize your personal tab settings, navigate to Setup | My Personal Settings | Change My Display | Customize My Tabs. Now, choose the tabs that will display in each of your apps by moving the tab name between the Available Tabs and the Selected Tabs sections, and click on Save.
The following screenshot shows the section of tabs for the Sales app: To customize the tab settings of your users, navigate to Setup | Manage Users | Profiles. Now, select a profile and click on Edit. Scroll down to the Tab Settings section of the page, as shown in the following screenshot:

Standard tabs

Salesforce provides tabs for each of the standard objects that are provided in the application when you sign up. For example, there are standard tabs for Accounts, Contacts, Opportunities, and so on: Visibility of a tab depends on the Tab Display setting for the app.

Custom tabs

You can create three different types of custom tabs: Custom Object Tabs, Web Tabs, and Visualforce Tabs. Custom Object Tabs allow you to create, read, update, and delete the data records in your custom objects. Web Tabs display any web URL in a tab within your Salesforce application. Visualforce Tabs display custom user-interface pages created using Visualforce.

Creating custom tabs: The text displayed on the custom tab is set using the Plural Label of the custom object, which is entered when creating the custom object. If the tab text needs to be changed, this can be done by changing the Plural Label stored in the custom object. Salesforce.com recommends selecting the Append tab to a user’s existing personal customization checkbox. This benefits your users, as they will automatically be presented with the new tab and can immediately access the corresponding functionality without having to first customize their personal settings themselves. It is recommended that you hide new tabs by setting appropriate permissions, so that the users in your organization cannot see any of your changes until you are ready to make them available. You can create up to 25 custom tabs in the Enterprise Edition, and as many as you require in the Unlimited and Performance Editions. To create custom tabs for a custom object, navigate to Setup | Create | Tabs. Now, select the appropriate tab type and/or object from the available selections, as shown in the following screenshot:

Renaming labels for standard tabs, standard objects, and standard fields

Labels generally reflect the text that is displayed and presented to your users in the user interface and in reports within the Salesforce application. You can change the display labels of standard tabs, objects, fields, and other related user-interface labels so they can better reflect your company's terminology and business requirements. For example, the Accounts tab and object can be changed to Clients; similarly, Opportunities to Deals, and Leads to Prospects. Once changed, the new label is displayed on all user pages. The Setup Pages and Setup Menu sections cannot be modified and do not include any renamed labels; here, the standard tab, object, and field references continue to use the default, original labels. Also, the standard report names and views continue to use the default labels and are not renamed. To change standard tab, object, and field labels, navigate to Setup | Customize | Tab Names and Labels | Rename Tabs and Labels. Now, select a language, and then click on Edit to modify the tab names and standard field labels, as shown in the following screenshot: Click on Edit to select the tab that you wish to rename. Although the screen indicates that this is a change for the tab's name, this selection will also allow you to change the labels for the object and fields, in addition to the tab name. To change field labels, click through to step 2.
Enter the new field labels. Here, we will rename the Accounts tab to Clients. Enter the Singular and Plural names and then click on Next, as shown in the following screenshot: Only the following standard tabs and objects can be renamed: Accounts, Activities, Articles, Assets, Campaigns, Cases, Contacts, Contracts, Documents, Events, Ideas, Leads, Libraries, Opportunities, Opportunity Products, Partners, Price Books, Products, Quote Line Items, Quotes, Solutions, and Tasks. Tabs such as Home, Chatter, Forecasts, Reports, and Dashboards cannot be renamed. The following screenshot shows the standard fields available: Salesforce looks for occurrences of the Account label and displays an auto-populated screen showing where the Account text will be replaced with Client. This auto-population of text is carried out for the standard tab, the standard object, and the standard fields. Review the replaced text, amend as necessary, and then click on Save, as shown in the following screenshot: After renaming, the new labels are automatically displayed on the tab, in reports, in dashboards, and so on. Some standard fields, such as Created By and Last Modified, are prevented from being renamed because they are audit fields that are used to track system information. You will, however, need to carry out the following additional steps to ensure consistent renaming throughout the system, as these may need manual updates:

Check all list view names, as they do not automatically update and will continue to show the original object name until you change them manually.
Review standard report names and descriptions for any object that you have renamed.
Check the titles and descriptions of any e-mail templates that contain the original object or field name, and update them as necessary.
Review any other items that you have customized with the standard object or field name. For example, custom fields, page layouts, and record types may include the original tab or field name text that is no longer relevant.

If you have renamed tabs, objects, or fields, you can also replace the Salesforce online help with a different URL. Your users can view this replaced URL whenever they click on any context-sensitive help link on an end-user page or from within their personal setup options.

Creating custom objects

Custom objects are database tables that allow you to store data specific to your organization on salesforce.com. You can use custom objects to extend Salesforce functionality or to build new application functionality. You can create up to 200 custom objects in the Enterprise Edition and 2000 in the Unlimited Edition. Once you have created a custom object, you can create a custom tab, custom related lists, reports, and dashboards for users to interact with the custom object data. To create a custom object, navigate to Setup | Create | Objects. Now click on New Custom Object, or click on Edit to modify an existing custom object. The following screenshot shows the resulting screen: On the Custom Object Definition Edit page, you can enter the following:

Label: This is the visible name that is displayed for the object within the Salesforce CRM user interface and shown on pages, views, and reports, for example.
Plural Label: This is the plural name specified for the object, which is used within the application in places such as reports and on tabs (if you create a tab for the object).
Gender (language dependent): This field appears if your organization-wide default language expects gender.
This is used for organizations where the default language settings are, for example, Spanish, French, Italian, and German, among many others. Your personal language preference setting does not affect whether the field appears. For example, if your organization's default language is English but your personal language is French, you will not be prompted for gender when creating a custom object.

Starts with a vowel sound: Use of this setting depends on your organization's default language and is a linguistic check that allows you to specify whether your label is to be preceded by "an" instead of "a", for example, resulting in the object being referred to as an Order instead of a Order.
Object Name: This is a unique name used to refer to the object. Here, the Object Name field must be unique and can only contain underscores and alphanumeric characters. It must also begin with a letter, not contain spaces or two consecutive underscores, and not end with an underscore.
Description: This is an optional description of the object. A meaningful description will help you explain the purpose of your custom objects when you are viewing them in a list.
Context-Sensitive Help Setting: This defines what information is displayed when your users click on the Help for this Page context-sensitive help link from the custom object record home (overview), edit, and detail pages, as well as list views and related lists. The Help & Training link at the top of any page is not affected by this setting; it always opens the Salesforce Help & Training window.
Record Name: This is the name that is used in areas such as page layouts, search results, key lists, and related lists, as shown next.
Data Type: This sets the type of field for the record name. Here, the data type can be either text or auto-number. If the data type is set to Text, then when a record is created, users must enter a text value, which does not need to be unique. If the data type is set to Auto Number, it becomes a read-only field, whereby new records are automatically assigned a unique number, as shown in the following screenshot:
Display Format: This option, as shown in the preceding example, only appears when the Data Type field is set to Auto Number. It allows you to specify the structure and appearance of the Auto Number field. For example, {YYYY}{MM}-{000} is a display format that produces a four-digit year and a two-digit month as a prefix to a number with leading zeros padded to three digits. Example data output would include: 201203-001, 201203-066, 201203-999, 201203-1234. It is worth noting that although you can specify the number to be three digits, if the number of records created exceeds 999, the record will still be saved, but the automatically incremented number becomes 1000, 1001, and so on.
Starting Number: As described, Auto Number fields in Salesforce CRM are automatically incremented for each new record. Here, you must enter the starting number for the incremental count, which does not have to start from one.
Allow Reports: This setting is required if you want to include the record data from the custom object in any report or dashboard analytics. When a custom object has a relationship field associating it with a standard object, a new Report Type may appear in the standard report category. The new report type allows the user to create reports that relate the standard object to the custom object by selecting the standard object for the report type category instead of the custom object.
Such relationships can be either lookup or master-detail relationships. A new Report Type is created in the standard report category if the custom object is either the lookup object on the standard object or has a master-detail relationship with the standard object. Lookup relationships create a relationship between two records so that you can associate them with each other. A master-detail relationship, described in more detail later in this section, is a relationship between records where the master record controls certain behaviors of the detail record, such as record deletion and security.

Allow Activities: This allows users to include tasks and events related to the custom object records, which appear as a related list on the custom object page.
Track Field History: This enables the tracking of data-field changes on the custom object records, such as who changed the value of a field and when it was changed. Field history tracking also stores the value of the field before and after the edit. This feature is useful for auditing and data-quality measurement, and is also available within the reporting tools. The field history data is retained for up to 18 months, and you can set field history tracking for a maximum of 20 fields for the Enterprise, Unlimited, and Performance Editions.
Allow in Chatter Groups: This setting allows your users to add records of this custom object type to Chatter groups. When enabled, records of this object type that are created using the group publisher are associated with the group and also appear in the group record list. When disabled, records of this object type that are created using the group publisher are not associated with the group.
Deployment Status: This indicates whether the custom object is now visible and available for use by other users. This is useful as you can easily set the status to In Development until you are happy for users to start working with the new object.
Add Notes & Attachments: This setting allows your users to record notes and attach files to the custom object records. When this is specified, a related list with the New Note and Attach File buttons automatically appears on the custom object record page, where your users can enter notes and attach documents. The Add Notes & Attachments option is only available when you create a new object.
Launch the New Custom Tab Wizard: This starts the custom tab wizard after you save the custom object. The New Custom Tab Wizard option is only available when you create a new object. If you do not select Launch the New Custom Tab Wizard, you will not be able to create a tab in this step; however, you can create the tab later, as described in the Custom tabs section earlier in this article. When creating a custom object, a custom tab is not automatically created.

Summary

This article describes in detail the Salesforce CRM record storage features and the user interface that can be customized, the mechanisms to store data in Salesforce CRM, and the relationship that exists between the profile and the customizable features that the profile controls.
Resources for Article: Further resources on this subject: Introducing Dynamics CRM [article] Understanding CRM Extendibility Architecture [article] Getting Dynamics CRM 2015 Data into Power BI [article]

R and its Diverse Possibilities

Packt
16 Dec 2016
11 min read
In this article by Jen Stirrup, the author of the book Advanced Analytics with R and Tableau, we will cover, with examples, the core essentials of R programming, such as variables and data structures in R, including matrices, factors, vectors, and data frames. We will also focus on control mechanisms in R (relational operators, logical operators, conditional statements, loops, functions, and apply) and how to execute these commands in R, to get to grips with them before proceeding to articles that rely heavily on these concepts for scripting complex analytical operations. (For more resources related to this topic, see here.)

Core essentials of R programming

One of the reasons for R's success is its use of variables. Variables are used in all aspects of R programming. For example, variables can hold data, strings to access a database, whole models, queries, and test results. Variables are a key part of the modeling process, and their selection has a fundamental impact on the usefulness of the models. Therefore, variables are an important place to start, since they are at the heart of R programming.

Variables

In the following section, we will deal with variables: how to create them and how to work with them.

Creating variables

It is very simple to create variables in R and to save values in them. To create a variable, you simply need to give the variable a name and assign a value to it. In many other languages, such as SQL, it's necessary to specify the type of value that the variable will hold. So, for example, if the variable is designed to hold an integer or a string, then this is specified at the point at which the variable is created. Unlike other programming languages, such as SQL, R does not require that you specify the type of the variable before it is created. Instead, R works out the type for itself, by looking at the data that is assigned to the variable. In R, we assign variables using an assignment operator, which is a less-than sign (<) followed by a hyphen (-). Put together, the assignment operator looks like this: <-

Working with variables

It is important to understand what is contained in the variables. It is easy to check the contents of the variables using the ls command. If you need more details of the variables, then the ls.str command will provide you with more information. If you need to remove variables, then you can use the rm function.

Data structures in R

The power of R resides in its ability to analyze data, and this ability is largely derived from its powerful data types. Fundamentally, R is a vectorized programming language. Data structures in R are constructed from vectors, which are foundational. This means that R's operations are optimized to work with vectors.

Vector

The vector is a core component of R. It is a fundamental data type. Essentially, a vector is a data structure that contains an array where all of the values are the same type. For example, they could all be strings or numbers. However, note that vectors cannot contain mixed data types. R uses the c() function to take a list of items and turn them into a vector.

Lists

R contains two types of lists: a basic list and a named list. A basic list is created using the list() operator. In a named list, every item in the list has a name as well as a value. Named lists are a good mapping structure to help map data between R and Tableau. In R, list elements are referenced using the $ operator. Note, however, that the list label operators are case sensitive.
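The following short sketch pulls these pieces together; the variable names are purely illustrative:

# Assigning variables: R infers the type from the value
x <- 42
greeting <- "hello"

# A vector: every element must be the same type
prices <- c(10.5, 11.2, 9.8)

# A basic list and a named list; lists can mix types
basic <- list(42, "hello", TRUE)
named <- list(title = "R Book", price = 9.99)
named$price          # access a named list element with $ (case sensitive)

# Inspecting and removing variables
ls()                 # list the variables in the workspace
ls.str()             # more detail on each variable
rm(x)                # remove a variable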
Matrices

Matrices are two-dimensional structures that have rows and columns. The matrices are lists of rows. It's important to note that every cell in a matrix has the same type.

Factors

A factor is a list of all possible values of a variable in a string format. It is a special string type, which is chosen from a specified set of values known as levels. They are sometimes known as categorical variables. In dimensional modeling terminology, a factor is equivalent to a dimension, and the levels represent different attributes of the dimension. Note that factors are variables that can only contain a limited number of different values.

Data frames

The data frame is the main data structure in R. It's possible to envisage the data frame as a table of data, with rows and columns. Unlike the list structure, the data frame can contain different types of data. In R, we use the data.frame() command in order to create a data frame. The data frame is extremely flexible for working with structured data, and it can ingest data from many different data types. There are two main ways to ingest data into data frames: using one of the many data connectors, which connect to data sources such as databases, or using a command such as read.table(), which reads in tabular data.

Data frame structure

Here is an example of a populated data frame. There are three columns and three rows. The top of the data frame is the header. Each horizontal line afterwards holds a data row. This starts with the name of the row, and is then followed by the data itself. Each data member of a row is called a cell. Here is an example data frame, populated with data:

df = data.frame(
  Year = c(2013, 2013, 2013),
  Country = c("Arab World", "Caribbean States", "Central Europe"),
  LifeExpectancy = c(71, 72, 76))

As always, we should read out at least some of the data frame so we can double-check that it was set correctly. The data frame was assigned to the df variable, so we can read out the contents by simply typing the variable name at the command prompt. To obtain the data held in a cell, we enter the row and column coordinates of the cell and surround them with square brackets []. In this example, if we wanted to obtain the value of the second cell in the second row, then we would use the following:

df[2, "Country"]

We can also conduct summary statistics on our data frame. For example, if we use the following command:

summary(df)

Then we obtain the summary statistics of the data. The example output is as follows: You'll notice that the summary command has summarized different values for each of the columns. It has identified Year as an integer, and produced the min, quartiles, mean, and max for Year. The Country column has been listed, simply because it does not contain any numeric values. Life expectancy is summarized correctly. We can change the Year column to a factor, using the following command:

df$Year <- as.factor(df$Year)

Then, we can run the summary command again:

summary(df)

On this occasion, the data frame now returns the correct results that we expect: As we proceed throughout this book, we will be building on more useful features that will help us to analyze data using data structures, and visualize the data in interesting ways using R.
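Before moving on, here is a brief sketch of the file-based ingestion route mentioned above; the filename used is a hypothetical example:

# Read a tab-separated file with a header row into a data frame
# ("life_expectancy.txt" is a hypothetical example file)
life <- read.table("life_expectancy.txt", header = TRUE, sep = "\t")
str(life)      # inspect the structure: column names and types
nrow(life)     # number of rows ingested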
Control structures in R

R has the appearance of a procedural programming language. However, it is built on another language, known as S. S leans towards functional programming. It also has some object-oriented characteristics. This means that there are many complexities in the way that R works. In this section, we will look at some of the fundamental building blocks that make up key control structures in R, and then we will move on to looping and vectorized operations.

Logical operators

Logical operators are binary operators that allow the comparison of values:

Operator      Description
<             less than
<=            less than or equal to
>             greater than
>=            greater than or equal to
==            exactly equal to
!=            not equal to
!x            not x
x | y         x OR y
x & y         x AND y
isTRUE(x)     test if x is TRUE

For loops and vectorization in R

Specifically, we will look at the constructs involved in loops. Note, however, that it is more efficient to use vectorized operations rather than loops, because R is vector-based. We investigate loops here because they are a good first step in understanding how R works, and then we can optimize this understanding by focusing on vectorized alternatives that are more efficient. More information about control flow can be obtained by executing the following command at the command line:

?Control

The control flow commands make decisions between alternative actions. The main constructs are for, while, and repeat.

For loops

Let's look at a for loop in more detail. For this exercise, we will use the Fisher iris dataset, which is installed along with R by default. We are going to produce summary statistics for each species of iris in the dataset. You can see some of the iris data by typing the following command at the command prompt:

head(iris)

We can divide the iris dataset so that the data is split by species. To do this, we use the split command, and we assign the result to a variable called IrisBySpecies:

IrisBySpecies <- split(iris, iris$Species)

Now, we can use a for loop to process the data in order to summarize it by species. First, we will set up a variable called output and set it to a list type. For each species held in the IrisBySpecies variable, we calculate the minimum, maximum, mean, and total number of cases. The results are then combined into a data frame called output.df, which is printed out to the screen:

output <- list()
for (n in names(IrisBySpecies)) {
  ListData <- IrisBySpecies[[n]]
  output[[n]] <- data.frame(species = n,
                            MinPetalLength = min(ListData$Petal.Length),
                            MaxPetalLength = max(ListData$Petal.Length),
                            MeanPetalLength = mean(ListData$Petal.Length),
                            NumberofSamples = nrow(ListData))
}
output.df <- do.call(rbind, output)
print(output.df)

The output is as follows: We used a for loop here, but loops can be expensive in terms of processing. We can achieve the same end by using a vectorized function called tapply. tapply processes data in groups. tapply takes three parameters: the vector of data, the factor that defines the groups, and a function. It works by extracting each group and then applying the function to each of the groups. It then returns a vector with the results. We can see an example of tapply here, using the same dataset:

output <- data.frame(
  MinPetalLength = tapply(iris$Petal.Length, iris$Species, min),
  MaxPetalLength = tapply(iris$Petal.Length, iris$Species, max),
  MeanPetalLength = tapply(iris$Petal.Length, iris$Species, mean),
  NumberofSamples = tapply(iris$Petal.Length, iris$Species, length))
print(output)

This time, we get the same output as previously. The only difference is that by using a vectorized function, we have concise code that runs efficiently. To summarize, R is extremely flexible, and it's possible to achieve the same objective in a number of different ways.
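Before moving on, here is a small sketch that ties the logical operators from the table above to R's vectorized style; the variable names are illustrative:

# Comparisons apply element-wise across a whole vector
petal <- iris$Petal.Length
long <- petal > 4 & petal <= 5       # logical vector: TRUE where both conditions hold
sum(long)                            # count the matching observations
head(iris[long, ])                   # filter rows that satisfy the condition
isTRUE(all(petal > 0))               # TRUE: every petal length is positive

Because the comparisons are vectorized, no explicit loop is needed to filter the rows.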
As we move forward through this book, we will make recommendations about the optimal method to select, and the reasons for the recommendation.

Functions

R has many functions that are included as part of the installation. In the first instance, let's see how we can work smart by finding out what functions are available by default. In our last example, we used the split() function. To find out more about the split function, we can simply use the following command:

?split

Or we can use:

help(split)

It's possible to get an overview of the arguments required for a function. To do this, simply use the args command:

args(split)

Fortunately, it's also possible to see examples of each function by using the following command:

example(split)

If you need more information than the documented help file about each function, you can use the following command. It will search through all the documentation for instances of the keyword:

help.search("split")

If you want to search the R project site from within RStudio, you can use the RSiteSearch command. For example:

RSiteSearch("split")

Summary

In this article, we have looked at various essential structures in working with R. We have looked at the data structures that are fundamental to using R optimally. We have also taken the view that structures such as for loops can often be done better as vectorized operations. Finally, we have looked at the ways in which R can be used to create functions in order to simplify code.

Resources for Article: Further resources on this subject: Getting Started with Tableau Public [article] Creating your first heat map in R [article] Data Modelling Challenges [article]


Gathering and analyzing stock market data with R, Part 2

Erik Kappelman
15 Dec 2016
8 min read
Welcome to the second installment of this series. The previous post covered collecting real-time stock market data using R. This second part looks at a few ways to analyze historical stock market data using R. If you are just interested in learning how to analyze historical data, the first blog isn't necessary. The code accompanying these blogs is located here. To begin, we must first get some data. The lines of code below load the 'quantmod' library, a very useful R library when it comes to financial analysis, and then use quantmod to gather data on the list of stock symbols:

library(quantmod)
syms <- read.table("NYSE.txt", header = TRUE, sep = "\t")
smb <- grep("[A-Z]{4}", syms$Symbol, perl = FALSE, value = TRUE)
getSymbols(smb)

I find the getSymbols() function somewhat problematic for gathering data on multiple companies. This is because the function creates a separate dataframe for each company in the package's 'xts' format. I think this would be more helpful if you were planning to use the quantmod tools for analysis. I enjoy using other types of tools, so the data needs to be changed somewhat before I can analyze it:

mat <- c()
stocks <- c()
stockList <- list()
names <- c()
for (i in 1:length(smb)) {
  temp <- get(smb[i])
  names <- c(names, smb[i])
  stockList[[i]] <- as.numeric(getPrice(temp))
  len <- length(attributes(temp)$index)
  if (len < 1001) next
  stocks <- c(stocks, smb[i])
  temp2 <- temp[(len - 1000):len]
  vex <- as.numeric(getPrice(temp2))
  mat <- rbind(mat, vex)
}

The code above loops through the dataframes that were created by the getSymbols() function. Using the get() function from the 'base' package, each symbol string is used to grab the symbol's dataframe. The loop then does one or two more things to each stock's dataframe. For all of the stocks, it records the stock's symbol in a vector and adds a vector of prices to the growing list of stock data. If the stock data goes back at least one thousand trading days, then the last one thousand days of trading are added to a matrix. The reason for this distinction is that we will be looking at two methods of analysis. One requires all of the series to be the same length, and the other is length-agnostic. Series that are too short will not be analyzed using the first method. Check out the following script:

names(stockList) <- names
stock.mat <- as.matrix(mat)
row.names(stock.mat) <- stocks
colnames(stock.mat) <- as.character(index(temp2))
save(stock.mat, stockList, file = "StockData.rda")
rm(list = ls())

The above script names the data properly and saves the data to an R data file. The final line of code cleans the workspace, because the getSymbols() function leaves quite a mess. The data is now in the correct format for us to begin our analysis. It is worth pointing out that what I am about to show won't get you an A in most statistics or economics classes. I say this because I am going to take a very practical approach, with little regard to the proper assumptions. Although these assumptions are important, when the need is an accurate forecast, it is easier to get away with models that are not entirely theoretically sound. This is because we are not trying to make arguments of causality or association; we are trying to guess the direction of the market. In this first example of analysis, I put forth a clustering-based Vector Autoregression (VAR) method of my own design. In order to do this, we must load the correct packages and the data we just created:

library(mclust)
library(vars)
load("StockData.rda")
The first thing to do is identify the clusters that exist within the stock market data. In this case, we use a model-based clustering method:

cl <- Mclust(stock.mat, G = 1:9)
stock.mat <- cbind(stock.mat, cl$classification)

This method assumes that the data is the result of picks from a set of random variable distributions. This allows the clusters to be based on the covariance of companies' stock prices instead of just grouping together companies with similar nominal prices. The Mclust() function fits a model to the data that minimizes the Bayesian Information Criterion (BIC). You will likely have to restrict the number of clusters, as one complaint about model-based clustering is a 'more clusters are always better' result. The data is separated into clusters to make using a VAR technique more computationally realistic. One of the nice things about VAR is how few assumptions must be met in order to include the time series in an analysis. Also, VAR regresses several time series against one another and themselves at the same time, which may capture more of the covariance needed to produce reliable forecasts. We are looking at over 1000 time series, and this is too many to use VAR effectively, so the clustering is used to group the time series together to produce smaller VARs:

cluster <- stock.mat[stock.mat[, 1002] == 6, 1:1001]
ts <- ts(t(cluster))
fit <- VAR(ts[1:(1001 - 10), ], p = 10)
preds <- predict(fit, n.ahead = 10)
forecast <- preds$fcst$TEVA
plot.ts(ts[950:1001, 8], ylim = c(36, 54))
lines(y = forecast[, 1], x = (50 - 9):50, col = "blue")
lines(y = forecast[, 2], x = (50 - 9):50, col = "red", lty = 2)
lines(y = forecast[, 3], x = (50 - 9):50, col = "red", lty = 2)

The code above takes the time series that belong to the '6' cluster and runs a VAR that looks back ten steps. We cut off the last ten days of data and use the VAR to predict these last ten days. The script then plots the predicted ten days against the actual ten days. This allows us to see if the predictions are functioning properly. The resulting plot shows that the predictions are not perfect but will probably work well enough:

for (i in 1:8) {
  assign(paste0("cluster.", i), stock.mat[stock.mat[, 1002] == i, 1:1001])
  assign(paste0("ts.", i), ts(t(get(paste0("cluster.", i)))))
  temp <- get(paste0("ts.", i))
  assign(paste0("fit.", i), VAR(temp, p = 10))
  assign(paste0("preds.", i), predict(get(paste0("fit.", i)), n.ahead = 10))
}
stock.mat <- cbind(stock.mat, 0)
for (j in 1:8) {
  pred.vec <- c()
  temp <- get(paste0("preds.", j))
  for (i in temp$fcst) {
    cast <- i[10, ]                   # the ten-day-ahead forecast row for this series
    pred.vec <- c(pred.vec, cast[1])  # keep the point forecast
  }
  stock.mat[stock.mat[, 1002] == j, 1003] <- pred.vec
}

The loops above perform a VAR on each of the 8 clusters with more than one member. After these VARs are performed, a ten-day forecast is carried out. The value of each stock at the end of the ten-day forecast is then appended onto the end of the stock data matrix:

stock.mat <- stock.mat[stock.mat[, 1002] != 9, ]
# append the forecasted percentage change as a new column (column 1004)
stock.mat <- cbind(stock.mat, (stock.mat[, 1003] - stock.mat[, 1001]) / stock.mat[, 1001] * 100)
stock.mat <- stock.mat[order(-stock.mat[, 1004]), ]
stock.mat[1:10, 1004]
rm(list = ls())

The final lines of code calculate the percentage change in each stock forecasted after 10 days and then display the top 10 stocks in terms of forecasted percentage change.
The workspace is then cleared:

load("StockData.rda")
library(forecast)
forecasts <- c()
names <- c()
for (i in 1:length(stockList)) {
  mod <- auto.arima(stockList[[i]])
  cast <- forecast(mod)
  cast <- cast$mean[10]
  temp <- c(as.numeric(stockList[[i]][length(stockList[[i]])]), as.numeric(cast))
  forecasts <- rbind(forecasts, temp)
  names <- c(names, names(stockList[i]))
}
forecasts <- matrix(forecasts, ncol = 2)
forecasts <- cbind(forecasts, (forecasts[, 2] - forecasts[, 1]) / forecasts[, 1] * 100)
colnames(forecasts) <- c("Price", "Forecast", "% Change")
row.names(forecasts) <- names
forecasts <- forecasts[order(-forecasts[, 3]), ]
rm(list = ls())

The final bit of code is simpler. Using the 'forecast' package's auto.arima() function, we fit an ARIMA model to each stock in our stockList. The auto.arima() function is a must-have for forecasters using R. This function fits an ARIMA model to your data with the best value of some measure of statistical accuracy. The default is the corrected Akaike Information Criterion (AICc), which will work fine for our purposes. Once the forecasts are complete, this script also prints the top 10 stocks in terms of percentage change over a ten-day forecast. These blogs have discussed how to gather and analyze stock market data using R. I hope they have been informative and will help you with data analysis in the future.

About the author

Erik Kappelman is a transportation modeler for the Montana Department of Transportation. He is also the CEO of Duplovici, a technology consulting and web design company.