Finishing the Attack: Report and Withdraw

Packt
21 Dec 2016
11 min read
In this article by Michael McPhee and Jason Beltrame, the authors of the book Penetration Testing with Raspberry Pi - Second Edition, we will look at the final stage of the Penetration Testing Kill Chain: Reporting and Withdrawing. Some may argue the validity and importance of this step, since much of the hard-hitting effort and impact has already been delivered by this point. But without properly cleaning up and covering our tracks, we can leave little breadcrumbs that can notify others of where we have been and what we have done. This article covers the following topics: covering our tracks and masking our network footprint.

Covering our tracks

One of the key tasks in which penetration testers as well as criminals tend to fail is cleaning up after they breach a system. Forensic evidence can be anything from the digital network footprint (the IP address, type of network traffic seen on the wire, and so on) to the logs on a compromised endpoint. There is also evidence left behind by the tools used, such as a Raspberry Pi employed to do something malicious. An example is running more ~/.bash_history on a Raspberry Pi to see the entire history of the commands that were used. The good news for Raspberry Pi hackers is that they don't have to worry about storage elements such as ROM, since the only storage to consider is the microSD card. This means attackers just need to re-flash the microSD card to erase evidence that the Raspberry Pi was used. Before doing that, let's work our way through the clean-up process, starting from the compromised system and ending with reimaging our Raspberry Pi.

Wiping logs

The first step we should perform to cover our tracks is to clean any event logs from the compromised system that we accessed. For Windows systems, we can use a tool within Metasploit called clearev that does this for us in an automated fashion. Clearev is designed to access a Windows system and wipe the logs. An overzealous administrator might notice the changes when we clean the logs; however, most administrators will never notice. Also, since the logs are wiped, the worst that could happen is that an administrator might identify that their systems have been breached, but the logs containing our access information would have been removed. Clearev comes with the Metasploit arsenal. To use clearev once we have breached a Windows system with a Meterpreter session, type meterpreter > clearev. There is no further configuration, which means clearev simply wipes the logs upon execution. The following screenshot shows what that will look like. Here is an example of the logs before they are wiped on a Windows system.

Another way to wipe logs from a compromised Windows system is by installing a Windows log cleaning program. There are many options available to download, such as ClearLogs, found at http://ntsecurity.nu/toolbox/clearlogs/. Programs such as these are simple to use: we can just install and run one on a target once we are finished with our penetration test. We can also delete the logs manually using the following command:

del %WINDIR%\*.log /a /s /q /f

This command deletes all logs regardless of their attributes (/a), including those in subfolders (/s), in quiet mode so we don't get prompted (/q), and forces the deletion of read-only files (/f). Whichever program you use, make sure to delete the executable file once the log files are removed so that the tool isn't identified during a future forensic investigation. For Linux systems, we need to get access to the /var/log folder to find the log files.
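Purely as an illustrative sketch (log file names vary by distribution and by the services running on the compromised host, and the IP address below is a placeholder), a quick way to see which logs exist and whether they hold traces of our connection is:

ls -lh /var/log/
# auth.log is the usual authentication log on Debian-based systems such as Kali;
# other distributions may use secure or messages instead.
grep "203.0.113.10" /var/log/auth.log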
Once we have access to the log files, we can simply open them and remove all entries. The following screenshot shows an example of our Raspberry Pi's log folder. We can just delete the files using the remove command, rm, such as rm FILE.txt, or delete the entire folder; however, this wouldn't be as stealthy as wiping existing files clean of our footprint. Another option is available in Bash: we can simply type > /path/to/file to empty the contents of a file without necessarily removing it, which has some stealth benefits. Kali Linux does not ship with a GUI-based text editor, so one easy-to-use tool that we can install is gedit. We'll use apt-get install gedit to download it. Once installed, we can find gedit under the application dropdown or just type gedit in the terminal window. As we can see from the following screenshot, it looks like many common text file editors. Let's click on File and select files from the /var/log folder to modify them.

We also need to erase the command history, since the Bash shell saves the last 500 commands. This forensic evidence can be accessed by typing the more ~/.bash_history command. The following screenshot shows the first of the hundreds of commands we recently ran on our Raspberry Pi. To verify the number of stored commands in the history file, we can type the echo $HISTSIZE command. To erase this history, let's type export HISTSIZE=0. From this point on, the shell will not store any command history; that is, if we press the up arrow key, it will not show the last command. This setting can also be placed in a .bashrc file on Linux hosts. The following screenshot shows that we have verified whether our last 500 commands are stored, and what happens after we erase them. It is a best practice to set this variable prior to running any commands on a compromised system, so that nothing is stored up front. You can also log out and log back in once the export HISTSIZE=0 command is set to clear your history. You should do the same on your C&C server once you conclude your penetration test if you have any concerns about being investigated. A more aggressive and quicker way to remove the history file on a Linux system is to shred it with the shred -zu /root/.bash_history command. This command overwrites the history file with zeros and then deletes it. We can verify this using the less /root/.bash_history command to see whether there is anything left in the history file, as shown in the following screenshot.

Masking our network footprint

Anonymity is a key ingredient when performing our attacks, unless we don't mind someone being able to trace us back to our location and giving up our position. Because of this, we need a way to hide or mask where we are coming from. This is a perfect use case for a proxy, or a group of proxies if we really want to make sure we don't leave a trail of breadcrumbs. When using a proxy, the source of an attack will look as though it is coming from the proxy instead of the real source. Layering multiple proxies can provide an onion effect, in which each layer hides the other, making it very difficult to determine the real source during any forensic investigation. Proxies come in various types and flavors. There are websites devoted to hiding our source online, and with a quick Google search we can find some of the most popular, such as hide.me, Hidestar, NewIPNow, ProxySite, and even AnonyMouse. The following is a screenshot from the NewIPNow website.
Administrators of proxies can see all traffic, as well as identify both the target and the victims that communicate through their proxy. It is highly recommended that you research any proxy prior to using it, as some might use the information they capture without your permission. This includes providing forensic evidence to authorities or selling your sensitive information.

Using ProxyChains

Now, if web-based proxies are not what we are looking for, we can use our Raspberry Pi as a proxy server utilizing the ProxyChains application. ProxyChains is a very easy application to set up and start using. First, we need to install the application. This can be accomplished by running the following command from the CLI:

root@kali:~# apt-get install proxychains

Once installed, we just need to edit the ProxyChains configuration located at /etc/proxychains.conf and put in the proxy servers we would like to use. There are lots of options out there for finding public proxies. We should certainly proceed with some caution, as some proxies will use our data without our permission, so we'll be sure to do our research prior to using one. Once we have one picked out and have updated our proxychains.conf file, we can test it out. To use ProxyChains, we just need to follow this syntax:

proxychains <command you want tunneled and proxied> <opt args>

Based on that syntax, to run an nmap scan, we would use the following command:

root@kali:~# proxychains nmap 192.168.245.0/24
ProxyChains-3.1 (http://proxychains.sf.net)
Starting Nmap 7.25BETA1 ( https://nmap.org )
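For reference, the entries added to /etc/proxychains.conf for a test like the one above might look something like the following sketch. The first ProxyList entry is the local Tor SOCKS proxy that ships in Kali's default configuration; the second is a made-up public HTTP proxy address used purely as a placeholder:

# /etc/proxychains.conf (excerpt)
# dynamic_chain skips dead proxies instead of failing the whole chain.
dynamic_chain
# Resolve DNS names through the proxy so lookups don't leak.
proxy_dns

[ProxyList]
# format: type host port
socks4 127.0.0.1 9050
http 203.0.113.25 8080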
Clearing the data off the Raspberry Pi

Now that we have covered our tracks on the network side as well as on the endpoint, all that is left is any equipment we have left behind, including our Raspberry Pi. To reset the Raspberry Pi back to factory defaults, we can refer back to the steps for installing Kali Linux or the NOOBS software; reinstalling gives us a clean image once again. If we had cloned a golden image, we could simply re-image our Raspberry Pi with that image. If we don't have the option to re-image or reinstall our Raspberry Pi, we can just destroy the hardware. The most important piece to destroy is the microSD card, as it contains everything we have done on the Pi. We may also want to consider destroying any of the interfaces we used (USB Wi-Fi, Ethernet, or Bluetooth adapters), as any of those physical MAC addresses may have been recorded on the target network and could therefore prove the device was there. If we used the onboard interfaces, we may even need to destroy the Raspberry Pi itself.

If the Raspberry Pi is in a location where we cannot reclaim or destroy it, our only option is to remotely corrupt it so that we can remove any clues of our attack on the target. To do this, we can use the rm command within Kali. The rm command is used to remove files from the operating system. As a bonus, rm has some interesting flags that we can use to our advantage: -r and -f. The -r flag performs the operation recursively, so everything in that directory and below will be removed, while the -f flag forces the deletion without asking. So running the command rm -fr * from any directory will remove all contents within that directory and everything beneath it. Where this command gets interesting is if we run it from /, a.k.a. the top of the directory structure. Since the command will remove everything in that directory and below, running it from the top level will remove all files and hence render the box unusable. As any data forensics person will tell us, that data is still there, just no longer referenced by the operating system, so we really need to overwrite it. We can do this with the dd command, which we used back when we were setting up the Raspberry Pi:

dd if=/dev/urandom of=/dev/sda1 (where sda1 is your microSD card)

In this command, we are basically writing random characters over the microSD card. Alternatively, we could always just reformat the whole microSD card using the mkfs.ext4 command:

mkfs.ext4 /dev/sda1 (where sda1 is your microSD card)

That is all helpful, but what happens if we don't want to destroy the device until we absolutely need to, such as when we want the ability to send a remote destroy signal? Kali Linux now includes a LUKS Nuke patch with its install. LUKS allows for a unified key to get into the container, and when combined with Logical Volume Manager (LVM), it can create an encrypted container that needs a password in order to start the boot process. With the Nuke option, if we specify the Nuke password on boot instead of the normal passphrase, all the keys on the system are deleted, rendering the data inaccessible. Here are some great links on how to do this, as well as some more details on how it works:

https://www.kali.org/tutorials/nuke-kali-linux-luks/
http://www.zdnet.com/article/developers-mull-adding-data-nuke-to-kali-linux/

Summary

In this article, we saw that the reports themselves are what our customer sees as our product. It should come as no surprise, then, that we should take great care to ensure they are well organized, informative, accurate, and, most importantly, meet the customer's objectives.

Using Elm and jQuery Together

Eduard Kyvenko
21 Dec 2016
6 min read
jQuery as a library has proven itself an invaluable asset in the development process by lifting the complexity of the DOM API from web programmers' shoulders. For a very long time, jQuery plugins were one of the best ways of writing modular and reusable code. In this post, I will explain how to integrate an existing jQuery plugin into your Elm application. At first glance, this might seem like a bad idea, but bear with me; I will show you how you can benefit from this combination. The example relies on Create Elm App to simplify the setup for a minimal implementation of the jQuery plugin integration. As Richard Feldman mentions in his talk from ReactiveConf 2016, you could, and actually should, take advantage of the opportunity to integrate existing JavaScript assets into your Elm application. As an example, I will use the Select2 library, which I have been using for a very long time. It is useful for numerous use cases, as the native HTML5 implementation of the <select> element is quite limited.

The view

You are expected to have some experience with Elm and understand the application life cycle. The first essential rule to follow when establishing interoperation with user interface components written in JavaScript is: do not ever mutate the DOM tree that was produced by the Elm application. Introducing mutations to elements defined in the view function will most likely cause a runtime error. As long as you follow this simple rule, you can enjoy the best parts of both worlds: complex DOM interactions in jQuery, and strict state management in Elm. Hosting a DOM node for the root element of a jQuery plugin is safe, as long as you don't use the Navigation package for client-side routing. In that case, you will have to define additional logic for detaching the plugin and initializing it again when the route changes. I'll stick to the minimal example and just define a root node for a future Select2 container:

view : Model -> Html Msg
view model =
    div []
        [ text (toString model)
        , div [ id "select2-container" ] []
        ]

You should be familiar with the concept of JavaScript interop with Elm; we will use both an incoming and an outgoing port to establish the communication with Select2.

Modeling the problem

The JavaScript world gives you many options for how you might wire the integration, but don't be distracted by the opportunities, because there is an easy solution already. The second rule of robust interop is to keep the state management in Elm. The application should own the data for initializing the plugin and control its behavior to make it predictable. That way, you will have an extra level of safety, almost with no effort. I have defined the options for the future Select2 UI element in my app using a Dict. This data structure is very convenient for rendering a <select> node with <option> elements inside:

options : Dict String String
options =
    Dict.fromList
        [ ( "US", "United States" )
        , ( "UK", "United Kingdom" )
        , ( "UY", "Uruguay" )
        , ( "UZ", "Uzbekistan" )
        ]

Upon initialization, the Elm app will send the data for the future options through a port:

port output : List ( String, String ) -> Cmd msg

Let's take a quick look at the initialization process here, as you might have a slightly different setup; however, the core idea will always follow the same routine. Include all of the libraries and the application, and embed it into the DOM node:

// Import jQuery and Select2.
var $ = require('jquery');
var select2 = require('select2');

// Import Elm application and initialize it.
var Elm = require('./Main.elm');
var root = document.getElementById('root');
var App = Elm.Main.embed(root);

When this code is executed, your app is ready to send data to the outgoing port.

Port communication

You can use the initial state to send data to JavaScript right after startup, which is probably the best time for initializing all user interface components:

init : ( Model, Cmd Msg )
init =
    ( Model Nothing, output (Dict.toList options) )

To retrieve this data, you have to subscribe to the port on the JavaScript side and define some initialization logic for the jQuery plugin. When the output port receives the data for rendering the options, the application is ready to initialize the plugin. The only implication is that we have to render the <select> element with jQuery or any other templating library. When all the required DOM nodes are rendered, the plugin can be initialized. I will use the change event on the Select2 instance and notify the Elm application through the input port. To simplify the setup, we can trigger the change event right away so that the state of the jQuery plugin instance and the Elm application stays synchronized.

App.ports.output.subscribe(function (options) {
  var $selectContainer = $('#select2-container');

  // Generate a DOM tree with <select> and <option> inside and embed it into the container node.
  var select = $('<select>', {
    html: options.map(function (option) {
      return $('<option>', { value: option[0], text: option[1] });
    })
  }).appendTo($selectContainer);

  // Initialize Select2, when everything is ready.
  var select2 = $(select).select2();

  // Set up the change port subscription.
  select2.on('change', function (event) {
    App.ports.input.send(event.target.value);
  });

  // Trigger the change for the initial value.
  select2.trigger('change');
});

The JavaScript part is simplified as much as possible to demonstrate the main idea behind subscribing to an outgoing port and sending data through an incoming port. You can have a look at the running Elm application with Select2. The source code is available on GitHub in halfzebra/elm-examples.

Conclusion

Elm can get along with pretty much any JavaScript framework or library, as long as said framework or library does not mutate the DOM state of the Elm application. The Elm Architecture will help you make vanilla JavaScript components more reliable. Even though ports cannot transport any abstract data types, except built-in primitives, which is slightly limiting, Elm has high interop potential with most existing JavaScript technology. If done right, you can get an extreme diversity of functionality from a huge JavaScript community under the control of the most reliable state machine on the frontend!

About the author

Eduard Kyvenko is a frontend lead at Dinero. He has been working with Elm for over half a year and has built a tax return and financial statements app for Dinero. You can find him on GitHub at @halfzebra.

How to build a JavaScript Microservices platform

Andrea Falzetti
20 Dec 2016
6 min read
Microservices is one of the most popular topics in software development these days, as are JavaScript and chat bots. In this post, I share my experience in designing and implementing a platform using a microservices architecture. I will also talk about which tools were picked for this platform and how they work together. Let's start by giving you some context about the project.

The challenge

The startup I am working with had the idea of building a platform to help people with depression by tracking and managing the habits that keep them healthy, positive, and productive. The final product is an iOS app with a conversational UI, similar to a chat bot, and probably not very intelligent in version 1.0, but still with some feelings!

Technology stack

For this project, we decided to use Node.js for building the microservices, React Native for the mobile app, ReactJS for the admin dashboard, and Elasticsearch and Kibana for logging and monitoring the applications. And yes, we do like JavaScript!

Node.js microservices toolbox

There are many definitions of a microservice, but I am assuming we agree on a common statement that describes a microservice as an independent component that performs certain actions within your systems, or in a nutshell, a part of your software that solves a specific problem that you have. I got interested in microservices this year, especially when I found out there was a Node.js toolkit called Seneca that helps you organize and scale your code. Unfortunately, my enthusiasm didn't last long, as I faced the first issue: the learning curve to approach Seneca was too high for this project. However, even though I ended up not using it, I wanted to include it here because many people are successfully using it, and I think you should be aware of it and at least consider looking at it.

Instead, we decided to go for a simpler way. We split our project into small Node applications, and we deploy the microservices with pm2, using a pm2 configuration file called ecosystem.json (a minimal sketch appears a little further down). As explained in the documentation, this is a good way of keeping your deployment simple, organized, and monitored. If you like control dashboards, graphs, and colored progress bars, you should look at pm2 Keymetrics--it offers a nice overview of all your processes. It has also been extremely useful to create a GitHub machine user, which is essentially a normal GitHub account with its own attached SSH key that grants access to the repositories containing the project's code. Additionally, we created a node user on our virtual machine with that SSH key loaded in. All of the microservices run under this node user, which has access to the code base through the machine user's SSH key. In this way, we can easily pull the latest code and deploy new versions. Finally, we attached our own SSH keys to the node user so each developer can log in as the node user via SSH:

ssh node@<IP>
cd ./project/frontend-api
git pull

Without being prompted for a password or authorization token from GitHub, we can then restart the microservice using pm2:

pm2 restart frontend-api

Another very important element of our microservices architecture is the API Gateway. After evaluating different options, including AWS API Gateway, HAProxy, Varnish, and Tyk, we decided to use Kong, an open source API layer that offers modularity via plugins (everybody loves plugins!) and scalability features. We picked Kong because it supports WebSockets as well as HTTP, which unfortunately not all of the alternatives did.
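Before looking at how the gateway exposes these services, here is the minimal ecosystem.json sketch mentioned earlier. The service names match the ones used elsewhere in this post, but the paths, ports, and environment values are hypothetical; see the pm2 documentation for the full set of supported fields:

{
  "apps": [
    {
      "name": "frontend-api",
      "script": "index.js",
      "cwd": "/home/node/project/frontend-api",
      "env": { "NODE_ENV": "production", "PORT": 3001 }
    },
    {
      "name": "chat-bot",
      "script": "index.js",
      "cwd": "/home/node/project/chat-bot",
      "env": { "NODE_ENV": "production", "PORT": 3002 }
    }
  ]
}

With a file like this in place, pm2 start ecosystem.json brings up every service it declares, and pm2 restart frontend-api restarts a single one.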
Using Kong in combination with nginx, we mapped our microservices under a single domain name, http://api.example.com, and exposed each microservice under a specific path:

api.example.com/chat-bot
api.example.com/admin-backend
api.example.com/frontend-api
…

This allows us to run the microservices on separate servers and different ports, while having one single gate for the clients consuming our APIs. Finally, the API Gateway is responsible for allowing only authenticated requests to pass through, so this is a great way of protecting the microservices: the gateway is the only public component, and all of the microservices run in a private network.

What does a microservice look like, and how do they talk to each other?

We started by creating a microservice-boilerplate package that includes Express to expose some APIs, Passport to allow only authorized clients to use them, winston for logging their activities, and mocha and chai for testing the microservice. We then created an index.js file that initializes the Express app with a default route, /api/ping. This returns a simple JSON payload containing the message 'pong', which we use to check whether the service is alive. One alternative is to get the status of the process using pm2:

pm2 list
pm2 status <microservice-pid>

Whenever we want to create a new microservice, we start from this boilerplate code. It saves a lot of time, especially if the microservices have a similar shape. The main communication channel between microservices is HTTP via API calls. We are also using WebSockets to allow faster communication between some parts of the platform. We decided to use socket.io, a very simple and efficient way of implementing WebSockets. I recommend creating a Node package that contains the business logic, including the objects, models, prototypes, and common functions, such as read and write methods for the database. Using this approach allows you to include the package in each microservice, with the benefit of having just one place to update if something needs to change.

Conclusions

In this post, I covered the tools used for building a microservices architecture in Node.js. I hope you have found this useful.

About the author

Andrea Falzetti is an enthusiastic full-stack developer based in London. He has been designing and developing web applications for over 5 years. He is currently focused on Node.js, React, microservices architecture, serverless, conversational UI, chat bots, and machine learning. He is currently working at Activate Media, where his role is to estimate, design, and lead the development of web and mobile platforms.

Clean Up Your Code

Packt
19 Dec 2016
23 min read
In this article by Michele Bertoli, the author of the book React Design Patterns and Best Practices, we will learn that in order to use JSX without any problems or unexpected behaviors, it is important to understand how it works under the hood and the reasons why it is a useful tool for building UIs. Our goal is to write clean and maintainable JSX code, and to achieve that we have to know where it comes from, how it gets translated to JavaScript, and which features it provides. In the first section, we will take a small step back, but please bear with me, because it is crucial to master the basics before applying the best practices. In this article, we will see: what JSX is and why we should use it; what Babel is and how we can use it to write modern JavaScript code; the main features of JSX and the differences between HTML and JSX; and the best practices for writing JSX in an elegant and maintainable way.

JSX

Let's see how we can declare our elements inside our components. React gives us two ways to define our elements: the first one is by using JavaScript functions, and the second one is by using JSX, an optional XML-like syntax. In the beginning, JSX is one of the main reasons why people fail to approach React, because looking at the examples on the homepage and seeing JavaScript mixed with HTML for the first time does not seem right to most of us. As soon as we get used to it, we realize that it is very convenient precisely because it is similar to HTML and looks very familiar to anyone who has already created user interfaces on the web. The opening and closing tags make it easier to represent nested trees of elements, something that would have been unreadable and hard to maintain using plain JavaScript.

Babel

In order to use JSX (and es2015) in our code, we have to install Babel. First of all, it is important to clearly understand the problems it can solve for us and why we need to add a step to our process. The reason is that we want to use features of the language that have not been implemented yet in the browser, our target environment. Those advanced features make our code cleaner for developers, but the browser cannot understand and execute them. So the solution is to write our scripts in JSX and es2015, and when we are ready to ship, we compile the sources into es5, the standard specification implemented in the major browsers today. Babel is a popular JavaScript compiler widely adopted within the React community: it can compile es2015 code into es5 JavaScript, as well as compile JSX into JavaScript functions. The process is called transpilation, because it compiles the source into a new source rather than into an executable. Using it is pretty straightforward; we just install it:

npm install --global babel-cli

If you do not like to install it globally (developers usually tend to avoid that), you can install Babel locally in a project and run it through an npm script, but for the purposes of this article a global instance is fine. When the installation is complete, we can run the following command to compile our JavaScript files:

babel source.js -o output.js

One of the reasons why Babel is so powerful is that it is highly configurable. Babel is just a tool to transpile a source file into an output file, but to apply some transformations, we need to configure it.
Luckily, there are some very useful presets of configurations which we can easily install and use:

npm install --global babel-preset-es2015 babel-preset-react

Once the installation is done, we create a configuration file called .babelrc and put the following lines into it to tell Babel to use those presets:

{
  "presets": [
    "es2015",
    "react"
  ]
}

From this point on, we can write es2015 and JSX in our source files and execute the output files in the browser.

Hello, World!

Now that our environment has been set up to support JSX, we can dive into the most basic example: generating a div element. This is how you would create a div with React's createElement function:

React.createElement('div')

React has some shortcut methods for DOM elements, and the following line is equivalent to the one above:

React.DOM.div()

This is the JSX for creating a div element:

<div />

It looks identical to the way we have always created the markup of our HTML pages. The big difference is that we are writing the markup inside a .js file, but it is important to notice that JSX is only syntactic sugar, and it gets transpiled into JavaScript before being executed in the browser. In fact, our <div /> is translated into React.createElement('div') when we run Babel, and that is something we should always keep in mind when we write our templates.

DOM elements and React components

With JSX we can obviously create both HTML elements and React components; the only difference is whether or not they start with a capital letter. For example, to render an HTML button we use <button />, while to render our Button component we use <Button />. The first button gets transpiled into:

React.createElement('button')

While the second one gets transpiled into:

React.createElement(Button)

The difference here is that in the first call we are passing the type of the DOM element as a string, while in the second one we are passing the component itself, which means that it should exist in scope to work. As you may have noticed, JSX supports self-closing tags, which are pretty good for keeping the code terse and do not require us to repeat unnecessary tags.

Props

JSX is very convenient when your DOM elements or React components have props; in fact, following XML, it is pretty easy to set attributes on elements:

<img src="https://facebook.github.io/react/img/logo.svg" alt="React.js" />

The equivalent in JavaScript would be:

React.createElement("img", {
  src: "https://facebook.github.io/react/img/logo.svg",
  alt: "React.js"
});

This is way less readable, and even with only a couple of attributes it starts getting hard to read without a bit of reasoning.

Children

JSX allows you to define children to describe the tree of elements and compose complex UIs. A basic example could be a link with text inside it:

<a href="https://facebook.github.io/react/">Click me!</a>

Which would be transpiled into:

React.createElement(
  "a",
  { href: "https://facebook.github.io/react/" },
  "Click me!"
);

Our link can be enclosed inside a div for some layout requirements, and the JSX snippet to achieve that is the following:

<div>
  <a href="https://facebook.github.io/react/">Click me!</a>
</div>

With the JavaScript equivalent being:

React.createElement(
  "div",
  null,
  React.createElement(
    "a",
    { href: "https://facebook.github.io/react/" },
    "Click me!"
  )
);

It now becomes clear how the XML-like syntax of JSX makes everything more readable and maintainable, but it is always important to know the JavaScript parallel of our JSX to retain control over the creation of elements.
The good part is that we are not limited to having elements as children of elements; we can use JavaScript expressions, such as functions or variables. To do that, we just have to put the expression inside curly braces:

<div>
  Hello, {variable}.
  I'm a {function()}.
</div>

The same applies to non-string attributes:

<a href={this.makeHref()}>Click me!</a>

Differences with HTML

So far we have seen how JSX is similar to HTML; let's now see the small differences between them and the reasons why they exist.

Attributes

We always have to keep in mind that JSX is not a standard language and it gets transpiled into JavaScript, and because of that, some attributes cannot be used. For example, instead of class we have to use className, and instead of for we have to use htmlFor:

<label className="awesome-label" htmlFor="name" />

The reason is that class and for are reserved words in JavaScript.

Style

A pretty significant difference is the way the style attribute works. The style attribute does not accept a CSS string as its HTML parallel does; instead, it expects a JS object where the style names are camelCased:

<div style={{ backgroundColor: 'red' }} />

Root

One important difference with HTML worth mentioning is that, since JSX elements get translated into JavaScript functions and you cannot return two functions in JavaScript, whenever you have multiple elements at the same level you are forced to wrap them in a parent. Let's see a simple example:

<div />
<div />

This gives us the following error:

Adjacent JSX elements must be wrapped in an enclosing tag

While this works:

<div>
  <div />
  <div />
</div>

It is pretty annoying to have to add unnecessary div tags just to make JSX work, but the React developers are trying to find a solution: https://github.com/reactjs/core-notes/blob/master/2016-07/july-07.md

Spaces

There's one thing that could be a little bit tricky at the beginning, and again it relates to the fact that we should always keep in mind that JSX is not HTML, even if it has an XML-like syntax. JSX, in fact, handles the spaces between text and elements differently from HTML, in a way that's counter-intuitive. Consider the following snippet:

<div>
  <span>foo</span>
  bar
  <span>baz</span>
</div>

In the browser, which interprets HTML, this code would give you foo bar baz, which is exactly what we expect it to be. In JSX, instead, the same code would be rendered as foobarbaz, and that is because the three nested lines get transpiled as individual children of the div element, without taking the spaces into account. A common solution is to put a space explicitly between the elements:

<div>
  <span>foo</span>
  {' '}
  bar
  {' '}
  <span>baz</span>
</div>

As you may have noticed, we are using a string containing a single space, wrapped inside a JavaScript expression, to force the compiler to insert a space between the elements.

Boolean Attributes

A couple more things are worth mentioning before starting for real, regarding the way you define Boolean attributes in JSX. If you set an attribute without a value, JSX assumes that its value is true, following the same behavior as the HTML disabled attribute, for example. That means that if we want to set an attribute to false, we have to declare it explicitly as false:

<button disabled />
React.createElement("button", { disabled: true });

And:

<button disabled={false} />
React.createElement("button", { disabled: false });

This can be confusing in the beginning, because we may think that omitting an attribute means false, but it is not like that: with React we should always be explicit to avoid confusion.
Spread attributes

An important feature is the spread attributes operator, which comes from the Rest/Spread Properties for ECMAScript proposal and is very convenient whenever we want to pass all the attributes of a JavaScript object to an element. A common practice that leads to fewer bugs is not to pass entire JavaScript objects down to children by reference, but to use their primitive values, which can be easily validated, making components more robust and error-proof. Let's see how it works:

const foo = { bar: 'baz' }
return <div {...foo} />

That gets transpiled into this:

var foo = { bar: 'baz' };
return React.createElement('div', foo);

JavaScript templating

Last but not least, we started from the point that one of the advantages of moving the templates inside our components, instead of using an external template library, is that we can use the full power of JavaScript, so let's start looking at what that means. The spread attributes operator is obviously an example of that, and another common one is that JavaScript expressions can be used as attribute values by wrapping them in curly braces:

<button disabled={errors.length} />

Now that we know how JSX works and we have mastered it, we are ready to see how to use it in the right way, following some useful conventions and techniques.

Common Patterns

Multi-line

Let's start with a very simple one: as we said, one of the main reasons why we should prefer JSX over React's createElement is its XML-like syntax and the way balanced opening/closing tags are perfect for representing a tree of nodes. Therefore, we should try to use it in the right way and get the most out of it. One example is that, whenever we have nested elements, we should always go multi-line:

<div>
  <Header />
  <div>
    <Main content={...} />
  </div>
</div>

Instead of:

<div><Header /><div><Main content={...} /></div></div>

That is, unless the children are not elements but text or variables. In that case, it can make sense to remain on the same line and avoid adding noise to the markup, like:

<div>
  <Alert>{message}</Alert>
  <Button>Close</Button>
</div>

Always remember to wrap your elements inside parentheses when you write them across multiple lines. In fact, JSX always gets replaced by function calls, and a function call written on a new line after return can give you an unexpected result because of automatic semicolon insertion. Suppose, for example, that you are returning JSX from your render method, which is how you create UIs in React. The following example works fine because the div is on the same line as the return:

return <div />

While this is not right:

return
  <div />

Because you would have:

return;
React.createElement("div", null);

That is why you have to wrap the statement in parentheses:

return (
  <div />
)

Multi-properties

A common problem in writing JSX comes when an element has multiple attributes. One solution would be to write all the attributes on the same line, but this would lead to very long lines, which we do not want in our code (see the next section for how to enforce coding style guides). A common solution is to write each attribute on a new line with one level of indentation, and then put the closing bracket aligned with the opening tag:

<button
  foo="bar"
  veryLongPropertyName="baz"
  onSomething={this.handleSomething}
/>

Conditionals

Things get more interesting when we start working with conditionals, for example, when we want to render some components only when certain conditions are matched.
The fact that we can use JavaScript is obviously a plus, but there are many different ways to express conditions in JSX, and it is important to understand the benefits and the problems of each one of them in order to write code that is readable and maintainable at the same time. Suppose we want to show a logout button only if the user is currently logged in to our application. A simple snippet to start with is the following:

let button
if (isLoggedIn) {
  button = <LogoutButton />
}
return <div>{button}</div>

It works, but it is not very readable, especially if there are multiple components and multiple conditions. What we can do in JSX is use an inline condition:

<div>
  {isLoggedIn && <LogoutButton />}
</div>

This works because if the condition is false, nothing gets rendered, but if the condition is true, the createElement function of the LogoutButton gets called and the element is returned to compose the resulting tree. If the condition has an alternative (the classic if…else statement) and we want, for example, to show a logout button if the user is logged in and a login button otherwise, we can either use JavaScript's if…else:

let button
if (isLoggedIn) {
  button = <LogoutButton />
} else {
  button = <LoginButton />
}
return <div>{button}</div>

Or, better, use a ternary condition, which makes our code more compact:

<div>
  {isLoggedIn ? <LogoutButton /> : <LoginButton />}
</div>

You can find the ternary condition used in popular repositories like the Redux real-world example (https://github.com/reactjs/redux/blob/master/examples/real-world/src/components/List.js), where the ternary is used to show a loading label if the component is fetching the data, or "load more" inside a button, according to the value of the isFetching variable:

<button [...]>
  {isFetching ? 'Loading...' : 'Load More'}
</button>

Let's now see what the best solution is when things get more complicated and, for example, we have to check more than one variable to determine whether to render a component or not:

<div>
  {dataIsReady && (isAdmin || userHasPermissions) && <SecretData />}
</div>

In this case, it is clear that using the inline condition is a good solution, but the readability is strongly impacted, so what we can do instead is create a helper function inside our component and use it in JSX to verify the condition:

canShowSecretData() {
  const { dataIsReady, isAdmin, userHasPermissions } = this.props
  return dataIsReady && (isAdmin || userHasPermissions)
}

<div>
  {this.canShowSecretData() && <SecretData />}
</div>

As you can see, this change makes the code more readable and the condition more explicit. Looking at this code in six months' time, you will still find it clear just by reading the name of the function. If we do not like using functions, we can use an object's getters, which make the code more elegant. For example, instead of declaring a function, we define a getter:

get canShowSecretData() {
  const { dataIsReady, isAdmin, userHasPermissions } = this.props
  return dataIsReady && (isAdmin || userHasPermissions)
}

<div>
  {this.canShowSecretData && <SecretData />}
</div>

The same applies to computed properties: suppose you have two single properties for currency and value. Instead of creating the price string inside your render method, you can create a class method for that:

getPrice() {
  return `${this.props.currency}${this.props.value}`
}

<div>{this.getPrice()}</div>

Which is better because it is isolated and you can easily test it in case it contains logic.
Alternatively, we can go a step further and, as we have just seen, use getters:

get price() {
  return `${this.props.currency}${this.props.value}`
}

<div>{this.price}</div>

Going back to conditional statements, there are other solutions that require using external dependencies. A good practice is to avoid external dependencies as much as we can to keep our bundle smaller, but it may be worth it in this particular case, because improving the readability of our templates is a big win. The first solution is renderIf, which we can install with:

npm install --save render-if

And easily use in our projects like this:

const { dataIsReady, isAdmin, userHasPermissions } = this.props
const canShowSecretData = renderIf(dataIsReady && (isAdmin || userHasPermissions))

<div>
  {canShowSecretData(<SecretData />)}
</div>

We wrap our conditions inside the renderIf function. The utility function that gets returned can be used as a function that receives the JSX markup to be shown when the condition is true. One goal that we should always keep in mind is never to add too much logic inside our components. Some of them will obviously require a bit of it, but we should try to keep them as simple and dumb as possible, so that we can spot and fix errors easily. At the very least, we should try to keep the render method as clean as possible, and to do that we could use another utility library called React Only If, which lets us write our components as if the condition were always true, by setting the conditional function using a higher-order component. To use the library we just need to install it:

npm install --save react-only-if

Once it is installed, we can use it in our apps in the following way:

const SecretDataOnlyIf = onlyIf(
  SecretData,
  ({ dataIsReady, isAdmin, userHasPermissions }) => {
    return dataIsReady && (isAdmin || userHasPermissions)
  }
)

<div>
  <SecretDataOnlyIf
    dataIsReady={...}
    isAdmin={...}
    userHasPermissions={...}
  />
</div>

As you can see here, there is no logic at all inside the component itself. We pass the condition as the second parameter of the onlyIf function; when the condition is matched, the component gets rendered. The function that is used to validate the condition receives the props, the state, and the context of the component. In this way we avoid polluting our component with conditionals, so that it is easier to understand and reason about.

Loops

A very common operation in UI development is displaying lists of items. When it comes to showing lists, we realize that using JavaScript as a template language is a very good idea. If we write a function that returns an array inside our JSX template, each element of the array gets compiled into an element. As we have seen before, we can use any JavaScript expression inside curly braces, and the most obvious way to generate an array of elements, given an array of objects, is to use map. Let's dive into a real-world example: suppose you have a list of users, each one with a name property attached to it. To create an unordered list to show the users, you can do:

<ul>
  {users.map(user => <li>{user.name}</li>)}
</ul>

This snippet is incredibly simple and incredibly powerful at the same time; it is where the power of HTML and JavaScript converge.

Control Statements

Conditionals and loops are very common operations in UI templates, and you may feel it is wrong to use the JavaScript ternary or the map function for them.
JSX has been built in a way that it only abstracts the creation of the elements, leaving the logic parts to real JavaScript, which is great, but sometimes the code can become less clear. In general, we aim to remove all the logic from our components, and especially from our render methods, but sometimes we have to show and hide elements according to the state of the application, and very often we have to loop through collections and arrays. If you feel that using JSX for that kind of operation would make your code more readable, there is a Babel plugin for that: jsx-control-statements. It follows the same philosophy as JSX and does not add any real functionality to the language; it is just syntactic sugar that gets compiled into JavaScript. Let's see how it works. First of all, we have to install it:

npm install --save jsx-control-statements

Once it is installed, we have to add it to the list of Babel plugins in our .babelrc file:

"plugins": ["jsx-control-statements"]

From now on we can use the syntax provided by the plugin, and Babel will transpile it together with the common JSX syntax. A conditional statement written using the plugin looks like the following snippet:

<If condition={this.canShowSecretData}>
  <SecretData />
</If>

Which gets transpiled into a ternary expression:

{canShowSecretData ? <SecretData /> : null}

The If component is great, but if for some reason you have nested conditions in your render method, it can easily become messy and hard to follow. This is where the Choose component comes to help:

<Choose>
  <When condition={...}>
    <span>if</span>
  </When>
  <When condition={...}>
    <span>else if</span>
  </When>
  <Otherwise>
    <span>else</span>
  </Otherwise>
</Choose>

Please notice that the code above gets transpiled into multiple ternaries. Last but not least, there is a "component" (always remember that we are not talking about real components but just syntactic sugar) to manage loops, which is very convenient as well:

<ul>
  <For each="user" of={this.props.users}>
    <li>{user.name}</li>
  </For>
</ul>

The code above gets transpiled into a map function--no magic there. If you are used to using linters, you might wonder why the linter is not complaining about that code. In fact, the variable user doesn't exist before the transpilation, nor is it wrapped in a function. To avoid those linting errors, there's another plugin to install: eslint-plugin-jsx-control-statements. If you did not understand the previous sentence, don't worry: in the next section we will talk about linting.

Sub-render

It is worth stressing that we always want to keep our components very small and our render methods very clean and simple. However, that is not an easy goal, especially when you are creating an application iteratively, and in the first iteration you are not sure exactly how to split the components into smaller ones. So, what should we do when the render method becomes too big, to keep it maintainable? One solution is to split it into smaller functions in a way that lets us keep all the logic in the same component. Let's see an example:

renderUserMenu() {
  // JSX for user menu
}

renderAdminMenu() {
  // JSX for admin menu
}

render() {
  return (
    <div>
      <h1>Welcome back!</h1>
      {this.userExists && this.renderUserMenu()}
      {this.userIsAdmin && this.renderAdminMenu()}
    </div>
  )
}

This is not always considered a best practice, because it seems more obvious to split the component into smaller ones, but sometimes it helps just to keep the render method cleaner.
For example, in the Redux real-world example, a sub-render method is used to render the load more button. Now that we are JSX power users, it is time to move on and see how to follow a style guide within our code to make it consistent.

Summary

In this article we took a deep look at how JSX works and how to use it in the right way in our components. We started from the basics of the syntax to build a solid foundation that will let us master JSX and its features.

Random Value Generators in Elm

Eduard Kyvenko
19 Dec 2016
5 min read
The purely functional nature of Elm has certain implications when it comes to generating random values. On the other hand, it opens up a completely new dimension for producing values of any desired shape, which is extremely useful in some cases. This article covers the core concepts for working with the Random module. JavaScript offers Math.random as a way of producing random numbers; unlike a traditional pseudorandom number generator, it does not expect a seed. Even though Elm is compiled to JavaScript, it does not rely on the native implementation for random number generation. It gives you more control by offering an API for both producing random values without explicitly specifying a seed, and having the option to specify the seed explicitly and preserve its state. Both ways have tradeoffs and should be used in different situations.

Random values without a Seed

Before digging deeper, I recommend that you look into the official Elm Guide, Effects / Random, where you will find the most basic example of Random.generate. It is the easiest way to put your hands on random values, but there are some significant tradeoffs you should be aware of. It relies on Time.now behind the scenes, which means you cannot guarantee sufficient randomness if you run this command multiple times consecutively. In other words, there is a risk of getting the same value from running Random.generate multiple times within a short period of time. A good use case for this kind of command would be generating the seed for future, more efficient and secure random values. I have written a little seed generator, which can be used to provide a seed for future Random.step calls:

seedGenerator : Generator Seed
seedGenerator =
    Random.int Random.minInt Random.maxInt
        |> Random.map Random.initialSeed

The current time serves as the seed for Random.generate, and as you might know already, retrieving the current time from JavaScript is a side effect. Every value will arrive with a message. I will go ahead and define it; the generator will return a value of the Seed type:

type Msg
    = Update Seed

init =
    ( { seed = Nothing }
      -- Initial command to create an independent Seed.
    , Random.generate Update seedGenerator
    )

Storing the seed as a Maybe value makes a lot of sense, since it is not present in the model at the very beginning. The initial application state will execute the generator and produce a message with a new seed, which will be accessible inside the update function:

update msg model =
    case msg of
        Update seed ->
            -- Save newly created Seed into state.
            ( { model | seed = Just seed }, Cmd.none )

This concludes the initial setup for using random value generators with a Seed. As I have mentioned already, Random.generate is not a statistically reliable source of random values; therefore, you should avoid relying on it too much in situations where you need multiple random values at a time.

Random values with a Seed

Using Random.step might be a little hard at the start. The type annotation for this function tells you that you will get a tuple with your newly generated value and the next seed state for future steps:

Generator a -> Seed -> (a, Seed)

This example application will put every new random value on a stack and display it in the DOM. I will extend the model with an additional key for saving random integers:

type alias Model =
    { seed : Maybe Seed
    , stack : List Int
    }

In the new handler for putting random values on the stack, I rely heavily on Maybe.map. It is very convenient when you want to make an impossible state impossible.
In this case, I don't want to generate any new values if the seed is missing for some reason:

update msg model =
    case msg of
        Update seed ->
            -- Preserve newly initialized Seed state.
            ( { model | seed = Just seed }, Cmd.none )

        PutRandomNumber ->
            let
                {- If the seed was present, the new model will contain
                   the new value and a new state for the seed.
                -}
                newModel : Model
                newModel =
                    model.seed
                        |> Maybe.map (Random.step (Random.int 0 10))
                        |> Maybe.map
                            (\( number, seed ) ->
                                { model
                                    | seed = Just seed
                                    , stack = number :: model.stack
                                }
                            )
                        |> Maybe.withDefault model
            in
                ( newModel
                , Cmd.none
                )

In short, the new branch will generate a random integer and a new seed, and update the model with those new values if the seed was present. This concludes the basic example of Random.step usage, but there's a lot more to learn.

Generators

You can get pretty far with Generator and define something more complex than just an integer. Let's define a generator for producing random stats for calculating BMI:

type alias Model =
    { seed : Maybe Seed
    , stack : List BMI
    }

type alias BMI =
    { weight : Float
    , height : Float
    , bmi : Float
    }

valueGenerator : Generator BMI
valueGenerator =
    Random.map2 (\w h -> BMI w h (w / (h * h)))
        (Random.float 60 150)
        (Random.float 0.6 1.2)

Random.map allows you to take values from a passed generator and apply a function to the results, which is very convenient for simple calculations such as BMI. You can raise the bar with Random.andThen and produce generators based on random values, which is super useful for making combinations without repeats. Check out the source of this example application on GitHub: elm-examples/random-initial-seed.

Conclusion

Elm offers a powerful abstraction for the declarative definition of random value generators. Building values of any complex shape becomes quite simple by combining the power of Random.map. However, it might be a little overwhelming after JavaScript or any other imperative language. Give it a chance; maybe you will need a reliable generator for custom values in your next project!

About the author

Eduard Kyvenko is a frontend lead at Dinero. He has been working with Elm for over half a year and has built a tax return and financial statements app for Dinero. You can find him on GitHub at @halfzebra.

Storing Records and Interface customization

Packt
16 Dec 2016
18 min read
 In this article by Paul Goody, the author of the book Salesforce CRM - The Definitive Admin Handbook - Fourth Edition, we will describe in detail the Salesforce CRM record storage features and user interface that can be customized, such as objects, fields, and page layouts. In addition, we will see an overview of the relationship that exists between the profile and these customizable features that the profile controls. This article looks at the methods to configure and tailor the application to suit the way your company information can be best represented within the Salesforce CRM application. We will look at the mechanisms to store data in Salesforce and the concepts of objects and fields. The features that allow this data to be grouped and arranged presented within the application are then considered by looking at apps, tabs, page layouts, and record types. Finally, we will take a look at some of the features that allow views of data to be presented and customized by looking in detail at related types, related lists, and list views. Finally, you will be presented with a number of questions about the key features of Salesforce CRM administration in the area of Standard and Custom Objects, which are covered in this article. We will cover the following topics in this article: Objects Fields Object relationships Apps Tabs Renaming labels for standard tabs, standard objects, and standard fields Creating custom objects Object limits Creating custom object relationships Creating custom fields Dependent picklists Building relationship fields Lookup relationship options Master detail relationship options Lookup filters Building formulas Basic formula Advanced formula Building formulas--best practices Building formula text and compiled character size limits Custom field governance Page layouts Freed-based page layouts Record types Related lists (For more resources related to this topic, see here.) The relationship between a profile and the features that it controls The following diagram describes the relationship that exists between a profile and the features that it controls: The profile is used to: Control access to the type of license specified for the user and any login hours or IP address restrictions that are set. Control access to objects and records using the role and sharing model. If the appropriate object-level permission is not set on the user's profile, then the user will be unable to gain access to the records of that object type in the application. In this article, we will look at the configurable elements that are set in conjunction with a profile. These are used to control the structure and the user interface for the Salesforce CRM application. Objects Objects are a key element in Salesforce CRM as they provide a structure to store data and are incorporated in the interface, allowing users to interact with the data. Similar in nature to a database table, objects have the following properties: Fields, which are similar in concept to a database column Records, which are similar in concept to a database row Relationships with other objects Optional tabs, which are user-interface components to display the object data Standard objects Salesforce provides standard objects in the application when you sign up; these include Account, Contact, Opportunity, and so on. These are the tables that contain the data records in any standard tab, such as Accounts, Contacts, and Opportunities. In addition to the standard objects, you can create custom objects and tabs. 
Custom objects Custom objects are the tables you create to store your data. You can create a custom object to store data specific to your organization. Once you have the custom objects, and have created records for these objects, you can also create reports and dashboards based on the record data in your custom object. Fields Fields in Salesforce are similar in concept to a database column: they store the data for the object records. An object record is analogous to a row in a database table. Standard fields Standard fields are predefined fields that are included as standard within the Salesforce CRM application. Standard fields cannot be deleted but non-required standard fields can be removed from page layouts, whenever necessary. With standard fields, you can customize visual elements that are associated to the field, such as field labels and field-level help, as well as certain data definitions, such as picklist values, the formatting of auto-number fields (which are used as unique identifiers for the records), and setting of field history tracking. Some aspects, however, such as the field name, cannot be customized, and some standard fields, such as Opportunity Probability, do not allow the changing of the field label. Custom fields Custom fields are unique to your business needs and can not only be added and amended, but also deleted. Creating custom fields allow you to store the information that is necessary for your organization. Both standard and custom fields can be customized to include custom help text to help users understand how to use the field as shown in following screenshot Object relationships Object relationships can be set on both standard and custom objects and are used to define how records in one object relate to records in another object. Accounts, for example, can have a one-to-many relationship with opportunities; these relationships are presented in the application as related lists. Apps An app in Salesforce is a container for all the objects, tabs, processes, and services associated with a business function. There are standard and custom apps that are accessed using the App menu located at the top-right corner of the Salesforce page, as shown in the following screenshot: When users select an app from the App menu, their screen changes to present the objects associated with that app. For example, when switching from an app that contains the Campaign tab to one that does not, the Campaign tab no longer appears. This feature is applied to both standard and custom apps. Standard apps Salesforce provides standard apps such as Call Center, Community, Content, MarketingSales, Salesforce Chatter, and Site.com. Custom apps A custom app can optionally include a custom logo. Both standard and custom apps consist of a name, a description, and an ordered list of tabs. Subtab apps A subtab app is used to specify the tabs that appear on the Chatter profile page. Subtab apps can include both default and custom tabs that you can set. Tabs A tab is a user-interface element that, when clicked, displays the record data on a page specific to that object. Hiding and showing tabs To customize your personal tab settings, navigate to Setup | My Personal Settings | Change My Display | Customize My Tabs. Now, choose the tabs that will display in each of your apps by moving the tab name between the Available Tabs and the Selected Tabs sections and click on Save. 
The following screenshot shows the section of tabs for the Sales app: To customize the tab settings of your users, navigate to Setup | Manage Users | Profiles. Now, select a profile and click on Edit. Scroll down to the Tab Settings section of the page, as shown in the following screenshot: Standard tabs Salesforce provides tabs for each of the standard objects that are provided in the application when you sign up. For example, there are standard tabs for Accounts, Contacts, Opportunities, and so on: Visibility of the tab depends on the setting on the Tab Display setting for the app. Custom tabs You can create three different types of custom tabs: Custom Object Tabs, Web Tabs, and Visualforce Tabs. Custom Object Tabs allow you to create, read, update, and delete the data records in your custom objects. Web Tabs display any web URL in a tab within your Salesforce application. Visualforce Tabs display custom user-interface pages created using Visualforce. Creating custom tabs: The text displayed on the Custom tab is set using the Plural Label of the custom object, which is entered when creating the custom object. If the tab text needs to be changed, this can be done by changing the Plural Label stored in the custom object. Salesforce.com recommends selecting the Append tab to a user’s existing personal customization checkbox. This benefits your users as they will automatically be presented with the new tab and can immediately access the corresponding functionality without having to first customize their personal settings themselves. It is recommended that you do not show tabs by setting appropriate permissions so that the users in your organization cannot see any of your changes until you are ready to make them available. You can create up to 25 custom tabs in the Enterprise Edition, and as many as you require in the Unlimited and Performance Editions. To create custom tabs for a custom object, navigate to Setup | Create | Tabs. Now, select the appropriate tab type and/or object from the available selections, as shown in the following screenshot: Renaming labels for standard tabs, standard objects, and standard fields Labels generally reflect the text that is displayed and presented to your users in the user interface and in reports within the Salesforce application. You can change the display labels of standard tabs, objects, fields, and other related user interface labels so they can reflect your company's terminology and business requirements better. For example, the Accounts tab and object can be changed to Clients; similarly, Opportunities to Deals, and Leads to Prospects. Once changed, the new label is displayed on all user pages. The Setup Pages and Setup Menu sections cannot be modified and do not include any renamed labels and continue. Here, the standard tab, object, and field reference continues to use the default, original labels. Also, the standard report names and views continue to use the default labels and are not renamed. To change standard tab, objects, and field labels, navigate to Setup | Customize | Tabs Names and Labels | Rename Tabs and Labels. Now, select a language, and then click on Edit to modify the tab names and standard field labels, as shown in the following screenshot: Click on Edit to select the tab that you wish to rename. Although the screen indicates that this is a change for the tab's name, this selection will also allow you to change the labels for the object and fields, in addition to the tab name. To change field labels, click through to step 2. 
Enter the new field labels. Here, we will rename Accounts tab to Clients. Enter the Singular and Plural names and then click on Next, as shown in the following screenshot: Only the following standard tabs and objects can be renamed: Accounts, Activities, Articles, Assets, Campaigns, Cases, Contacts, Contracts, Documents, Events, Ideas, Leads, Libraries, Opportunities, Opportunity Products, Partners, Price Books, Products, Quote Line Items, Quotes, Solutions, Tasks. Tabs such as HomeChatter, Forecasts, Reports, and Dashboards cannot be renamed. The following screenshot shows Standard Field available: Salesforce looks for the occurrence of the Account label and displays an auto-populated screen showing where the Account text will be replaced with Client. This auto-population of text is carried out for the standard tab, the standard object, and the standard fields. Review the replaced text, amend as necessary, and then click on Save, as shown in the following screenshot: After renaming, the new labels are automatically displayed on the tab, in reports, in dashboards, and so on. Some standard fields, such as Created By and Last Modified, are prevented from being renamed because they are audit fields that are used to track system information. You will, however, need to carry out the following additional steps to ensure the consistent renaming throughout the system as these may need manual updates: Check all list view names as they do not automatically update and will continue to show the original object name until you change them manually. Review standard report names and descriptions for any object that you have renamed. Check the titles and descriptions of any e-mail templates that contain the original object or field name, and update them as necessary. Review any other items that you have customized with the standard object or field name. For example, custom fields, page layouts, and record types may include the original tab or field name text that is no longer relevant. If you have renamed tabs, objects, or fields, you can also replace the Salesforce online help with a different URL. Your users can view this replaced URL whenever they click on any context-sensitive help link on an end-user page or from within their personal setup options. Creating custom objects Custom objects are database tables that allow you to store data specific to your organization on salesforce.com. You can use custom objects to extend Salesforce functionality or to build new application functionality. You can create up to 200 custom objects in the Enterprise Edition and 2000 in the Unlimited Edition. Once you have created a custom object, you can create a custom tab, custom-related lists, reports, and dashboards for users to interact with the custom object data. To create a custom object, navigate to Setup | Create | Objects. Now click on New Custom Object, or click on Edit to modify an existing custom object. The following screenshot shows the resulting screen:   On the Custom Object Definition Edit page, you can enter the following: Label: This is the visible name that is displayed for the object within the Salesforce CRM user interface and shown on pages, views, and reports, for example. Plural Label: This is the plural name specified for the object, which is used within the application in places such as reports and on tabs (if you create a tab for the object). Gender (language dependent): This field appears if your organization-wide default language expects gender. 
This is used for organizations where the default language settings are, for example, Spanish, French, Italian, and German, among many others. Your personal language preference setting does not affect whether the field appears or not. For example, if your organization's default language is English, but your personal language is French, you will not be prompted for gender when creating a custom object. Starts with a vowel sound: Use of this setting depends on your organization's default language and is a linguistic check to allow you to specify whether your label is to be preceded by an instead of a; for example, resulting in reference to the object as an Order instead of a Order. Object Name: This is a unique name used to refer to the object. Here, the Object Name field must be unique and can only contain underscores and alphanumeric characters. It must also begin with a letter, not contain spaces or two consecutive underscores, and not end with an underscore. Description: This is an optional description of the object. A meaningful description will help you explain the purpose of your custom objects when you are viewing them in a list. Context-Sensitive Help Setting: This defines what information is displayed when your users click on the Help for this Page context-sensitive help link from the custom object record home (overview), edit, and detail pages, as well as list views and related lists. The Help & Training link at the top of any page is not affected by this setting; it always opens Salesforce Help & Training window. Record Name: This is the name that is used in areas such as page layouts, search results, key lists, and related lists, as shown next. Data Type: This sets the type of field for the record name. Here, the data type can be either text or auto-number. If the data type is set to be Text, then when a record is created, users must enter a text value, which does not need to be unique. If the data type is set to be Auto Number, it becomes a read-only field, whereby new records are automatically assigned a unique number, as shown in the following screenshot: Display Format: This option, as shown in the preceding example, only appears when the Data Type field is set to Auto Number. It allows you to specify the structure and appearance of the Auto Number field. For example, {YYYY}{MM}-{000} is a display format that produces a four-digit year and a two-digit month prefix to a number with leading zeros padded to three digits. Example data output would include: 201203-001, 201203-066, 201203-999, 201203-1234. It is worth noting that although you can specify the number to be three digits, if the number of records created becomes over 999, the record will still be saved but the automatically incremented number becomes 1000, 1001, and so on. Starting Number: As described, Auto Number fields in Salesforce CRM are automatically incremented for each new record. Here, you must enter the starting number for the incremental count, which does not have to be set to start from one. Allow Reports: This setting is required if you want to include the record data from the custom object in any report or dashboard analytics. When a custom object has a relationship field associating it to a standard object, a new Report Type may appear in the standard report category. The new report type allows the user to create reports that relate the standard object to the custom object by selecting the standard object for the report type category instead of the custom object. 
Such relationships can be either a lookup or a master-detail. A new Report Type is created in the standard report category if the custom object is either the lookup object on the standard object or the custom object has a master-detail relationship with the standard object. Lookup relationships create a relationship between two records so that you can associate them with each other. Moreover, the master-detail relationship fields created are described in more detail later in this section. relationship between records where the master record controls certain behaviors of the detail record such as record deletion and security. When the custom object has a master-detail relationship with a standard object or is a lookup object on a standard object, a new report type will appear in the standard report category. The new report type allows the user to create reports that relate the standard object to the custom object, which is done by selecting the standard object for the report type category instead of the custom object. Allow Activities: This allows users to include tasks and events related to the custom object records, which appear as a related list on the custom object page. Track Field History: This enables the tracking of data-field changes on the custom object records, such as who changed the value of a field and when it was changed. Fields history tracking also stores the value of the field before and after the fields edit. This feature is useful for auditing and data-quality measurement, and is also available within the reporting tools. The field history data is retained for up to 18 months, and you can set field history tracking for a maximum of 20 fields for Enterprise, Unlimited, and Performance Editions. Allow in Chatter Groups: This setting allows your users to add records of this custom object type to Chatter groups. When enabled, records of this object type that are created using the group publisher are associated with the group, and also appear in the group record list. When disabled, records of this object type that are created using the group publisher are not associated with the group. Deployment Status: This indicates whether the custom object is now visible and available for use by other users. This is useful as you can easily set the status to In Development until you are happy for users to start working with the new object. Add Notes & Attachments: This setting allows your users to record notes and attach files to the custom object records. When this is specified, a related list with the New Note and Attach File buttons automatically appears on the custom object record page where your users can enter notes and attach documents. The Add Notes & Attachments option is only available when you create a new object. Launch the New Custom Tab Wizard: This starts the custom tab wizard after you save the custom object. The New Custom Tab Wizard option is only available when you create a new object. If you do not select Launch the New Custom Tab Wizard, you will not be able to create a tab in this step; however, you can create the tab later, as described in the section Custom tabs covered earlier in this article. When creating a custom object, a custom tab is not automatically created. Summary This article, thus, describes in detail the Salesforce CRM record storage features and user interface that can be customized, the mechanism to store data in Salesforce CRM, the relationship that exists between the profile, and the customizable features that the profile controls. 
Resources for Article: Further resources on this subject: Introducing Dynamics CRM [article] Understanding CRM Extendibility Architecture [article] Getting Dynamics CRM 2015 Data into Power BI [article]

R and its Diverse Possibilities

Packt
16 Dec 2016
11 min read
In this article by Jen Stirrup, the author of the book Advanced Analytics with R and Tableau, We will cover, with examples, the core essentials of R programming such as variables and data structures in R such as matrices, factors, vectors, and data frames. We will also focus on control mechanisms in R ( relational operators, logical operators, conditional statements, loops, functions, and apply) and how to execute these commands in R to get grips before proceeding to article that heavily rely on these concepts for scripting complex analytical operations. (For more resources related to this topic, see here.) Core essentials of R programming One of the reasons for R’s success is its use of variables. Variables are used in all aspects of R programming. For example, variables can hold data, strings to access a database, whole models, queries, and test results. Variables are a key part of the modeling process, and their selection has a fundamental impact on the usefulness of the models. Therefore, variables are an important place to start since they are at the heart of R programming. Variables In the following section we will deal with the variables—how to create variables and working with variables. Creating variables It is very simple to create variables in R, and to save values in them. To create a variable, you simply need to give the variable a name, and assign a value to it. In many other languages, such as SQL, it’s necessary to specify the type of value that the variable will hold. So, for example, if the variable is designed to hold an integer or a string, then this is specified at the point at which the variable is created. Unlike other programming languages, such as SQL, R does not require that you specify the type of the variable before it is created. Instead, R works out the type for itself, by looking at the data that is assigned to the variable. In R, we assign variables using an assignment variable, which is a less than sign (<) followed by a hyphen (-). Put together, the assignment variable looks like so: Working with variables It is important to understand what is contained in the variables. It is easy to check the content of the variables using the lscommand. If you need more details of the variables, then the ls.strcommand will provide you with more information. If you need to remove variables, then you can use the rm function. Data structures in R The power of R resides in its ability to analyze data, and this ability is largely derived from its powerful data types. Fundamentally, R is a vectorized programming language. Data structures in R are constructed from vectors that are foundational. This means that R’s operations are optimized to work with vectors. Vector The vector is a core component of R. It is a fundamental data type. Essentially, a vector is a data structure that contains an array where all of the values are the same type. For example, they could all be strings, or numbers. However, note that vectors cannot contain mixed data types. R uses the c() function to take a list of items and turns them into a vector. Lists R contains two types of lists: a basic list, and a named list. A basic list is created using the list() operator. In a named list, every item in the list has a name as well as a value. named lists are a good mapping structure to help map data between R and Tableau. In R, lists are mapped using the $ operator. Note, however, that the list label operators are case sensitive. Matrices Matrices are two-dimensional structures that have rows and columns. 
The matrices are lists of rows. It’s important to note that every cell in a matrix has the same type. Factors A factor is a list of all possible values of a variable in a string format. It is a special string type, which is chosen from a specified set of values known as levels. They are sometimes known as categorical variables. In dimensional modeling terminology, a factor is equivalent to a dimension, and the levels represent different attributes of the dimension. Note that factors are variables that can only contain a limited number of different values. Data frames The data frame is the main data structure in R. It’s possible to envisage the data frame as a table of data, with rows and columns. Unlike the list structure, the data frame can contain different types of data. In R, we use the data.frame() command in order to create a data frame. The data frame is extremely flexible for working with structured data, and it can ingest data from many different data types. Two main ways to ingest data into data frames involves the use of many data connectors, which connect to data sources such as databases, for example. There is also a command, read.table(), which takes in data. Data Frame Structure Here is an example, populated data frame. There are three columns, and two rows. The top of the data frame is the header. Each horizontal line afterwards holds a data row. This starts with the name of the row, and then followed by the data itself. Each data member of a row is called a cell. Here is an example data frame, populated with data: Example Data Frame Structure df = data.frame( Year=c(2013, 2013, 2013), Country=c("Arab World","Carribean States", "Central Europe"), LifeExpectancy=c(71, 72, 76)) As always, we should read out at least some of the data frame so we can double-check that it was set correctly. The data frame was set to the df variable, so we can read out the contents by simply typing in the variable name at the command prompt: To obtain the data held in a cell, we enter the row and column co-ordinates of the cell, and surround them by square brackets []. In this example, if we wanted to obtain the value of the second cell in the second row, then we would use the following: df[2, "Country"] We can also conduct summary statistics on our data frame. For example, if we use the following command: summary(df) Then we obtain the summary statistics of the data. The example output is as follows: You’ll notice that the summary command has summarized different values for each of the columns. It has identified Year as an integer, and produced the min, quartiles, mean, and max for year. The Country column has been listed, simply because it does not contain any numeric values. Life Expectancy is summarized correctly. We can change the Year column to a factor, using the following command: df$Year <- as.factor(df$Year) Then, we can rerun the summary command again: summary(df) On this occasion, the data frame now returns the correct results that we expect: As we proceed throughout this book, we will be building on more useful features that will help us to analyze data using data structures, and visualize the data in interesting ways using R. Control structures in R R has the appearance of a procedural programming language. However, it is built on another language, known as S. S leans towards functional programming. It also has some object-oriented characteristics. This means that there are many complexities in the way that R works. 
In this section, we will look at some of the fundamental building blocks that make up key control structures in R, and then we will move onto looping and vectorized operations. Logical operators Logical operators are binary operators that allow the comparison of values: Operator Description <  less than <= less than or equal to >  greater than >= greater than or equal to == exactly equal to != not equal to !x Not x x | y x OR y x & y x AND y isTRUE(x) test if X is TRUE For loops and vectorization in R Specifically, we will look at the constructs involved in loops. Note, however, that it is more efficient to use vectorized operations rather than loops, because R is vector-based. We investigate loops here, because they are a good first step in understanding how R works, and then we can optimize this understanding by focusing on vectorized alternatives that are more efficient. More information about control flows can be obtained by executing the command at the command line: Help?Control The control flow commands take decisions and make decisions between alternative actions. The main constructs are for, while, and repeat. For loops Let’s look at a for loop in more detail. For this exercise, we will use the Fisher iris dataset, which is installed along with R by default. We are going to produce summary statistics for each species of iris in the dataset. You can see some of the iris data by typing in the following command at the command prompt: head(iris) We can divide the iris dataset so that the data is split by species. To do this, we use the split command, and we assign it to the variable called IrisBySpecies: IrisBySpecies <- split(iris,iris$Species) Now, we can use a for loop in order to process the data in order to summarize it by species. Firstly, we will set up a variable called output, and set it to a list type. For each species held in the IrisBySpecies variable, we set it to calculate the minimum, maximum, mean, and total cases. It is then set to a data frame called output.df, which is printed out to the screen: output <- list() for(n in names(IrisBySpecies)){ ListData <- IrisBySpecies[[n]] output[[n]] <- data.frame(species=n, MinPetalLength=min(ListData$Petal.Length), MaxPetalLength=max(ListData$Petal.Length), MeanPetalLength=mean(ListData$Petal.Length), NumberofSamples=nrow(ListData)) output.df <- do.call(rbind,output) } print(output.df) The output is as follows: We used a for loop here, but they can be expensive in terms of processing. We can achieve the same end by using a function that uses a vector called Tapply. Tapply processes data in groups. Tapply has three parameters; the vector of data, the factor that defines the group, and a function. It works by extracting the group, and then applying the function to each of the groups. Then, it returns a vector with the results. We can see an example of tapply here, using the same dataset: output <- data.frame(MinPetalLength=tapply(iris$Petal.Length,iris$Species,min), MaxPetalLength=tapply(iris$Petal.Length,iris$Species,max), MeanPetalLength=tapply(iris$Petal.Length,iris$Species,mean), NumberofSamples=tapply(iris$Petal.Length,iris$Species,length)) print(output) This time, we get the same output as previously. The only difference is that by using a vectorized function, we have concise code that runs efficiently. To summarize, R is extremely flexible and it’s possible to achieve the same objective in a number of different ways. 
As we move forward through this book, we will make recommendations about the optimal method to select, and the reasons for the recommendation. Functions R has many functions that are included as part of the installation. In the first instance, let’s look to see how we can work smart by finding out what functions are available by default. In our last example, we used the split() function. To find out more about the split function, we can simply use the following command: ?split Or we can use: help(split) It’s possible to get an overview of the arguments required for a function. To do this, simply use the args command: args(split) Fortunately, it’s also possible to see examples of each function by using the following command: example(split) If you need more information than the documented help file about each function, you can use the following command. It will go and search through all the documentation for instances of the keyword: help.search("split") If you  want to search the R project site from within RStudio, you can use the RSiteSearch command. For example: RSiteSearch("split") Summary In this article, we have looked at various essential structures in working with R. We have looked at the data structures that are fundamental to using R optimally. We have also taken the view that structures such as for loops can often be done better as vectorized operations. Finally, we have looked at the ways in which R can be used to create functions in order to simply code. Resources for Article: Further resources on this subject: Getting Started with Tableau Public [article] Creating your first heat map in R [article] Data Modelling Challenges [article]

Creating a Simple Level Select Screen

Gareth Fouche
16 Dec 2016
6 min read
For many types of games, whether multiplayer FPS or 2D platformer, it is desirable to present the player with a list of levels they can play. Sonic the Hedgehog © Sega This tutorial will guide you through creating a simple level select screen in Unity. For the first step, we need to create some simple test levels for the player to select from. From the menu, select File | New Scene to create a new scene with a main Camera and Directional Light. Then, from the hierarchy view, select Create | 3D Object | Plane to place a ground plane. Select Create | 3D Object | Cube to place a cube in the scene. Copy and paste that cube a few times, arranging the cubes on the plane and positioning the camera until you have a basic level layout as follows: Save this Scene as “CubeWorld”, our 1st level. Create another scene and repeat the above process, but instead of cubes, place spheres. Save this scene as “SphereWorld”, our second game level: We will need preview images of each level for our level select screen. Take a screenshot of each scene, open up any image editor, paste your image, and resize/crop that image until it is 400 x 180 pixels. Do this for both the levels, save them as “CubeWorld.jpg” and “SphereWorld.jpg”, and then pull those images into your project. In the import settings, make sure to set the Texture Type for the images to Sprite (2D and UI): Now that we have the game levels, it’s time to create the Level Select scene. As before, create a new empty scene and name it “LevelSelectMenu”. This time, select Create | UI | Canvas. This will create the canvas object that is the root of our GUI. In an image editor, create a small 10 x 10 pixel image, fill it with black and save it as “Background.jpg”. Drag it into the project, setting its image settings, as before, to Sprite (2D and UI). Now, from the Create | UI menu, create an Image. Drag “Background.jpg” from the Project pane into the Image component’s Source Image field. Set the Width and the Height to 2000 pixels, this should be enough to cover the entire canvas. From the same UI menu, create a Text component. In the inspector, set the Width and the Height of that Text to 300 x 80 pixels. Under the Text property, enter “Select Level”, and then set the Font Size to 50 and Color to white: Using the transform controls, drag the text to the upper middle area of the screen. If you can’t see the Text, make sure it is positioned below the Image under Canvas in the Hierarchy view. Order matters; the top most child of Canvas will be rendered first, then the second, and so on. So, make sure your background image isn’t being drawn over your Text. Now, from the UI menu, create a Button again. Make this button 400 pixels wide and 180 pixels high. Drag “CubeWorld.jpg” into the Image component’s Source Image field from the Project pane. This will make it the button image: Edit the button Text to say “Cube World” and set the Font Size to 30. Change the Font Colour to be white. Now, in the inspector view, reposition the text to the bottom left corner of the button using the transform controls: Update the Button’s Color values as in the image below. These values will tint the button image in certain states. Normal is the default, Highlighted is for when the mouse is over the button, Pressed is for when the button is pressed, Disabled is for when the button is not interactable (the Interactable checkbox is unticked): Now duplicate the first button, but this time, use the “SphereWorld.jpg” image as the Source Image, and set the text to “Sphere World”. 
Using the transform controls, position the two buttons next to each other under the “Select Level” text on the canvas. It should look like this:

If we run the app now, we’ll see this screen and be able to click on each level button, but nothing will happen. To actually load a level, we first need to create a new script. Right-click in the Project view and select Create | C# Script. Name this script “LevelSelect”. Create a new GameObject in the scene, rename it “LevelSelectManager”, and drag the LevelSelect script onto that GameObject in the Hierarchy. Now, open up the script in an IDE and edit the code (a minimal sketch of the finished script is included at the end of this article). What this script does is define a class, LevelSelect, which exposes a single function, LoadLevel(). LoadLevel() takes a string (the level name) and tells Unity to load that level (a Unity scene) by calling SceneManager.LoadScene().

However, we still need to actually call this function when the buttons are pressed. Back in Unity, go back to the CubeWorld button in the Hierarchy. Under the Button Script in the Inspector, there is an entry for “On Click ()” with a plus sign under it. Click the plus sign to add the event that will be called when the button is clicked. Once the event is added, we need to fill out the details that tell it which function to call on which scene GameObject. Find where it says “None (Object)” under “On Click ()”. Drag the “LevelSelectManager” GameObject from the Hierarchy view into that field. Then click the “No Function” dropdown, which will display a list of component classes matching the components on our “LevelSelectManager” GameObject. Choose “LevelSelect” (because that’s the script class our function is defined in) and then “LoadLevel (string)” to choose the function we wrote in C# previously. Now we just have to pass the level name string we want to load to that function. To do that, write “CubeWorld” (the name of the scene/level we want to load) in the empty text field. Once you’re done, the “On Click ()” event should look like this:

Now, repeat the process for the SphereWorld button as above, but instead of entering “CubeWorld” as the string to pass to the LoadLevel function, enter “SphereWorld”.

Almost done! Finally, save the “LevelSelectMenu” scene, and then click File | Build Settings. Make sure that all three scenes are loaded into the “Scenes In Build” list. If they aren’t, drag the scenes into the list from the Project pane. Make sure that the “LevelSelectMenu” scene is first so that when the app is run, it is the scene that will be loaded up first.

It’s time to build and run your program! You should be greeted by the Level Select Menu, and, depending on which level you select, it’ll load the appropriate game level, either CubeWorld or SphereWorld. Now you can customize it further, adding more levels, making the level select screen look nicer with better graphical assets and effects, and, of course, adding actual gameplay to your levels. Have fun!

About the author

Gareth Fouche is a game developer. He can be found on GitHub at @GarethNN.
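For reference, here is a minimal sketch of the LevelSelect script described in this article. It is an illustration rather than the exact code from the original screenshot; it assumes Unity 5.3 or later, where scene loading is handled by the SceneManager class in the UnityEngine.SceneManagement namespace.

using UnityEngine;
using UnityEngine.SceneManagement;

// Attach this component to the LevelSelectManager GameObject. Each level
// button's On Click () event calls LoadLevel with the name of the scene
// it should open.
public class LevelSelect : MonoBehaviour
{
    // levelName must match a scene added to the Scenes In Build list,
    // for example "CubeWorld" or "SphereWorld".
    public void LoadLevel(string levelName)
    {
        SceneManager.LoadScene(levelName);
    }
}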

Gathering and analyzing stock market data with R, Part 2

Erik Kappelman
15 Dec 2016
8 min read
Welcome to the second installment of this series. The previous post covered collecting real-time stock market data using R. This second part looks at a few ways to analyze historical stock market data using R. If you are just interested in learning how to analyze historical data, the first blog isn’t necessary. The code accompanying these blogs is located here.

To begin, we must first get some data. The lines of code below load the ‘quantmod’ library, a very useful R library when it comes to financial analysis, and then use quantmod to gather data on the list of stock symbols:

library(quantmod)
syms <- read.table("NYSE.txt", header = 2, sep = "\t")
smb <- grep("[A-Z]{4}", syms$Symbol, perl = F, value = T)
getSymbols(smb)

I find the getSymbols() function somewhat problematic for gathering data on multiple companies. This is because the function creates a separate dataframe for each company in the package’s ‘xts’ format. I think this would be more helpful if you were planning to use the quantmod tools for analysis. I enjoy using other types of tools, so the data needs to be changed somewhat before I can analyze it:

mat <- c()
stocks <- c()
stockList <- list()
names <- c()
for (i in 1:length(smb)) {
  temp <- get(smb[i])
  names <- c(names, smb[i])
  stockList[[i]] <- as.numeric(getPrice(temp))
  len <- length(attributes(temp)$index)
  if (len < 1001) next
  stocks <- c(stocks, smb[i])
  temp2 <- temp[(len - 1000):len]
  vex <- as.numeric(getPrice(temp2))
  mat <- rbind(mat, vex)
}

The code above loops through the dataframes that were created by the getSymbols() function. Using the get() function from the ‘base’ package, each symbol string is used to grab the symbol’s dataframe. The loop then does one or two more things to each stock’s dataframe. For all of the stocks, it records the stock’s symbol in a vector and adds a vector of prices to the growing list of stock data. If the stock data goes back at least one thousand trading days, then the last one thousand days of trading are added to a matrix. The reason for this distinction is that we will be looking at two methods of analysis. One requires all of the series to be the same length, and the other is length-agnostic. Series that are too short will not be analyzed using the first method. Check out the following script:

names(stockList) <- names
stock.mat <- as.matrix(mat)
row.names(stock.mat) <- stocks
colnames(stock.mat) <- as.character(index(temp2))
save(stock.mat, stockList, file = "StockData.rda")
rm(list = ls())

The above script names the data properly and saves the data to an R data file. The final line of code cleans the workspace because the getSymbols() function leaves quite a mess. The data is now in the correct format for us to begin our analysis.

It is worth pointing out that what I am about to show won’t get you an A in most statistics or economics classes. I say this because I am going to take a very practical approach with little regard for the proper assumptions. Although these assumptions are important, when the need is an accurate forecast, it is easier to get away with models that are not entirely theoretically sound. This is because we are not trying to make arguments of causality or association; we are trying to guess the direction of the market:

library(mclust)
library(vars)
load("StockData.rda")

In this first example of analysis, I put forth a clustering-based Vector Autoregression (VAR) method of my own design.
In order to do this, we must load the correct packages and load the data we just created:

cl <- Mclust(stock.mat, G = 1:9)
stock.mat <- cbind(stock.mat, cl$classification)

The first thing to do is identify the clusters that exist within the stock market data. In this case, we use a model-based clustering method. This method assumes that the data is the result of picks from a set of random variable distributions. This allows the clusters to be based on the covariance of companies’ stock prices instead of just grouping together companies with similar nominal prices. The Mclust() function fits a model to the data that minimizes the Bayesian Information Criterion (BIC). You will likely have to restrict the number of clusters, as one complaint about model-based clustering is a ‘more clusters are always better’ result.

The data is separated into clusters to make using a VAR technique more computationally realistic. One of the nice things about VAR is how few assumptions must be met in order to include the time series in an analysis. Also, VAR regresses several time series against one another and themselves at the same time, which may capture more of the covariance needed to produce reliable forecasts. We are looking at over 1000 time series, and this is too many to use VAR effectively, so the clustering is used to group the time series together to produce smaller VARs:

cluster <- stock.mat[stock.mat[, 1002] == 6, 1:1001]
ts <- ts(t(cluster))
fit <- VAR(ts[1:(1001 - 10), ], p = 10)
preds <- predict(fit, n.ahead = 10)
forecast <- preds$fcst$TEVA
plot.ts(ts[950:1001, 8], ylim = c(36, 54))
lines(y = forecast[, 1], x = (50 - 9):50, col = "blue")
lines(y = forecast[, 2], x = (50 - 9):50, col = "red", lty = 2)
lines(y = forecast[, 3], x = (50 - 9):50, col = "red", lty = 2)

The code above takes the time series that belong to the ‘6’ cluster and runs a VAR that looks back ten steps. We cut off the last ten days of data and use the VAR to predict these last ten days. The script then plots the predicted ten days against the actual ten days. This allows us to see if the predictions are functioning properly. The resulting plot shows that the predictions are not perfect but will probably work well enough:

for (i in 1:8) {
  assign(paste0("cluster.", i), stock.mat[stock.mat[, 1002] == i, 1:1001])
  assign(paste0("ts.", i), ts(t(get(paste0("cluster.", i)))))
  temp <- get(paste0("ts.", i))
  assign(paste0("fit.", i), VAR(temp, p = 10))
  assign(paste0("preds.", i), predict(get(paste0("fit.", i)), n.ahead = 10))
}
stock.mat <- cbind(stock.mat, 0)
for (j in 1:8) {
  pred.vec <- c()
  temp <- get(paste0("preds.", j))
  for (i in temp$fcst) {
    cast <- temp$fcst[1]
    cast <- cast[[1]]
    cast <- cast[10, ]
    pred.vec <- c(pred.vec, cast[1])
  }
  stock.mat[stock.mat[, 1002] == j, 1003] <- pred.vec
}

The loops above perform a VAR on each of the 8 clusters with more than one member. After these VARs are performed, a ten-day forecast is carried out. The value of each stock at the end of the ten-day forecast is then appended onto the end of the stock data matrix:

stock.mat <- stock.mat[stock.mat[, 1002] != 9, ]
stock.mat[, 1004] <- (stock.mat[, 1003] - stock.mat[, 1001]) / stock.mat[, 1001] * 100
stock.mat <- stock.mat[order(-stock.mat[, 1004]), ]
stock.mat[1:10, 1004]
rm(list = ls())

The final lines of code calculate the percentage change in each stock forecasted after 10 days and then display the top 10 stocks in terms of forecasted percentage change.
The workspace is then cleared. Here is the final block of code:

load("StockData.rda")
library(forecast)
forecasts <- c()
names <- c()
for (i in 1:length(stockList)) {
  mod <- auto.arima(stockList[[i]])
  cast <- forecast(mod)
  cast <- cast$mean[10]
  temp <- c(as.numeric(stockList[[i]][length(stockList[[i]])]), as.numeric(cast))
  forecasts <- rbind(forecasts, temp)
  names <- c(names, names(stockList[i]))
}
forecasts <- matrix(ncol = 2, forecasts)
forecasts <- cbind(forecasts, (forecasts[, 2] - forecasts[, 1]) / forecasts[, 1] * 100)
colnames(forecasts) <- c("Price", "Forecast", "% Change")
row.names(forecasts) <- names
forecasts <- forecasts[order(-forecasts[, 3]), ]
rm(list = ls())

The final bit of code is simpler. Using the ‘forecast’ package’s auto.arima() function, we fit an ARIMA model to each stock in our stockList. The auto.arima() function is a must-have for forecasters using R. This function fits an ARIMA model to your data with the best value of some measure of statistical accuracy. The default is the corrected Akaike Information Criterion (AICc), which will work fine for our purposes. Once the forecasts are complete, this script also prints the top 10 stocks in terms of percentage change over a ten-day forecast.

These blogs have discussed how to gather and analyze stock market data using R. I hope they have been informative and will help you with data analysis in the future.

About the author

Erik Kappelman is a transportation modeler for the Montana Department of Transportation. He is also the CEO of Duplovici, a technology consulting and web design company.

Provision IaaS with Terraform

Packt
14 Dec 2016
9 min read
 In this article by Stephane Jourdan and Pierre Pomes, the authors of Infrastructure as Code (IAC) Cookbook, the following sections will be covered: Configuring the Terraform AWS provider Creating and using an SSH key pair to use on AWS Using AWS security groups with Terraform Creating an Ubuntu EC2 instance with Terraform (For more resources related to this topic, see here.) Introduction A modern infrastructure often usesmultiple providers (AWS, OpenStack, Google Cloud, Digital Ocean, and many others), combined with multiple external services (DNS, mail, monitoring, and others). Many providers propose their own automation tool, but the power of Terraform is that it allows you to manage it all from one place, all using code. With it, you can dynamically create machines at two IaaS providers depending on the environment, register their names at another DNS provider, and enable monitoring at a third-party monitoring company, while configuring the company GitHub account and sending the application logs to an appropriate service. On top of that, it can delegate configuration to those who do it well (configuration management tools such as Chef, Puppet, and so on),all with the same tool. The state of your infrastructure is described, stored, versioned, and shared. In this article, we'll discover how to use Terraform to bootstrap a fully capable infrastructure on Amazon Web Services (AWS), deploying SSH key pairs and securing IAM access keys. Configuring the Terraform AWS provider We can use Terraform with many IaaS providers such as Google Cloud or Digital Ocean. Here we'll configure Terraform to be used with AWS. For Terraform to interact with an IaaS, it needs to have a provider configured. Getting ready To step through this section, you will need the following: An AWS account with keys A working Terraform installation An empty directory to store your infrastructure code An Internet connection How to do it… To configure the AWS provider in Terraform, we'll need the following three files: A file declaring our variables, an optional description, and an optional default for each (variables.tf) A file setting the variables for the whole project (terraform.tfvars) A provider file (provider.tf) Let's declare our variables in the variables.tf file. We can start by declaring what's usually known as the AWS_DEFAULT_REGION, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY environment variables: variable "aws_access_key" { description = "AWS Access Key" } variable "aws_secret_key" { description = "AWS Secret Key" } variable "aws_region" { default = "eu-west-1" description = "AWS Region" } Set the two variables matching the AWS account in the terraform.tfvars file. It's not recommended to check this file into source control: it's better to use an example file instead (that is: terraform.tfvars.example). It's also recommended that you use a dedicated Terraform user for AWS, not the root account keys: aws_access_key = "< your AWS_ACCESS_KEY >" aws_secret_key = "< your AWS_SECRET_KEY >" Now, let's tie all this together into a single file—provider.tf: provider "aws" { access_key = "${var.aws_access_key}" secret_key = "${var.aws_secret_key}" region = "${var.aws_region}" } Apply the following Terraform command: $ terraform apply Apply complete! Resources: 0 added, 0 changed, 0 destroyed. It only means the code is valid, not that it can really authenticate with AWS (try with a bad pair of keys). For this, we'll need to create a resource on AWS. 
You now have a new file named terraform.tfstate that has been created at the root of your repository. This file is critical: it's the stored state of your infrastructure. Don't hesitate to look at it, it's a text file. How it works… This first encounter with HashiCorp Configuration Language (HCL), the language used by Terraform, looks pretty familiar: we've declared variables with an optional description for reference. We could have declared them simply with the following: variable "aws_access_key" { } All variables are referenced to use the following structure: ${var.variable_name} If the variable has been declared with a default, as our aws_region has been declared with a default of eu-west-1, this value will be used if there's no override in the terraform.tfvars file. What would have happened if we didn't provide a safe default for our variable? Terraform would have asked us for a value when executed: $ terraform apply var.aws_region AWS Region Enter a value: There's more… We've used values directly inside the Terraform code to configure our AWS credentials. If you're already using AWS on the command line, chances are you already have a set of standard environment variables: $ echo ${AWS_ACCESS_KEY_ID} <your AWS_ACCESS_KEY_ID> $ echo ${AWS_SECRET_ACCESS_KEY} <your AWS_SECRET_ACCESS_KEY> $ echo ${AWS_DEFAULT_REGION} eu-west-1 If not, you can simply set them as follows: $ export AWS_ACCESS_KEY_ID="123" $ export AWS_SECRET_ACCESS_KEY="456" $ export AWS_DEFAULT_REGION="eu-west-1" Then Terraform can use them directly, and the only code you have to type would be to declare your provider! That's handy when working with different tools. The provider.tffile will then look as simple as this: provider "aws" { } Creating and using an SSH key pair to use on AWS Now we have our AWS provider configured in Terraform, let's add a SSH key pair to use on a default account of the virtual machines we intend to launch soon. Getting ready To step through this section, you will need the following: A working Terraform installation An AWS provider configured in Terraform Generate a pair of SSH keys somewhere you remember. An example can be under the keys folder at the root of your repo: $ mkdir keys $ ssh-keygen -q -f keys/aws_terraform -C aws_terraform_ssh_key -N '' An Internet connection How to do it… The resource we want for this is named aws_key_pair. Let's use it inside a keys.tf file, and paste the public key content: resource "aws_key_pair""admin_key" { key_name = "admin_key" public_key = "ssh-rsa AAAAB3[…]" } This will simply upload your public key to your AWS account under the name admin_key: $ terraform apply aws_key_pair.admin_key: Creating... fingerprint: "" =>"<computed>" key_name: "" =>"admin_key" public_key: "" =>"ssh-rsa AAAAB3[…]" aws_key_pair.admin_key: Creation complete Apply complete! Resources: 1 added, 0 changed, 0 destroyed. If you manually navigate to your AWS account, under EC2 |Network & Security | Key Pairs, you'll now find your key: Another way to use our key with Terraform and AWS would be to read it directly from the file, and that would show us how to use file interpolation with Terraform. 
To do this, let's declare a new empty variable to store our public key in variables.tf:

variable "aws_ssh_admin_key_file" { }

Initialize the variable to the path of the key in terraform.tfvars:

aws_ssh_admin_key_file = "keys/aws_terraform"

Now let's use it in place of our previous keys.tf code, using the file() interpolation:

resource "aws_key_pair" "admin_key" {
  key_name   = "admin_key"
  public_key = "${file("${var.aws_ssh_admin_key_file}.pub")}"
}

This is a much clearer and more concise way of accessing the content of the public key from the Terraform resource. It's also easier to maintain, as changing the key will only require replacing the file and nothing more.

How it works…

Our first resource, aws_key_pair, takes two arguments (a key name and the public key content). That's how all resources in Terraform work. We used our first file interpolation, using a variable, to show how to write more dynamic code for our infrastructure.

There's more…

Using Ansible, we can create a role to do the same job. Here's how we can manage our EC2 key pair using a variable, under the name admin_key. For simplification, we're using the three usual environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION).

Here's a typical Ansible file hierarchy:

├── keys
│   ├── aws_terraform
│   └── aws_terraform.pub
├── main.yml
└── roles
    └── ec2_keys
        └── tasks
            └── main.yml

In the main file (main.yml), let's declare that our host (localhost) will apply the role dedicated to managing our keys:

---
- hosts: localhost
  roles:
    - ec2_keys

In the ec2_keys main task file, create the EC2 key (roles/ec2_keys/tasks/main.yml):

---
- name: ec2 admin key
  ec2_key:
    name: admin_key
    key_material: "{{ item }}"
  with_file: './keys/aws_terraform.pub'

Execute the code with the following command:

$ ansible-playbook -i localhost main.yml
TASK [ec2_keys : ec2 admin key] ************************************************
ok: [localhost] => (item=ssh-rsa AAAA[…] aws_terraform_ssh)
PLAY RECAP *********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0

Using AWS security groups with Terraform

Amazon's security groups are similar to traditional firewalls, with ingress and egress rules applied to EC2 instances. These rules can be updated on demand. We'll create an initial security group allowing ingress Secure Shell (SSH) traffic only for our own IP address, while allowing all outgoing traffic.

Getting ready

To step through this section, you will need the following: A working Terraform installation An AWS provider configured in Terraform An Internet connection

How to do it…

The resource we're using is called aws_security_group. Here's the basic structure:

resource "aws_security_group" "base_security_group" {
  name        = "base_security_group"
  description = "Base Security Group"

  ingress {
  }

  egress {
  }
}

We know we want to allow inbound TCP/22 for SSH only for our own IP (replace 1.2.3.4/32 with yours!), and allow everything outbound. Here's how it looks:

ingress {
  from_port   = 22
  to_port     = 22
  protocol    = "tcp"
  cidr_blocks = ["1.2.3.4/32"]
}

egress {
  from_port   = 0
  to_port     = 0
  protocol    = "-1"
  cidr_blocks = ["0.0.0.0/0"]
}

You can add a Name tag for easier reference later:

tags {
  Name = "base_security_group"
}

Apply this and you're good to go:

$ terraform apply
aws_security_group.base_security_group: Creating...
[…]
aws_security_group.base_security_group: Creation complete

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
You can see your newly created security group by logging into the AWS Console and navigating to EC2 Dashboard | Network & Security | Security Groups: Another way of accessing the same AWS console information is through the AWS command line:
$ aws ec2 describe-security-groups --group-names base_security_group
{...}
There's more…
We can achieve the same result using Ansible. Here's the equivalent of what we just did with Terraform in this section:
---
- name: base security group
  ec2_group:
    name: base_security_group
    description: Base Security Group
    rules:
      - proto: tcp
        from_port: 22
        to_port: 22
        cidr_ip: 1.2.3.4/32
Summary
In this article, you learned how to configure the Terraform AWS provider, create and use an SSH key pair on AWS, and use AWS security groups with Terraform.
Resources for Article: Further resources on this subject: Deploying Highly Available OpenStack [article] Introduction to Microsoft Azure Cloud Services [article] Concepts for OpenStack [article]

UITableView Touch Up

Packt
14 Dec 2016
24 min read
 In this article by Donny Wals, from the book Mastering iOS 10 Programming, we will go through UITableView touch up. Chances are that you have built a simple app before. If you have, there's a good probability that you have used UITableView. The UITableView is a core component in many applications. Virtually all applications that display a list of some sort make use of UITableView. Because UITableView is such an important component in the world of iOS I want you to dive in to it straight away. You may or may not have looked at UITableView before but that's okay. You'll be up to speed in no time and you'll learn how this component achieves that smooth 60 frames per seconds (fps) scrolling that users know and love. If your app can maintain a steady 60,fps your app will feel more responsive and scrolling will feel perfectly smooth to users which is exactly what you want. We'll also cover new UITableView features that make it even easier to optimize your table views. In addition to covering the basics of UITableView, you'll learn how to make use of Contacts.framework to build an application that shows a list of your users' contacts. This is similar to what the native Contacts app does on iOS. The contacts in the UITableView component will be rendered in a custom cell. You will learn how to create such a cell, using Auto Layout. Auto Layout is a technique that will be covered throughout this book because it's an important part of every iOS developer's tool belt. If you haven't used Auto Layout before, that's okay. This article will cover a few basics, and the layout is relatively simple, so you can gradually get used to it as we go. To sum it all up, this article covers: Configuring and displaying UITableView Fetching a user's contacts through Contacts.framework UITableView delegate and data source (For more resources related to this topic, see here.) Setting up the User Interface (UI) Every time you start a new project in Xcode, you have to pick a template for your application. These templates will provide you with a bit of boiler plate code or sometimes they will configure a very basic layout for you. Throughout this book, the starting point will always be the Single View Application template. This template will provide you with a bare minimum of boilerplate code. This will enable you to start from scratch every time, and it will boost your knowledge of how the Xcode-provided templates work internally. In this article, you'll create an app called HelloContacts. This is the app that will render your user's contacts in a UITableView. Create a new project by selecting File | New | Project. Select the Single View template, give your project a name (HelloContacts), and make sure you select Swift as the language for your project. You can uncheck all CoreData- and testing-related checkboxes; they aren't of interest right now. Your configuration should resemble the following screenshot: Once you have your app configured, open the Main.storyboard file. This is where you will create your UITableView and give it a layout. Storyboards are a great way for you to work on an app and have all of the screens in your app visualized at once. If you have worked with UITableView before, you may have used UITableViewController. This is a subclass of UIViewController that has a UITableView set as its view. It also abstracts away some of the more complex concepts of UITableView that we are interested in. 
So, for now, you will be working with the UIViewController subclass that holds UITableView and is configured manually. On the right hand side of the window, you'll find the Object Library. The Object Library is opened by clicking on the circular icon that contains a square in the bottom half of the sidebar.  In the Object Library, look for a UITableView. If you start typing the name of what you're looking for in the search box, it should automatically be filtered out for you. After you locate it, drag it into your app's view. Then, with UITableView selected, drag the white squares to all corners of the screen so that it covers your entire viewport. If you go to the dynamic viewport inspector at the bottom of the screen by clicking on the current device name as shown in the following screenshot and select a larger device such as an iPad or a smaller device such as the iPhone SE, you will notice that UITableView doesn't cover the viewport as nicely. On smaller screens, the UITableView will be larger than the viewport. On larger screens, UITableView simply doesn't stretch: This is why we will use Auto Layout. Auto Layout enables you to create layouts that will adapt to different viewports to make sure it looks good on all of the devices that are currently out there. For UITableView, you can pin the edges of the table to the edges of the superview, which is the view controller's main view. This will make sure the table stretches or shrinks to fill the screen at all times. Auto Layout uses constraints to describe layouts. UITableView should get some constraints that describe its relation to the edges of your view controller. The easiest way to add these constraints is to let Xcode handle it for you. To do this, switch the dynamic viewport inspector back to the view you initially selected. First, ensure UITableView properly covers the entire viewport, and then click on the Resolve Auto Layout Issues button at the bottom-right corner of the screen and select Reset to Suggested Constraints: This button automatically adds the required constraints for you. The added constraints will ensure that each edge of UITableView sticks to the corresponding edge of its superview. You can manually inspect these constraints in the Document Outline on the left-hand side of the Interface Builder window. Make sure that everything works by changing the preview device in the dynamic viewport inspector again. You should verify that no matter which device you choose now, the table will stretch or shrink to cover the entire view at all times. Now that you have set up your project with UITableView added to the interface and the constraints have been added, it's time to start writing some code. The next step is to use Contacts.framework to fetch some contacts from your user's address book. Fetching a user's contacts In the introduction for this article, it was mentioned that we would use Contacts.framework to fetch a user's contacts list and display it in UITableView. Before we get started, we need to be sure we have access to the user's address book. In iOS 10, privacy is a bit more restrictive than it was earlier. If you want to have access to a user's contacts, you need to specify this in your application's Info.plist file. If you fail to specify the correct keys for the data your application uses, it will crash without warning. So, before attempting to load your user's contacts, you should take care of adding the correct key to Info.plist. 
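The next paragraph walks through adding this key in Xcode's property list editor; if you open Info.plist as source code instead, the finished entry would look roughly like this (the raw key name behind the Privacy – Contacts option is NSContactsUsageDescription, and the description string is just an example):
<key>NSContactsUsageDescription</key>
<string>read contacts to display them in a list</string>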
To add the key, open Info.plist from the list of files in the Project Navigator on the left and hover over Information Property List at the top of the file. A plus icon should appear, which will add an empty key with a search box when you click on it. If you start typing Privacy – contacts, Xcode will filter options until there is just one left, that is, the key for contact access. In the value column, fill in a short description about what you are going to use this access for. In our app, something like read contacts to display them in a list should be sufficient. Whenever you need access to photos, Bluetooth, camera, microphone, and more, make sure you check whether your app needs to specify this in its Info.plist. If you fail to specify a key that's required, your app will crash and will not make it past Apple's review process. Now that you have configured your app to specify that it wants to be able to access contact data, let's get down to writing some code. Before reading the contacts, you'll need to make sure the user has given permission to access contacts. You'll have to check this first, after which the code should either fetch contacts or it should ask the user for permission to access the contacts. Add the following code to ViewController.swift. After doing so, we'll go over what this code does and how it works: class ViewController: UIViewController { override func viewDidLoad() { super.viewDidLoad() let store = CNContactStore() if CNContactStore.authorizationStatus(for: .contacts) == .notDetermined { store.requestAccess(for: .contacts, completionHandler: {[weak self] authorized, error in if authorized { self?.retrieveContacts(fromStore: store) } }) } else if CNContactStore.authorizationStatus(for: .contacts) == .authorized { retrieveContacts(fromStore: store) } } func retrieveContacts(fromStore store: CNContactStore) { let keysToFetch = [CNContactGivenNameKey as CNKeyDescriptor, CNContactFamilyNameKey as CNKeyDescriptor, CNContactImageDataKey as CNKeyDescriptor, CNContactImageDataAvailableKey as CNKeyDescriptor] let containerId = store.defaultContainerIdentifier() let predicate = CNContact.predicateForContactsInContainer(withIdentifier: containerId) let contacts = try! store.unifiedContacts(matching: predicate, keysToFetch: keysToFetch) print(contacts) } } In the viewDidLoad method, we will get an instance of CNContactStore. This is the object that will access the user's contacts database to fetch the results you're looking for. Before you can access the contacts, you need to make sure that the user has given your app permission to do so. First, check whether the current authorizationStatus is equal to .notDetermined. This means that we haven't asked permission yet and it's a great time to do so. When asking for permission, we pass a completionHandler. This handler is called a closure. It's basically a function without a name that gets called when the user has responded to the permission request. If your app is properly authorized after asking permission, the retrieveContacts method is called to actually perform the retrieval. If the app already had permission, we'll call retrieveContacts right away. Completion handlers are found throughout the Foundation and UIKit frameworks. You typically pass them to methods that perform a task that could take a while and is performed parallel to the rest of your application so the user interface can continue running without waiting for the result. 
A simplified implementation of such a function could look like this:
func doSomething(completionHandler: (Int) -> Void) {
    // perform some actions
    let resultingValue = theResultOfSomeAction()
    completionHandler(resultingValue)
 } You'll notice that actually calling completionHandler looks identical to calling an ordinary function or method. The idea of such a completion handler is that we can specify a block of code, a closure, that is supposed to be executed at a later time. For example, after performing a task that is potentially slow. You'll find plenty of other examples of callback handlers and closures throughout this book as it's a common pattern in programming. The retrieveContacts method in ViewController.swift is responsible for actually fetching the contacts and is called with a parameter named store. It's set up like this so we don't have to create a new store instance since we already created one in viewDidLoad. When fetching a list of contacts, you use a predicate. We won't go into too much detail on predicates and how they work yet. The main goal of the predicate is to establish a condition to filter the contacts database on. In addition to a predicate, you also provide a list of keys your code wants to fetch. These keys represent properties that a contact object can have. They represent data, such as e-mail, phone numbers, names, and more. In this example, you only need to fetch a contact's given name, family name, and contact image. To make sure the contact image is available at all, there's a key request for that as well. When everything is configured, a call is made to unifiedContacts(matching:, keysToFetch:). This method call can throw an error, and since we're currently not interested in the error, try! is used to tell the compiler that we want to pretend the call can't fail and if it does, the application should crash. When you're building your own app, you might want to wrap this call in do {} catch {} block to make sure that your app doesn't crash if errors occur. If you run your app now, you'll see that you're immediately asked for permission to access contacts. If you allow this, you will see a list of contacts printed in the console. Next, let's display some content information in the contacts table! Creating a custom UITableViewCell for our contacts To display contacts in your UITableView, you will need to set up a few more things. First and foremost, you'll need to create a UITableViewCell that displays contact information. To do this, you'll create a custom UITableViewCell by creating a subclass. The design for this cell will be created in Interface Builder, so you're going to add @IBOutlets in your UITableViewCell subclass. These @IBOutlets are the connection points between Interface Builder and your code. Designing the contact cell The first thing you need to do is drag a UITableViewCell out from the Object Library and drop it on top of UITableView. This will add the cell as a prototype. Next, drag out UILabel and a UIImageView from the Object Library to the newly added UITableViewCell, and arrange them as they are arranged in the following figure. After you've done this, select both UILabel and UIImage and use the Reset to Suggested Constraints option you used earlier to lay out UITableView. If you have both the views selected, you should also be able to see the blue lines that are visible in following screenshot: These blue lines represent the constraints that were added to lay out your label and image. You can see a constraint that offsets the label from the left side of the cell. However, there is also a constraint that spaces the label and the image. 
The horizontal line through the middle of the cell is a constraint that vertically centers the label and image inside of the cell. You can inspect these constraints in detail in the Document Outline on the right. Now that our cell is designed, it's time to create a custom subclass for it and hook up @IBOutlets. Creating the cell subclass To get started, create a new file (File | New | File…) and select a Cocoa Touch file. Name the file ContactTableViewCell and make sure it subclasses UITableViewCell, as shown in the following screenshot: When you open the newly created file, you'll see two methods already added to the template for ContactTableViewCell.swift: awakeFromNib and setSelected(_:animated:). The awakeFromNib method is called the very first time this cell is created; you can use this method to do some initial setup that's required to be executed only once for your cell. The other method is used to customize your cell when a user taps on it. You could, for instance, change the background color or text color or even perform an animation. For now, you can delete both of these methods and replace the contents of this class with the following code: @IBOutlet var nameLabel: UILabel! @IBOutlet var contactImage: UIImageView! The preceding code should be the entire body of the ContactTableViewCell class. It creates two @IBOutlets that will allow you to connect your prototype cell with so that you can use them in your code to configure the contact's name and image later. In the Main.storyboard file, you should select your cell, and in the Identity Inspector on the right, set its class to ContactTableViewCell (as shown in the following screenshot). This will make sure that Interface Builder knows which class it should use for this cell, and it will make the @IBOutlets available to Interface Builder. Now that our cell has the correct class, select the label that will hold the contact's name in your prototype cell and click on Connections Inspector. Then, drag a new referencing outlet from the Connections Inspector to your cell and select nameLabel to connect the UILabel in the prototype cell to @IBOutlet in the code (refer to the following screenshot). Perform the same steps for UIImageView and select the contactImage option instead of nameLabel. The last thing we need to do is provide a reuse identifier for our cell. Click on Attributes Inspector after selecting your cell. In Attributes Inspector, you will find an input field for the Identifier. Set this input field to ContactTableViewCell. The reuse identifier is the identifier that is used to inform the UITableView about the cell it should retrieve when it needs to be created. Since the custom UITableViewCell is all set up now, we need to make sure UITableView is aware of the fact that our ViewController class will be providing it with the required data and cells. Displaying the list of contacts When you're implementing UITableView, it's good to be aware of the fact that you're actually working with a fairly complex component. This is why we didn't pick a UITableViewController at the beginning of this article. UITableViewController does a pretty good job of hiding the complexities of UITableView from thedevelopers. The point of this article isn't just to display a list of contacts; it's purpose is also to introduce some advanced concepts about a construct that you might have seen before, but never have been aware of. Protocols and delegation  Throughout the iOS SDK and the Foundation framework the delegate design pattern is used. 
Delegation provides a way for objects to have some other object handle tasks on their behalf. This allows great decoupling of certain tasks and provides a powerful way to allow communication between objects. The following image visualizes the delegation pattern for a UITableView component and its UITableViewDataSource: The UITableView uses two objects that help in the process of rendering a list. One is called the delegate, the other is called the data source. When you use a UITableView, you need to explicitly configure the data source and delegate properties. At runtime, the UITableView will call methods on its delegate and data source in order to obtain information about cells, handle interactions, and more. If you look at the documentation for the UITableView delegate property, it will tell you that its type is UITableViewDelegate?. This means that the delegate's type is UITableViewDelegate. The question mark indicates that this value could be nil; we call this an Optional. The reason for the delegate to be Optional is that it might never be set at all. Diving deeper into what this UITableViewDelegate is exactly, you'll learn that it's actually a protocol and not a class or struct. A protocol provides a set of properties and/or methods that any object that conforms to (or adopts) this protocol must implement. Sometimes a protocol will provide optional methods, as UITableViewDelegate does. If this is the case, we can choose which delegate methods we want to implement and which methods we want to omit. Other protocols have mandatory methods. The UITableViewDataSource has a couple of mandatory methods to ensure that a data source is able to provide UITableView with the minimum amount of information needed to render the cells you want to display. If you've never heard of delegation and protocols before, you might feel like this is all a bit foreign and complex. That's okay; throughout this book you'll gain a deeper understanding of protocols and how they work. In particular, the next section, where we'll cover Swift and protocol-oriented programming, should provide you with a very thorough overview of what protocols are and how they work. For now, it's important to be aware that a UITableView always asks another object for data through the UITableViewDataSource protocol, and that their interactions are handled through the UITableViewDelegate. If you were to look at what UITableView does when it's rendering contents, the process could be dissected like this:
1. UITableView needs to reload the data.
2. UITableView checks whether it has a delegate and a data source, and asks the dataSource for the number of sections in this table.
3. Once the dataSource responds with the number of sections, the table view will figure out how many cells are required for each section. This is done by asking the dataSource for the number of cells in each section.
4. Now that the table view knows the amount of content it needs to render, it will ask its data source for the cells that it should display.
5. Once the data source provides the required cells based on the number of contacts, the UITableView will request and display the cells one by one.
This process is a good example of how UITableView uses other objects to provide data on its behalf. Now that you know how delegation works for UITableView, it's about time you start implementing this in your own app.
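Before you do, it can help to see the delegation pattern in isolation one more time. Here is a tiny, self-contained sketch using made-up types; TaskDelegate, Worker, and Listener are illustrative names only and are not part of UIKit:
protocol TaskDelegate: class {
    func taskDidFinish(result: Int)
}

class Worker {
    // The worker doesn't know who its delegate is, only that it conforms to TaskDelegate
    weak var delegate: TaskDelegate?

    func run() {
        // perform some work, then hand the result to the delegate, if there is one
        delegate?.taskDidFinish(result: 42)
    }
}

class Listener: TaskDelegate {
    func taskDidFinish(result: Int) {
        print("The worker finished with \(result)")
    }
}

let listener = Listener()
let worker = Worker()
worker.delegate = listener
worker.run()
UITableView plays the role of Worker here: it calls methods on its delegate and dataSource whenever it needs information or wants to report an interaction.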
Conforming to the UITableViewDataSource and UITableViewDelegate protocol In order to specify the UITableView's delegate and data source, the first thing you need to do is to create an @IBOutlet for your UITableView and connect it to ViewController.swift. Add the following line to your ViewController, above the viewDidLoad method. @IBOutlet var tableView: UITableView! Now, using the same technique as before when designing UITableViewCell, select the UITableView in your Main.storyboard file and use the Connections Inspector to drag a new outlet reference to the UITableView. Make sure you select the tableView property and that's it. You've now hooked up your UITableView to the ViewController code. To make the ViewController code both the data source and the delegate for UITableView, it will have to conform to the UITableViewDataSource and UITableViewDelegate protocols. To do this, you have to add the protocols you want to conform to your class definition. The protocols are added, separated by commas, after the superclass. When you add the protocols to the ViewController, it should look like this: class ViewController: UIViewController, UITableViewDataSource, UITableViewDelegate { // class body
}
Once you have done this, you will have an error in your code. That's because even though your class definition claims to implement these protocols, you haven't actually implemented the required functionality yet. If you look at the errors Xcode is giving you, it becomes clear that there are two methods you must implement. These methods are tableView(_:numberOfRowsInSection:) and tableView(_:cellForRowAt:). So let's fix the errors by adjusting our code a little bit in order to conform to the protocols. This is also a great time to refactor the contacts fetching a little bit. You'll want to access the contacts in multiple places, so the list should become an instance variable. Also, if you're going to create cells anyway, you might as well configure them to display the correct information right away. To do so, use the following code:
var contacts = [CNContact]()
// … viewDidLoad
// … retrieveContacts
func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return contacts.count
}

func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    let cell = tableView.dequeueReusableCell(withIdentifier: "ContactTableViewCell") as! ContactTableViewCell
    let contact = contacts[indexPath.row]
    cell.nameLabel.text = "\(contact.givenName) \(contact.familyName)"

    if let imageData = contact.imageData, contact.imageDataAvailable {
        cell.contactImage.image = UIImage(data: imageData)
    }

    return cell
}
The preceding code is what's needed to conform to the UITableViewDataSource protocol. Right below the @IBOutlet of your UITableView, a variable is declared that will hold the list of contacts. The following code snippet was also added to the ViewController:
func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return contacts.count
}
This method is called by the UITableView to determine how many cells it will have to render. It just returns the total number of contacts that are in the contacts list. You'll notice that there's a section parameter passed to this method. That's because a UITableView can contain multiple sections. The contacts list only has a single section; if you have data that contains multiple sections, you should also implement the numberOfSections(in:) method. The second method we added was:
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    let cell = tableView.dequeueReusableCell(withIdentifier: "ContactTableViewCell") as! ContactTableViewCell
    let contact = contacts[indexPath.row]
    cell.nameLabel.text = "\(contact.givenName) \(contact.familyName)"

    if let imageData = contact.imageData, contact.imageDataAvailable {
        cell.contactImage.image = UIImage(data: imageData)
    }

    return cell
}
This method is used to get an appropriate cell for our UITableView to display. This is done by calling dequeueReusableCell(withIdentifier:) on the UITableView instance that's passed to this method, because UITableView can reuse cells that are currently off screen. This is a performance optimization that allows UITableView to display vast amounts of data without becoming slow or consuming big chunks of memory. Note that the reuse identifier passed to dequeueReusableCell(withIdentifier:) must match the Identifier you set on the prototype cell earlier, ContactTableViewCell. The return type of dequeueReusableCell(withIdentifier:) is UITableViewCell, and our custom outlets are not available on this class. This is why we force cast the result from that method to ContactTableViewCell. Force casting to your own subclass will make sure that the rest of your code has access to your nameLabel and contactImage.
Casting objects will convert an object from one class or struct to another. This usually only works correctly when you're casting from a superclass to a subclass like we're doing in our example. Casting can fail, so force casting is dangerous and should only be done if you want your app to crash or consider it a programming error in case the cast fails. We also grab a contact from the contacts array that corresponds to the current row of indexPath. This contact is then used to assign all the correct values to the cell and then the cell is returned. This is all the setup needed to make your UITableView display the cells. Yet, if we build and run our app, it doesn't work! A few more changes will have to be made for it to do so. Currently, the retrieveContacts method does fetch the contacts for your user, but it doesn't update the contacts variable in ViewController. Also, the UITableView won't know that it needs to reload its data unless it's told to. Currently, the last few lines of retrieveContacts will look like this: let contacts = try! store.unifiedContacts(matching: predicate, keysToFetch: keysToFetch) print(contacts) Let's update these lines to the following code: contacts = try! store.unifiedContacts(matching: predicate, keysToFetch: keysToFetch) tableView.reloadData() Now, the result of fetching contacts is assigned to the instance variable that's declared at the top of your ViewController. After doing that, we tell the tableView to reload its data, so it will go through the delegate methods that provide the cell count and cells again. Lastly, the UITableView doesn't know that the ViewControler instance will act as both the dataSource and the delegate. So, you should update the viewDidLoad method to assign the UITableView's delegate and dataSource properties. Add the following lines to the end of the viewDidLoad method: tableView.dataSource = self tableView.delegate = self If you build and run it now, your app works! If you're running it in the simulator or you haven't assigned images to your contacts, you won't see any images. If you'd like to assign some images to the contacts in the simulator, you can drag your images into the simulator to add them to the simulator's photo library. From there, you can add pictures to contacts just as you would on a real device. However, if you have assigned images to some of your contacts you will see their images appear. You can now scroll through all of your contacts, but there seems to be an issue. When you're scrolling down your contacts list, you might suddenly see somebody else's photo next to the name of a contact that has no picture! This is actually a performance optimization. Summary Your contacts app is complete for now. We've already covered a lot of ground on the way to iOS mastery. We started by creating a UIViewController that contains a UITableView. We used Auto Layout to pin the edges of the UITableView to the edges of main view of ViewController. We also explored the Contacts.framework and understood how to set up our app so it can access the user's contact data. Resources for Article:  Further resources on this subject: Offloading work from the UI Thread on Android [article] Why we need Design Patterns? [article] Planning and Structuring Your Test-Driven iOS App [article]

SQL Tuning Enhancements in Oracle 12c

Packt
13 Dec 2016
13 min read
Background Performance Tuning is one of the most critical area of Oracle databases and having a good knowledge on SQL tuning helps DBAs in tuning production databases on a daily basis. Over the years Oracle optimizer has gone through several enhancements and each release presents a best among all optimizer versions. Oracle 12c is no different. Oracle has improved the optimizer and added new features in this release to make it better than previous release. In this article we are going to see some of the explicit new features of Oracle optimizer which helps us in tuning our queries. Objective In this article, Advait Deo and Indira Karnati, authors of the book OCP Upgrade 1Z0-060 Exam guide discusses new features of Oracle 12c optimizer and how it helps in improving the SQL plan. It also discusses some of the limitations of optimizer in previous release and how Oracle has overcome those limitations in this release. Specifically, we are going to discuss about dynamic plan and how it works (For more resources related to this topic, see here.) SQL Tuning Before we go into the details of each of these new features, let us rewind and check what we used to have in Oracle 11g. Behavior in Oracle 11g R1 Whenever an SQL is executed for the first time, an optimizer will generate an execution plan for the SQL based on the statistics available for the different objects used in the plan. If statistics are not available, or if the optimizer thinks that the existing statistics are of low quality, or if we have complex predicates used in the SQL for which the optimizer cannot estimate the cardinality, the optimizer may choose to use dynamic sampling for those tables. So, based on the statistics values, the optimizer generates the plan and executes the SQL. But, there are two problems with this approach: Statistics generated by dynamic sampling may not be of good quality as they are generated in limited time and are based on a limited sample size. But a trade-off is made to minimize the impact and try to approach a higher level of accuracy. The plan generated using this approach may not be accurate, as the estimated cardinality may differ a lot from the actual cardinality. The next time the query executes, it goes for soft parsing and picks the same plan. Behavior in Oracle 11g R2 To overcome these drawbacks, Oracle enhanced the dynamic sampling feature further in Oracle11g Release 2. In the 11.2 release, Oracle will automatically enable dynamic sample when the query is run if statistics are missing, or if the optimizer thinks that current statistics are not up to the mark. The optimizer also decides the level of the dynamic sample, provided the user does not set the non-default value of the OPTIMIZER_DYNAMIC_SAMPLING parameter (default value is 2). So, if this parameter has a default value in Oracle11g R2, the optimizer will decide when to spawn dynamic sampling in a query and at what level to spawn the dynamic sample. Oracle also introduced a new feature in Oracle11g R2 called cardinality feedback. This was in order to further improve the performance of SQLs, which are executed repeatedly and for which the optimizer does not have the correct cardinality, perhaps because of missing statistics, or complex predicate conditions, or because of some other reason. In such cases, cardinality feedback was very useful. The way cardinality feedback works is, during the first execution, the plan for the SQL is generated using the traditional method without using cardinality feedback. 
However, during the optimization stage of the first execution, the optimizer notes down all the estimates that are of low quality (due to missing statistics, complex predicates, or some other reason) and monitoring is enabled for the cursor that is created. If this monitoring is enabled during the optimization stage, then, at the end of the first execution, some cardinality estimates in the plan are compared with the actual estimates to understand how significant the variation is. If the estimates vary significantly, then the actual estimates for such predicates are stored along with the cursor, and these estimates are used directly for the next execution instead of being discarded and calculated again. So when the query executes the next time, it will be optimized again (hard parse will happen), but this time it will use the actual statistics or predicates that were saved in the first execution, and the optimizer will come up with better plan. But even with these improvements, there are drawbacks: With cardinality feedback, any missing cardinality or correct estimates are available for the next execution only and not for the first execution. So the first execution always go for regression. The dynamic sample improvements (that is, the optimizer deciding whether dynamic sampling should be used and the level of the dynamic sampling) are only applicable to parallel queries. It is not applicable to queries that aren't running in parallel. Dynamic sampling does not include joins and groups by columns. Oracle 12c has provided new improvements, which eliminates the drawbacks of Oracle11g R2. Adaptive execution plans – dynamic plans The Oracle optimizer chooses the best execution plan for a query based on all the information available to it. Sometimes, the optimizer may not have sufficient statistics or good quality statistics available to it, making it difficult to generate optimal plans. In Oracle 12c, the optimizer has been enhanced to adapt a poorly performing execution plan at run time and prevent a poor plan from being chosen on subsequent executions. An adaptive plan can change the execution plan in the current run when the optimizer estimates prove to be wrong. This is made possible by collecting the statistics at critical places in a plan when the query starts executing. A query is internally split into multiple steps, and the optimizer generates multiple sub-plans for every step. Based on the statistics collected at critical points, the optimizer compares the collected statistics with estimated cardinality. If the optimizer finds a deviation in statistics beyond the set threshold, it picks a different sub-plan for those steps. This improves the ability of the query-processing engine to generate better execution plans. What happens in adaptive plan execution? In Oracle12c, the optimizer generates dynamic plans. A dynamic plan is an execution plan that has many built-in sub-plans. A sub-plan is a portion of plan that the optimizer can switch to as an alternative at run time. When the first execution starts, the optimizer observes statistics at various critical stages in the plan. An optimizer makes a final decision about the sub-plan based on observations made during the execution up to this point. Going deeper into the logic for the dynamic plan, the optimizer actually places the statistics collected at various critical stages in the plan. 
These critical stages are the places in the plan where the optimizer has to join two tables or where the optimizer has to decide upon the optimal degree of parallelism. During the execution of the plan, the statistics collector buffers a portion of the rows. The portion of the plan preceding the statistics collector can have alternative sub-plans, each of which is valid for the subset of possible values returned by the collector. This means that each of the sub-plans has a different threshold value. Based on the data returned by the statistics collector, a sub-plan is chosen which falls in the required threshold. For example, an optimizer can insert a code to collect statistics before joining two tables, during the query plan building phase. It can have multiple sub-plans based on the type of join it can perform between two tables. If the number of rows returned by the statistics collector on the first table is less than the threshold value, then the optimizer might go with the sub-plan containing the nested loop join. But if the number of rows returned by the statistics collector is above the threshold values, then the optimizer might choose the second sub-plan to go with the hash join. After the optimizer chooses a sub-plan, buffering is disabled and the statistics collector stops collecting rows and passes them through instead. On subsequent executions of the same SQL, the optimizer stops buffering and chooses the same plan instead. With dynamic plans, the optimizer adapts to poor plan choices and correct decisions are made at various steps during runtime. Instead of using predetermined execution plans, adaptive plans enable the optimizer to postpone the final plan decision until statement execution time. Consider the following simple query: SELECT a.sales_rep, b.product, sum(a.amt) FROM sales a, product b WHERE a.product_id = b.product_id GROUP BY a.sales_rep, b.product When the query plan was built initially, the optimizer will put the statistics collector before making the join. So it will scan the first table (SALES) and, based on the number of rows returned, it might make a decision to select the correct type of join. The following figure shows the statistics collector being put in at various stages: Enabling adaptive execution plans To enable adaptive execution plans, you need to fulfill the following conditions: optimizer_features_enable should be set to the minimum of 12.1.0.1 optimizer_adapive_reporting_only should be set to FALSE (default) If you set the OPTIMIZER_ADAPTIVE_REPORTING_ONLY parameter to TRUE, the adaptive execution plan feature runs in the reporting-only mode—it collects the information for adaptive optimization, but doesn't actually use this information to change the execution plans. You can find out if the final plan chosen was the default plan by looking at the column IS_RESOLVED_ADAPTIVE_PLAN in the view V$SQL. Join methods and parallel distribution methods are two areas where adaptive plans have been implemented by Oracle12c. Adaptive execution plans and join methods Here is an example that shows how the adaptive execution plan will look. Instead of simulating a new query in the database and checking if the adaptive plan has worked, I used one of the queries in the database that is already using the adaptive plan. You can get many such queries if you check V$SQL with is_resolved_adaptive_plan = 'Y'. The following queries will list all SQLs that are going for adaptive plans. 
Select sql_id from v$sql where is_resolved_adaptive_plan = 'Y'; While evaluating the plan, the optimizer uses the cardinality of the join to select the superior join method. The statistics collector starts buffering the rows from the first table, and if the number of rows exceeds the threshold value, the optimizer chooses to go for a hash join. But if the rows are less than the threshold value, the optimizer goes for a nested loop join. The following is the resulting plan: SQL> SELECT * FROM TABLE(DBMS_XPLAN.display_cursor(sql_id=>'dhpn35zupm8ck',cursor_child_no=>0; Plan hash value: 3790265618 ------------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | | | 445 (100)| | | 1 | SORT ORDER BY | | 1 | 73 | 445 (1)| 00:00:01| | 2 | NESTED LOOPS | | 1 | 73 | 444 (0)| 00:00:01| | 3 | NESTED LOOPS | | 151 | 73 | 444 (0)| 00:00:01| |* 4 | TABLE ACCESS BY INDEX ROWID BATCHED| OBJ$ | 151 | 7701 | 293 (0)| 00:00:01| |* 5 | INDEX FULL SCAN | I_OBJ3 | 1 | | 20 (0)| 00:00:01| |* 6 | INDEX UNIQUE SCAN | I_TYPE2 | 1 | | 0 (0)| | |* 7 | TABLE ACCESS BY INDEX ROWID | TYPE$ | 1 | 22 | 1 (0)| 00:00:01| ------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 4 - filter(SYSDATE@!-"O"."CTIME">.0007) 5 - filter("O"."OID$" IS NOT NULL) 6 - access("O"."OID$"="T"."TVOID") 7 - filter(BITAND("T"."PROPERTIES",8388608)=8388608) Note ----- - this is an adaptive plan If we check this plan, we can see the notes section, and it tells us that this is an adaptive plan. It tells us that the optimizer must have started with some default plan based on the statistics in the tables and indexes, and during run time execution it changed the join method for a sub-plan. You can actually check which step optimizer has changed and at what point it has collected the statistics. 
You can display this using the new format of DBMS_XPLAN.DISPLAY_CURSOR – format => 'adaptive', resulting in the following: DEO>SELECT * FROM TABLE(DBMS_XPLAN.display_cursor(sql_id=>'dhpn35zupm8ck',cursor_child_no=>0,format=>'adaptive')); Plan hash value: 3790265618 ------------------------------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | | | 445 (100)| | | 1 | SORT ORDER BY | | 1 | 73 | 445 (1)| 00:00:01 | |- * 2 | HASH JOIN | | 1 | 73 | 444 (0)| 00:00:01 | | 3 | NESTED LOOPS | | 1 | 73 | 444 (0)| 00:00:01 | | 4 | NESTED LOOPS | | 151 | 73 | 444 (0)| 00:00:01 | |- 5 | STATISTICS COLLECTOR | | | | | | | * 6 | TABLE ACCESS BY INDEX ROWID BATCHED| OBJ$ | 151 | 7701 | 293 (0)| 00:00:01 | | * 7 | INDEX FULL SCAN | I_OBJ3 | 1 | | 20 (0)| 00:00:01 | | * 8 | INDEX UNIQUE SCAN | I_TYPE2 | 1 | | 0 (0)| | | * 9 | TABLE ACCESS BY INDEX ROWID | TYPE$ | 1 | 22 | 1 (0)| 00:00:01 | |- * 10 | TABLE ACCESS FULL | TYPE$ | 1 | 22 | 1 (0)| 00:00:01 | ------------------------------------------------------------------------------------------------------ Predicate Information (identified by operation id): --------------------------------------------------- 2 - access("O"."OID$"="T"."TVOID") 6 - filter(SYSDATE@!-"O"."CTIME">.0007) 7 - filter("O"."OID$" IS NOT NULL) 8 - access("O"."OID$"="T"."TVOID") 9 - filter(BITAND("T"."PROPERTIES",8388608)=8388608) 10 - filter(BITAND("T"."PROPERTIES",8388608)=8388608) Note ----- - this is an adaptive plan (rows marked '-' are inactive) In this output, you can see that it has given three extra steps. Steps 2, 5, and 10 are extra. But these steps were present in the original plan when the query started. Initially, the optimizer generated a plan with a hash join on the outer tables. During runtime, the optimizer started collecting rows returned from OBJ$ table (Step 6), as we can see the STATISTICS COLLECTOR at step 5. Once the rows are buffered, the optimizer came to know that the number of rows returned by the OBJ$ table are less than the threshold and so it can go for a nested loop join instead of a hash join. The rows indicated by - in the beginning belong to the original plan, and they are removed from the final plan. Instead of those records, we have three new steps added—Steps 3, 8, and 9. Step 10 of the full table scan on the TYPE$ table is changed to an index unique scan of I_TYPE2, followed by the table accessed by index rowed at Step 9. Adaptive plans and parallel distribution methods Adaptive plans are also useful in adapting from bad distributing methods when running the SQL in parallel. Parallel execution often requires data redistribution to perform parallel sorts, joins, and aggregates. The database can choose from among multiple data distribution methods to perform these options. The number of rows to be distributed determines the data distribution method, along with the number of parallel server processes. If many parallel server processes distribute only a few rows, the database chooses a broadcast distribution method and sends the entire result set to all the parallel server processes. On the other hand, if a few processes distribute many rows, the database distributes the rows equally among the parallel server processes by choosing a "hash" distribution method. In adaptive plans, the optimizer does not commit to a specific broadcast method. 
Instead, the optimizer starts with an adaptive parallel data distribution technique called hybrid data distribution. It places a statistics collector to buffer rows returned by the table. Based on the number of rows returned, the optimizer decides the distribution method. If the rows returned by the result are less than the threshold, the data distribution method switches to broadcast distribution. If the rows returned by the table are more than the threshold, the data distribution method switches to hash distribution. Summary In this article we learned the explicit new features of Oracle optimizer which helps us in tuning our queries. Resources for Article: Further resources on this subject: Oracle Essbase System 9 Components [article] Oracle E-Business Suite: Adjusting Items in Inventory and Classifying Items [article] Oracle Business Intelligence : Getting Business Information from Data [article]

An Editor Tool Tutorial

Ahmed Elgoni
13 Dec 2016
6 min read
Often, when making games, we use different programs to author the various content that goes into our game. When it’s time to stick all the pieces together, things have a habit of catching fire/spiders. Avoiding flaming arachnids is usually a good idea, so in this tutorial, we’re going to learn how to do just that. Note that for this tutorial, I’m assuming that you have some experience writing gameplay code in Unity and want to learn the ways to improve your workflow. We’re going to write a small editor tool to turn a text file representation of a level, which we’re pretending was authored in an external-level editing program, into an actual level. We’re going to turn this: Into this: Protip: Before writing any tool, it’s important that you weigh the pros and cons of writing that tool. Many people make the mistake of taking a long time to write a complicated tool, and what ends up happening is that the tool takes longer to write than it would have taken to do the tool’s job manually! Doing stuff manually isn’t fun, but remember, the end goal is to save time, not to show off your automation skills. First up, let’s understand what our level file is actually telling us. Take a look at the first image again. Every dot is a space character and the two letter words represent different kinds of tiles. From our imaginary-level design program, we know that the keywords are as follows: c1 = cloud_1 bo = bonus_tile gr = grass_tile wa = water_tile We need some way to represent this mapping of a keyword to an asset in Unity. To do this, we’re going to use scriptable objects. What is a scriptable object? From the Unity documentation: “ScriptableObject is a class that allows you to store large quantities of shared data independent from script instances.” They’re most commonly used as data containers for things such as item descriptions, dialogue trees or, in our case, a description of our keyword mapping. Our scriptable object will look like this: Take a look at the source code behind this here. Let’s go through some of the more outlandish lines of code: [System.Serializable] This tells Unity that the struct should be serialized. For our purposes, it means that the struct will be visible in the inspector and also retain its values after we enter and exit play mode. [CreateAssetMenu(fileName = "New Tileset", menuName = "Tileset/New", order = 1)] This is a nifty attribute that Unity provides for us, which lets us create an instance of our scriptable object from the assets menu. The attribute arguments are described here. In action, it looks like this: You can also access this menu by right clicking in the project view. Next up, let’s look at the actual editor window. Here’s what we want to achieve: It should take a map file, like the one at the start of this tutorial, and a definition of what the keywords in that file map to (the scriptable object we’ve just written), and use that to generate a map where each tile is TileWidth wide and TileHeight tall. The source code for the TileToolWIndow is available here. Once again, let’s take a look at the stranger lines of code: using UnityEditor; public class TileToolWindow : EditorWindow To make editor tools, we need to include the editor namespace. We’re also inheriting the EditorWindow class instead of the usual Monobehaviour. Protip: When you have using UnityEditor; in a script, your game will run fine in the editor, but will refuse to compile to a standalone. 
The reason for this is that the UnityEditor namespace isn’t included in compiled builds, so your scripts will be referencing to a namespace that doesn’t exist anymore. To get around this, you can put your code in a special folder called Editor, which you can create anywhere in your Assets folder. What Unity does is that it ignores any scripts in this folder, or subsequent subfolders, when it’s time to compile code. You can have multiple Editor folders, and they can be inside of other folders. You can find out more about special folders here. [MenuItem("Tools/Tile Tool")] public static void InitializeWindow() { TileToolWindow window = EditorWindow.GetWindow(typeof(TileToolWindow)) as TileToolWindow; window.titleContent.text = "Tile Tool"; window.titleContent.tooltip = "This tool will instantiate a map based off of a text file"; window.Show(); } The [MenuItem("Path/To/Option")] attribute will create a menu item in the toolbar. You can only use this attribute with static functions. It looks like this: In the InitializeWindow function, we create a window and change the window title and tooltip. Finally, we show the window. Something to look out for is Editor.GetWindow, which returns an EditorWindow. We have to cast it to our derived class type if we want to be able to use any extended functions or fields. private void OnGUI() We use OnGUI just like we would with a MonoBehaviour. The only difference is that we can now make calls to the EditorGUI and EditorGUILayout classes. These contain functions that allow us to create controls that look like native Unity editor UI. A lot of the EditorGUI functions require some form of casting. For example: EditorGUILayout.ObjectField("Map File", mapFile, typeof(TextAsset), false) as TextAsset; Remember that if you’re getting compilation errors, it’s likely that you’re forgetting to cast to the correct variable type. The last strange lines are: Undo.RegisterCreatedObjectUndo(folderObject.gameObject, "Create Folder Object"); And: GameObject instantiatedPrefab = Instantiate(currentPrefab, new Vector3(j * tileWidth, i * tileHeight, 0), Quaternion.identity, folderObject) as GameObject; Undo.RegisterCreatedObjectUndo(instantiatedPrefab, "Create Tile"); The Undo class does exactly what it says on the tin. It’s always a good idea to allow your editor tool operations to be undone, especially if they follow a complicated set of steps! The rest of the code is relatively straightforward. If you want to take a look at the complete project, you can find it on GitHub. Thanks for reading! About the author Ahmed Elgoni is a wizard who is currently masquerading as a video game programmer. When he's not doing client work, he's probably trying to turn a tech demo into a cohesive game. One day, he'll succeed! You can find him on twitter @stray_train.

Getting Started with React and Bootstrap

Packt
13 Dec 2016
18 min read
In this article by Mehul Bhat and Harmeet Singh, the authors of the book Learning Web Development with React and Bootstrap, you will learn to build two simple real-time examples: Hello World! with ReactJS A simple static form application with React and Bootstrap There are many different ways to build modern web applications with JavaScript and CSS, including a lot of different tool choices, and there is a lot of new theory to learn. This article will introduce you to ReactJS and Bootstrap, which you will likely come across as you learn about modern web application development. It will then take you through the theory behind the Virtual DOM and managing the state to create interactive, reusable and stateful components ourselves and then push it to a Redux architecture. You will have to learn very carefully about the concept of props and state management and where it belongs. If you want to learn code then you have to write code, in whichever way you feel comfortable. Try to create small components/code samples, which will give you more clarity/understanding of any technology. Now, let's see how this article is going to make your life easier when it comes to Bootstrap and ReactJS. Facebook has really changed the way we think about frontend UI development with the introduction of React. One of the main advantages of this component-based approach is that it is easy to understand, as the view is just a function of the properties and state. We're going to cover the following topics: Setting up the environment ReactJS setup Bootstrap setup Why Bootstrap? Static form example with React and Bootstrap (For more resources related to this topic, see here.) ReactJS React (sometimes called React.js or ReactJS) is an open-source JavaScript library that provides a view for data rendered as HTML. Components have been used typically to render React views that contain additional components specified as custom HTML tags. React gives you a trivial virtual DOM, powerful views without templates, unidirectional data flow, and explicit mutation. It is very methodical in updating the HTML document when the data changes; and provides a clean separation of components on a modern single-page application. From below example, we will have clear idea on normal HTML encapsulation and ReactJS custom HTML tag. JavaScript code: <section> <h2>Add your Ticket</h2> </section> <script> var root = document.querySelector('section').createShadowRoot(); root.innerHTML = '<style>h2{ color: red; }</style>' + '<h2>Hello World!</h2>'; </script> ReactJS code: var sectionStyle = { color: 'red' }; var AddTicket = React.createClass({ render: function() { return (<section><h2 style={sectionStyle}> Hello World!</h2></section>)} }) ReactDOM.render(<AddTicket/>, mountNode); As your app comes into existence and develops, it's advantageous to ensure that your components are used in the right manner. The React app consists of reusable components, which makes code reuse, testing, and separation of concerns easy. React is not only the V in MVC, it has stateful components (stateful components remembers everything within this.state). It handles mapping from input to state changes, and it renders components. In this sense, it does everything that an MVC does. Let's look at React's component life cycle and its different levels. Observe the following screenshot: React isn't an MVC framework; it's a library for building a composable user interface and reusable components. 
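Since stateful components and this.state will come up again and again, here is a small, hypothetical sketch in the same React.createClass style used above; the Counter component is made up for illustration and is not part of the example app (mountNode stands for whatever DOM node you render into, as in the earlier snippet):
var Counter = React.createClass({
  getInitialState: function() {
    return { clicks: 0 };
  },
  handleClick: function() {
    // setState updates the component's state and schedules a re-render
    this.setState({ clicks: this.state.clicks + 1 });
  },
  render: function() {
    return (
      <button onClick={this.handleClick}>
        Clicked {this.state.clicks} times
      </button>
    );
  }
});
ReactDOM.render(<Counter/>, mountNode);
Every call to setState triggers a re-render, which is exactly the kind of state management that the later discussion of props, state, and Redux builds on.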
Setting up the environment

When we start to build an application with ReactJS, we need to do some setup, which just involves an HTML page and a few included files. First, we create a directory (folder) called Chapter 1. Open it up in any code editor of your choice. Create a new file called index.html directly inside it and add the following HTML5 boilerplate code:

    <!doctype html>
    <html class="no-js" lang="">
      <head>
        <meta charset="utf-8">
        <title>ReactJS Chapter 1</title>
      </head>
      <body>
        <!--[if lt IE 8]>
          <p class="browserupgrade">You are using an <strong>outdated</strong> browser.
          Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your experience.</p>
        <![endif]-->
        <!-- Add your site or application content here -->
        <p>Hello world! This is HTML5 Boilerplate.</p>
      </body>
    </html>

This is a standard HTML page that we can update once we have included the React and Bootstrap libraries. Now we need to create a couple of folders inside the Chapter 1 folder, named images, css, and js (JavaScript), to keep the application manageable. Once you have completed the folder structure, it will look like this:

Installing ReactJS and Bootstrap

Once we have finished creating the folder structure, we need to install both of our frameworks, ReactJS and Bootstrap. It's as simple as including JavaScript and CSS files in your page. We could do this via a content delivery network (CDN), such as Google or Microsoft, but we are going to fetch the files manually in our application so we don't have to depend on the Internet while working offline.

Installing React

First, go to https://facebook.github.io/react/ and hit the download button. This will give you a ZIP file of the latest version of ReactJS that includes the ReactJS library files and some sample code. For now, we will only need two files in our application: react.min.js and react-dom.min.js from the build directory of the extracted folder. Here are the steps we need to follow:

Copy react.min.js and react-dom.min.js to your project directory, the Chapter 1/js folder, and open up your index.html file in your editor. Now you just need to add the following scripts to your page's head tag section:

    <script type="text/javascript" src="js/react.min.js"></script>
    <script type="text/javascript" src="js/react-dom.min.js"></script>

Now we need to include the Babel browser compiler in our project to transform our JSX, because we are not using build tools such as npm. Download the file from the following CDN path, https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.23/browser.min.js, and save it to the js folder, or reference the CDN path directly. The head tag section will look like this:

    <script type="text/javascript" src="js/react.min.js"></script>
    <script type="text/javascript" src="js/react-dom.min.js"></script>
    <script type="text/javascript" src="js/browser.min.js"></script>

Here is what the final structure of your js folder will look like:

Bootstrap

Bootstrap is an open source frontend framework maintained by Twitter for developing responsive websites and web applications. It includes HTML, CSS, and JavaScript code to build user interface components. It's a fast and easy way to develop a powerful, mobile-first user interface. The Bootstrap grid system allows you to create responsive 12-column grids, layouts, and components. It includes predefined classes for easy layout options (fixed width and full width).
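As a quick, hedged illustration of what these predefined classes look like in practice (this snippet is not part of the original walkthrough, and it is plain HTML rather than JSX, so it uses class instead of className), only a couple of Bootstrap classes are needed to pick up the framework's styling:

    <!-- A fixed-width, centered wrapper with one row spanning all 12 columns -->
    <div class="container">
      <div class="row">
        <div class="col-sm-12">
          <!-- "btn" is the base button class, "btn-primary" picks the color variant -->
          <button type="button" class="btn btn-primary">Sign in</button>
        </div>
      </div>
    </div>

Swapping container for container-fluid would make the same wrapper stretch to the full width of the viewport.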
Bootstrap ships with a dozen pre-styled reusable components and custom jQuery plugins, such as buttons, alerts, dropdowns, modals, tooltips, tabs, pagination, carousels, badges, icons, and much more.

Installing Bootstrap

Now, we need to install Bootstrap. Visit http://getbootstrap.com/getting-started/#download and click the Download Bootstrap button. The download includes the compiled and minified versions of the CSS and JS for our app; we just need the CSS file bootstrap.min.css and the fonts folder. This style sheet will provide you with the look and feel of all of the components and the responsive layout structure for our application. Previous versions of Bootstrap included icons as images but, in version 3, icons have been replaced with fonts. We can also customize the Bootstrap CSS style sheet according to the components used in your application:

Extract the ZIP folder and copy the Bootstrap CSS from the css folder to your project's css folder.
Now copy the fonts folder of Bootstrap into your project root directory.
Open your index.html in your editor and add this link tag in your head section:

    <link rel="stylesheet" href="css/bootstrap.min.css">

That's it. Now we can open up index.html again, but this time in your browser, to see what we are working with. Here is the code that we have written so far:

    <!doctype html>
    <html class="no-js" lang="">
      <head>
        <meta charset="utf-8">
        <title>ReactJS Chapter 1</title>
        <link rel="stylesheet" href="css/bootstrap.min.css">
        <script type="text/javascript" src="js/react.min.js"></script>
        <script type="text/javascript" src="js/react-dom.min.js"></script>
        <script src="https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.23/browser.min.js"></script>
      </head>
      <body>
        <!--[if lt IE 8]>
          <p class="browserupgrade">You are using an <strong>outdated</strong> browser.
          Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your experience.</p>
        <![endif]-->
        <!-- Add your site or application content here -->
      </body>
    </html>

Using React

So now we've got the ReactJS scripts and the Bootstrap style sheet in place, and we've initialized our app. Now let's start to write our first Hello World app using ReactDOM.render(). The first argument of the ReactDOM.render method is the component we want to render, and the second is the DOM node it should mount (append) to:

    ReactDOM.render(ReactElement element, DOMElement container, [function callback])

To have the JSX translated to vanilla JavaScript, we wrap our React code in a <script type="text/babel"> tag; the Babel browser compiler we included earlier performs the transformation in the browser. Let's start out by putting one div tag in our body tag:

    <div id="hello"></div>

Now, add the script tag with the React code:

    <script type="text/babel">
      ReactDOM.render(
        <h1>Hello, world!</h1>,
        document.getElementById('hello')
      );
    </script>

Let's open the HTML page in your browser. If you see Hello, world! in your browser (as in the preceding screenshot), then we are on the right track. That's great. We have successfully completed our setup and built our first Hello World app.
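To see why the Babel step (browser.min.js) is needed, here is a hedged sketch, not part of the original text, of roughly what the JSX above compiles down to; without the text/babel transformation, we would have to write this plain-JavaScript form ourselves:

    <script type="text/javascript">
      // The JSX <h1>Hello, world!</h1> is roughly equivalent to this call
      ReactDOM.render(
        React.createElement('h1', null, 'Hello, world!'),
        document.getElementById('hello')
      );
    </script>

Either form produces the same output; JSX is simply the more readable way to describe the element tree.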
Here is the full code that we have written so far:

    <!doctype html>
    <html class="no-js" lang="">
      <head>
        <meta charset="utf-8">
        <title>ReactJS Chapter 1</title>
        <link rel="stylesheet" href="css/bootstrap.min.css">
        <script type="text/javascript" src="js/react.min.js"></script>
        <script type="text/javascript" src="js/react-dom.min.js"></script>
        <script src="https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.23/browser.min.js"></script>
      </head>
      <body>
        <!--[if lt IE 8]>
          <p class="browserupgrade">You are using an <strong>outdated</strong> browser.
          Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your experience.</p>
        <![endif]-->
        <!-- Add your site or application content here -->
        <div id="hello"></div>
        <script type="text/babel">
          ReactDOM.render(
            <h1>Hello, world!</h1>,
            document.getElementById('hello')
          );
        </script>
      </body>
    </html>

Static form with React and Bootstrap

We have completed our first Hello World app with React and Bootstrap, and everything looks as expected. Now it's time to do more and create a static login form, applying the Bootstrap look and feel to it. Bootstrap is a great way to give your app a responsive grid system for different mobile devices and to apply fundamental styles to HTML elements with the inclusion of a few classes and divs. The responsive grid system is an easy, flexible, and quick way to make your web application responsive and mobile first; it scales appropriately up to 12 columns per device and viewport size.

First, let's make an HTML structure that follows the Bootstrap grid system. Create a div and add the className container (for fixed width) or container-fluid (for full width). Use the className attribute instead of class:

    <div className="container-fluid"></div>

As we know, class and for are discouraged as XML attribute names. Moreover, they are reserved words in many JavaScript libraries, so to keep the distinction clear we use className and htmlFor instead of class and for. Next, create a div and add the className row. The row must be placed within .container-fluid:

    <div className="container-fluid">
      <div className="row"></div>
    </div>

Now create columns, which must be immediate children of a row:

    <div className="container-fluid">
      <div className="row">
        <div className="col-lg-6"></div>
      </div>
    </div>

.row and the .col-*-* classes are predefined classes available for quickly making grid layouts. Add the h1 tag for the title of the page:

    <div className="container-fluid">
      <div className="row">
        <div className="col-sm-6">
          <h1>Login Form</h1>
        </div>
      </div>
    </div>

Grid columns are created by specifying how many of the 12 available columns an element should span. For example, a four-column layout of equal widths would use col-sm-3 for each column.

col-sm-*  Small devices
col-md-*  Medium devices
col-lg-*  Large devices

We are using the col-sm-* prefix to size our columns for small devices.
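These prefixes can also be combined on a single element so that the layout adapts at each breakpoint. The following snippet is a hedged illustration, not part of the original form, of a two-column block that stacks on phones, splits evenly on small screens, and becomes a sidebar-style split on large screens:

    <div className="container-fluid">
      <div className="row">
        {/* Full width on extra-small, half width on small, one third on large screens */}
        <div className="col-xs-12 col-sm-6 col-lg-4">Navigation</div>
        {/* Full width on extra-small, half width on small, two thirds on large screens */}
        <div className="col-xs-12 col-sm-6 col-lg-8">Content</div>
      </div>
    </div>

On a viewport narrower than the small breakpoint, both divs fall back to the full-width col-xs-12 behavior and stack vertically.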
Inside the columns, we need to wrap our form elements, the label and input tags, in a div tag with the form-group class:

    <div className="form-group">
      <label htmlFor="emailInput">Email address</label>
      <input type="email" className="form-control" id="emailInput" placeholder="Email"/>
    </div>

To pick up Bootstrap's form styling, we need to add the form-control class to our input elements. If we need extra padding on our label tag, we can add the control-label class to the label. Let's quickly add the rest of the elements: a password field and a submit button.

In previous versions of Bootstrap, form elements were usually wrapped in an element with the form-actions class. However, in Bootstrap 3, we just use the same form-group class instead of form-actions. Here is our complete markup:

    <div className="container-fluid">
      <div className="row">
        <div className="col-lg-6">
          <form>
            <h1>Login Form</h1>
            <hr/>
            <div className="form-group">
              <label htmlFor="emailInput">Email address</label>
              <input type="email" className="form-control" id="emailInput" placeholder="Email"/>
            </div>
            <div className="form-group">
              <label htmlFor="passwordInput">Password</label>
              <input type="password" className="form-control" id="passwordInput" placeholder="Password"/>
            </div>
            <button type="submit" className="btn btn-default col-xs-offset-9 col-xs-3">Submit</button>
          </form>
        </div>
      </div>
    </div>

Now create a variable called loginFormHTML inside the script tag and assign this JSX to it:

    var loginFormHTML = <div className="container-fluid">
      <div className="row">
        <div className="col-lg-6">
          <form>
            <h1>Login Form</h1>
            <hr/>
            <div className="form-group">
              <label htmlFor="emailInput">Email address</label>
              <input type="email" className="form-control" id="emailInput" placeholder="Email"/>
            </div>
            <div className="form-group">
              <label htmlFor="passwordInput">Password</label>
              <input type="password" className="form-control" id="passwordInput" placeholder="Password"/>
            </div>
            <button type="submit" className="btn btn-default col-xs-offset-9 col-xs-3">Submit</button>
          </form>
        </div>
      </div>

We will pass this variable to the ReactDOM.render() method instead of passing the JSX directly:

    ReactDOM.render(loginFormHTML, document.getElementById('hello'));

Our form is ready. Now let's see how it looks in the browser.

The compiler is unable to parse our markup because we have not closed one of the div tags properly. You can see in the code above that we have not closed the wrapper container-fluid at the end. Now close the wrapper tag at the end and open the file again in your browser.

Note: Whenever you hand-code your HTML, please double-check your start tags and end tags. They should be written and closed properly, otherwise it will break your UI/frontend look and feel.

Here is the HTML after closing the div tag:

    <!doctype html>
    <html class="no-js" lang="">
      <head>
        <meta charset="utf-8">
        <title>ReactJS Chapter 1</title>
        <link rel="stylesheet" href="css/bootstrap.min.css">
        <script type="text/javascript" src="js/react.min.js"></script>
        <script type="text/javascript" src="js/react-dom.min.js"></script>
        <script src="js/browser.min.js"></script>
      </head>
      <body>
        <!-- Add your site or application content here -->
        <div id="loginForm"></div>
        <script type="text/babel">
          var loginFormHTML = <div className="container-fluid">
            <div className="row">
              <div className="col-lg-6">
                <form>
                  <h1>Login Form</h1>
                  <hr/>
                  <div className="form-group">
                    <label htmlFor="emailInput">Email address</label>
                    <input type="email" className="form-control" id="emailInput" placeholder="Email"/>
                  </div>
                  <div className="form-group">
                    <label htmlFor="passwordInput">Password</label>
                    <input type="password" className="form-control" id="passwordInput" placeholder="Password"/>
                  </div>
                  <button type="submit" className="btn btn-default col-xs-offset-9 col-xs-3">Submit</button>
                </form>
              </div>
            </div>
          </div>;

          ReactDOM.render(loginFormHTML, document.getElementById('loginForm'));
        </script>
      </body>
    </html>

Now you can check your page in the browser, and you will be able to see the form with the look and feel shown below. It's working fine and looks good.
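Since the article stresses reusable components, here is a hedged sketch (not from the original text) of how the same markup could be wrapped in a React.createClass component instead of a plain variable, so it can be mounted or reused like any other custom tag; the component name LoginForm is an illustrative assumption:

    <script type="text/babel">
      // Wrapping the static JSX in a component makes it reusable and easier to test
      var LoginForm = React.createClass({
        render: function() {
          return (
            <div className="container-fluid">
              <div className="row">
                <div className="col-lg-6">
                  <form>
                    <h1>Login Form</h1>
                    <hr/>
                    <div className="form-group">
                      <label htmlFor="emailInput">Email address</label>
                      <input type="email" className="form-control" id="emailInput" placeholder="Email"/>
                    </div>
                    <div className="form-group">
                      <label htmlFor="passwordInput">Password</label>
                      <input type="password" className="form-control" id="passwordInput" placeholder="Password"/>
                    </div>
                    <button type="submit" className="btn btn-default col-xs-offset-9 col-xs-3">Submit</button>
                  </form>
                </div>
              </div>
            </div>
          );
        }
      });

      // The component is rendered with a custom tag, just like the AddTicket example earlier
      ReactDOM.render(<LoginForm/>, document.getElementById('loginForm'));
    </script>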
Bootstrap also provides two additional classes to make your form elements larger and smaller: input-lg and input-sm. You can check the responsive behavior by resizing the browser. That looks great. Our small static login form application is ready, with responsive behavior. Some of the benefits are:

Rendering your component is very easy
Reading a component's code is very easy with the help of JSX
JSX also helps you check your layout and how components plug in with each other
You can test your code easily, and it also allows integration with other tools for enhancement
As React is a view layer, you can also use it with other JavaScript frameworks

Summary

Our simple static login form application and Hello World examples are looking great and working exactly how they should, so let's recap what we've learned in this article. To begin with, we saw just how easy it is to get ReactJS and Bootstrap installed with the inclusion of JavaScript files and a style sheet. We also looked at how a React application is initialized, and started building our first form application. The Hello World app and form application that we created demonstrate some of React's and Bootstrap's basic features, such as the following:

ReactDOM render
Browserify
Bootstrap

With Bootstrap, we worked towards having a responsive grid system for different mobile devices and applied the fundamental styles to HTML elements with the inclusion of a few classes and divs. We also saw the framework's new mobile-first responsive design in action, without cluttering up our markup with unnecessary classes or elements.

Resources for Article:

Further resources on this subject:

Getting Started with ASP.NET Core and Bootstrap 4 [article]
Frontend development with Bootstrap 4 [article]
Gearing Up for Bootstrap 4 [article]


Tableau Data Extract Best Practices

Packt
12 Dec 2016
11 min read
In this article by Jenny Zhang, author of the book Tableau 10.0 Best Practices, you will learn best practices for Tableau data extracts. We will look into different ways of creating Tableau data extracts and the technical details of how a Tableau data extract works. We will learn how to create extracts with large volumes of data efficiently, and then upload and manage Tableau data extracts in Tableau Online. We will also take a look at refreshing Tableau data extracts, which is useful for keeping your data up to date automatically. Finally, we will take a look at using the Tableau web connector to create a data extract.

(For more resources related to this topic, see here.)

Different ways of creating Tableau data extracts

Tableau provides a few ways to create extracts.

Direct connect to original data sources

Creating an extract by connecting to the original data source (databases, Salesforce, Google Analytics, and so on) will maintain the connection to that original data source. You can right-click the extract to edit it and refresh it from the original data source.

Duplicate of an extract

If you create a duplicate of the extract by right-clicking the data extract and choosing Duplicate, Tableau will create a new .tde file that still maintains the connection to the original data source. If you refresh the duplicated data extract, it will not refresh the original data extract that the duplicate was created from.

Connect to a Tableau Extract File

If you create a data source by connecting to a Tableau extract file (.tde), you will not have the connection to the original data source the extract was created from, since you are just connecting to a local .tde file. You cannot edit or refresh the data from the original data source. Duplicating an extract that connects to a local .tde file will NOT create a new .tde file; the duplicate will still point to the same local .tde file. You can also right-click and choose Extract Data to create an extract out of an extract, but we do not normally do that.

Technical details of how a Tableau data extract works

Tableau data extract's design principle

A Tableau extract (.tde) file is a compressed snapshot of data extracted from a large variety of original data sources (Excel, databases, Salesforce, NoSQL, and so on). It is stored on disk and loaded into memory as required to create a Tableau viz. Two design principles make the Tableau extract ideal for data analytics.

The first principle is that a Tableau extract is a columnar store. Columnar databases store column values rather than row values. The benefit is that the input/output time required to access and aggregate the values in a column is significantly reduced, which is why Tableau extracts are great for data analytics.

The second principle is how a Tableau extract is structured to make the best use of your computer's memory. This affects how it is loaded into memory and used by Tableau. To better understand this principle, we need to understand how a Tableau extract is created and used as the data source for a visualization. When Tableau creates a data extract, it defines the structure of the .tde file and creates a separate file for each column in the original data source. When Tableau retrieves data from the original data source, it sorts, compresses, and adds the values for each column to their own file. After that, the individual column files are combined with metadata to form a single file with as many individual memory-mapped files as there are columns in the original data source.
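As a rough, hedged illustration of why this column-oriented, per-column file layout helps analytics (a conceptual sketch only, not how the .tde format is actually implemented), compare aggregating one field when records are stored row by row versus as separate column arrays:

    // Row store: every record carries all of its fields together
    var rows = [
      { region: 'East', sales: 120, profit: 30 },
      { region: 'West', sales: 95,  profit: 22 },
      { region: 'East', sales: 143, profit: 41 }
    ];
    // Summing sales means scanning every field of every row
    var totalFromRows = rows.reduce(function(sum, r) { return sum + r.sales; }, 0);

    // Column store: each field lives in its own contiguous array (its own file)
    var columns = {
      region: ['East', 'West', 'East'],
      sales:  [120, 95, 143],
      profit: [30, 22, 41]
    };
    // Summing sales only reads the sales column, so far less data is touched
    var totalFromColumns = columns.sales.reduce(function(sum, v) { return sum + v; }, 0);

    console.log(totalFromRows, totalFromColumns); // 358 358

Both approaches produce the same total; the column layout simply lets an aggregation read only the data it needs.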
Because a Tableau data extract file is a memory-mapped file, when Tableau requests data from a .tde file, the data is loaded directly into memory by the operating system. Tableau does not have to open, process, or decompress the file. If needed, the operating system continues to move data in and out of RAM to ensure that all of the requested data is made available to Tableau. This means that Tableau can query data that is bigger than the RAM on the computer.

Benefits of using Tableau data extract

The following are the seven main benefits of using a Tableau data extract:

Performance: Using a Tableau data extract can increase performance when the underlying data source is slow. It can also speed up Custom SQL.

Reduce load: Using a Tableau data extract instead of a live connection to a database reduces the load on the database that can result from heavy traffic.

Portability: A Tableau data extract can be bundled with the visualizations in a packaged workbook for sharing with others.

Pre-aggregation: When creating an extract, you can choose to aggregate your data for certain dimensions. An aggregated extract has a smaller size and contains only aggregated data. Accessing the values of aggregations in a visualization is very fast, since all of the work to derive the values has already been done. You can choose the level of aggregation; for example, you can aggregate your measures to month, quarter, or year.

Materialize calculated fields: When you choose to optimize the extract, all of the calculated fields that have been defined are converted to static values upon the next full refresh. They become additional data fields that can be accessed and aggregated as quickly as any other field in the extract. The performance improvement can be significant, especially for string calculations, since string calculations are much slower than numeric or date calculations.

Publish to Tableau Public and Tableau Online: Tableau Public only supports Tableau extract files. Though Tableau Online can connect to some cloud-based data sources, Tableau data extracts are most commonly used.

Support for certain functions not available with a live connection: Certain functions, such as count distinct, are only available when using a Tableau data extract.

How to create an extract with a large volume of data efficiently

Load a very large Excel file into Tableau

If you have an Excel file with lots of data and lots of formulas, it could take a long time to load into Tableau. The best practice is to save the Excel file as a .csv file and remove all the formulas.

Aggregate the values to a higher dimension

If you do not need values at the level of detail of the underlying data source, aggregating to a higher dimension will significantly reduce the extract size and improve performance.

Use a data source filter

Add a data source filter by right-clicking the data source and choosing Edit Data Source Filter to remove the data you do not need before creating the extract.

Hide unused fields

Hiding unused fields before creating a data extract can speed up extract creation and also save storage space.

Upload and manage Tableau data extract in Tableau Online

Create a workbook just for extracts

One way to create extracts is to create them in different workbooks. The advantage is that you can create extracts on the fly when you need them. The disadvantage is that once you have created many extracts, it is very difficult to manage them all; you can hardly remember which dashboard has which extracts.
A better solution is to use one workbook just to create data extracts and then upload the extracts to Tableau Online. When you need to create visualizations, you can use the extracts in Tableau Online. If you want to manage the extracts further, you can use different workbooks for different types of data sources; for example, one workbook for Excel files, one workbook for local databases, one workbook for web-based data, and so on.

Upload data extracts to the default project

The default project in Tableau Online is a good place to store your data extracts. The reason is that the default project cannot be deleted. Another benefit is that when you use the command line to refresh the data extracts, you do not need to specify a project name if they are in the default project.

Make sure Tableau Online/Server has enough space

In Tableau Online/Server, it's important to make sure that the backgrounder has enough disk space to store existing Tableau data extracts as well as refresh them and create new ones. A good rule of thumb is that the size of the disk available to the backgrounder should be two to three times the size of the data extracts expected to be stored on it.

Refresh Tableau data extract

Local refresh of a published extract:

Download a local copy of the data source from Tableau Online:
Go to the Data Sources tab
Click on the name of the extract you want to download
Click Download

Refresh the local copy:
Open the extract file in Tableau Desktop
Right-click on the data source and choose Extract | Refresh

Publish the refreshed extract to Tableau Online:
Right-click the extract and click Publish to server
You will be asked if you wish to overwrite the file with the same name; click yes

NOTE 1: If you need to make changes to any metadata, do so before publishing to the server.

NOTE 2: If you use the data extract in Tableau Online to create visualizations in multiple workbooks (which you probably do, since that is the benefit of using a shared data source in Tableau Online), please be very careful when making any changes to calculated fields, groups, or other metadata. If you have calculations in the local workbook with the same names as calculations in the data extract in Tableau Online, the Tableau Online version of the calculation will overwrite what you created in the local workbook. So make sure you have the correct calculations in the data extract that will be published to Tableau Online.

Schedule data extract refresh in Tableau Online

Only cloud-based data sources (e.g., Salesforce, Google Analytics) can be refreshed using scheduled jobs in Tableau Online. One option is to use the Tableau Desktop command line to refresh non-cloud-based data sources published to Tableau Online; Windows Task Scheduler can be used to automate these refresh jobs. Another option is to use the sync application, or to manually refresh the extracts using Tableau Desktop.

NOTE: If using the command line to refresh the extract, + cannot be used in the data extract name.
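As a rough sketch of what such a scheduled job might invoke, Tableau Desktop installs a refreshextract command. The installation path, server URL, site, credentials, and data source name below are placeholders, and the exact options vary by Tableau Desktop version, so check the output of tableau help refreshextract before relying on this:

    REM Hypothetical Windows Task Scheduler job that refreshes a published extract
    "C:\Program Files\Tableau\Tableau 10.0\bin\tableau.exe" refreshextract ^
      --server https://online.tableau.com ^
      --site yoursitename ^
      --username you@example.com ^
      --password yourpassword ^
      --datasource "Sales Extract"

Because the example extract lives in the default project, no project option is specified, which is one reason the default project is a convenient home for shared extracts.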
Tips for Incremental Refreshes

The following are tips for incremental refreshes:

Incremental extracts retrieve only new records from the underlying data source, which reduces the amount of time required to refresh the data extract.
If there are no new records to add during an incremental extract, the processes associated with performing an incremental extract still execute.
The performance of an incremental refresh decreases over time. This is because incremental extracts only grow in size, and as a result, the amount of data and the areas of memory that must be accessed in order to satisfy requests grow as well. In addition, larger files are more likely to be fragmented on disk than smaller ones.
When performing an incremental refresh of an extract, records are not replaced. Therefore, using a date field such as "Last Updated" in an incremental refresh could result in duplicate rows in the extract.
Incremental refreshes are not possible after an additional file has been appended to a file-based data source, because the extract has multiple sources at that point.

Use Tableau web connector to create data extract

What is the Tableau web connector?

The Tableau Web Data Connector is an API for people who want to write some code to connect to web-based data such as a web page or web service. Connectors are written in JavaScript. These web connectors can connect to web pages, web services, and so on; they can also read local files.

How to use the Tableau web connector

Click on Data | New Data source | Web Data Connector.

Is the Tableau web connector connection live?

No. The data is pulled when the connection is built, and Tableau stores the data locally in a Tableau extract. You can still refresh the data manually or via scheduled jobs.

Are there any Tableau web connectors available?

Here is a list of web connectors from around the Tableau community:

Alteryx: http://data.theinformationlab.co.uk/alteryx.html
Facebook: http://tableaujunkie.com/post/123558558693/facebook-web-data-connector

You can check the Tableau community for more web connectors.

Summary

In summary, be sure to keep in mind the following best practices for data extracts:

Use a full refresh when possible.
Fully refresh incrementally refreshed extracts on a regular basis.
Publish data extracts to Tableau Online/Server to avoid duplicates.
Hide unused fields and use filters before creating extracts to improve performance and save storage space.
Make sure there is enough contiguous disk space for the largest extract file. A good approach is to use SSD drives.

Resources for Article:

Further resources on this subject:

Getting Started with Tableau Public [article]
Introduction to Practical Business Intelligence [article]
Splunk's Input Methods and Data Feeds [article]