Installing Neutron

Packt
04 Nov 2015
15 min read
In this article by James Denton, author of the book Learning OpenStack Networking (Neutron) - Second Edition, we will learn about OpenStack networking. OpenStack Networking, also known as Neutron, provides a network infrastructure-as-a-service platform to users of the cloud. In this article, I will guide you through the installation of Neutron networking services on top of the OpenStack environment. Components to be installed include:

- Neutron API server
- Modular Layer 2 (ML2) plugin

By the end of this article, you will have a basic understanding of the function and operation of various Neutron plugins and agents, as well as a foundation on top of which a virtual switching infrastructure can be built.

(For more resources related to this topic, see here.)

Basic networking elements in Neutron

Neutron constructs the virtual network using elements that are familiar to all system and network administrators, including networks, subnets, ports, routers, load balancers, and more. Using version 2.0 of the core Neutron API, users can build a network foundation composed of the following entities:

- Network: A network is an isolated layer 2 broadcast domain. Networks are typically reserved for the tenants that created them, but they can be shared among tenants if configured accordingly. The network is the core entity of the Neutron API; subnets and ports must always be associated with a network.
- Subnet: A subnet is an IPv4 or IPv6 address block from which IP addresses can be assigned to virtual machine instances. Each subnet must have a CIDR and must be associated with a network. Multiple subnets can be associated with a single network and can be noncontiguous. A DHCP allocation range can be set for a subnet to limit the addresses provided to instances.
- Port: A port in Neutron represents a virtual switch port on a logical virtual switch. Virtual machine interfaces are mapped to Neutron ports, and the ports define both the MAC address and the IP address to be assigned to the interfaces plugged into them. Neutron port definitions are stored in the Neutron database, which is then used by the respective plugin agent to build and connect the virtual switching infrastructure.

Cloud operators and users alike can configure network topologies by creating and configuring networks and subnets, and then instructing services such as Nova to attach virtual devices to ports on these networks. Users can create multiple networks, subnets, and ports, but are limited to thresholds defined by per-tenant quotas set by the cloud administrator.

Extending functionality with plugins

Neutron introduces support for third-party plugins and drivers that extend network functionality and the implementation of the Neutron API. Plugins and drivers can be created that use a variety of software- and hardware-based technologies to implement the network built by operators and users. There are two major plugin types within the Neutron architecture:

- Core plugin
- Service plugin

A core plugin implements the core Neutron API and is responsible for adapting the logical network described by networks, ports, and subnets into something that can be implemented by the L2 agent and IP address management system running on the host. A service plugin provides additional network services such as routing, load balancing, firewalling, and more.

The Neutron API provides a consistent experience to the user regardless of the chosen networking plugin. For more information on interacting with the Neutron API, visit http://developer.openstack.org/api-ref-networking-v2.html.
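As a rough, hypothetical sketch of what talking to this API looks like from code, the following Node.js snippet lists networks through the Neutron v2.0 REST endpoint. It assumes you already have a valid Keystone token (the value shown is only a placeholder) and uses the controller01:9696 endpoint that is configured later in this article; adjust the host and token for your own environment.

var http = require('http');

// Listing networks through the Neutron v2.0 API.
// Replace the token with a valid Keystone token for your environment.
var options = {
  host: 'controller01',
  port: 9696,
  path: '/v2.0/networks',
  headers: {
    'X-Auth-Token': 'REPLACE_WITH_KEYSTONE_TOKEN',
    'Accept': 'application/json'
  }
};

http.get(options, function(res) {
  var body = '';
  res.on('data', function(chunk) { body += chunk; });
  res.on('end', function() {
    // The API returns a JSON object with a "networks" array
    JSON.parse(body).networks.forEach(function(network) {
      console.log(network.id, network.name);
    });
  });
}).on('error', function(err) {
  console.error('Request failed:', err.message);
});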
Modular Layer 2 plugin

Prior to the inclusion of the Modular Layer 2 (ML2) plugin in the Havana release of OpenStack, Neutron was limited to using a single core plugin at a time. The ML2 plugin replaces two monolithic plugins in its reference implementation: the LinuxBridge plugin and the Open vSwitch plugin. Their respective agents, however, continue to be utilized and can be configured to work with the ML2 plugin.

Drivers

The ML2 plugin introduced the concept of type drivers and mechanism drivers to separate the types of networks being implemented from the mechanisms for implementing networks of those types.

Type drivers

An ML2 type driver maintains type-specific network state, validates provider network attributes, and describes network segments using provider attributes. Provider attributes include network interface labels, segmentation IDs, and network types. Supported network types include local, flat, vlan, gre, and vxlan.

Mechanism drivers

An ML2 mechanism driver is responsible for taking information established by the type driver and ensuring that it is properly implemented. Multiple mechanism drivers can be configured to operate simultaneously, and they can be described using three types of models:

- Agent-based: This includes LinuxBridge, Open vSwitch, and others
- Controller-based: This includes OpenDaylight, VMWare NSX, and others
- Top-of-Rack: This includes Cisco Nexus, Arista, Mellanox, and others

The LinuxBridge and Open vSwitch ML2 mechanism drivers are used to configure their respective switching technologies within nodes that host instances and network services. The LinuxBridge driver supports the local, flat, vlan, and vxlan network types, while the Open vSwitch driver supports all of those as well as the gre network type.

The L2 population driver is used to limit the amount of broadcast traffic that is forwarded across the overlay network fabric. Under normal circumstances, unknown unicast, multicast, and broadcast traffic floods out all tunnels to other compute nodes. This behavior can have a negative impact on the overlay network fabric, especially as the number of hosts in the cloud scales out. As an authority on what instances and other network resources exist in the cloud, Neutron can prepopulate forwarding databases on all hosts to avoid a costly learning operation. When ARP proxy is used, Neutron prepopulates the ARP table on all hosts in a similar manner to prevent ARP traffic from being broadcast across the overlay fabric.

ML2 architecture

The following diagram demonstrates at a high level how the Neutron API service interacts with the various plugins and agents responsible for constructing the virtual and physical network:

Figure 3.1

The preceding diagram demonstrates the interaction between the Neutron API, Neutron plugins and drivers, and services such as the L2 and L3 agents. For more information on the Neutron ML2 plugin architecture, refer to the OpenStack Neutron Modular Layer 2 Plugin Deep Dive video from the 2013 OpenStack Summit in Hong Kong, available at https://www.youtube.com/watch?v=whmcQ-vHams.

Third-party support

Third-party vendors such as PLUMGrid and OpenContrail have implemented support for their respective SDN technologies by developing their own monolithic or ML2 plugins that implement the Neutron API and extended network services. Others, including Cisco, Arista, Brocade, Radware, F5, VMWare, and more, have created plugins that allow Neutron to interface with OpenFlow controllers, load balancers, switches, and other network hardware.
For a look at some of the commands related to these plugins, refer to Appendix, Additional Neutron Commands. The configuration and use of these plugins is outside the scope of this article. For more information on the available plugins for Neutron, visit http://docs.openstack.org/admin-guide-cloud/content/section_plugin-arch.html.

Network namespaces

OpenStack was designed with multitenancy in mind and provides users with the ability to create and manage their own compute and network resources. Neutron supports each tenant having multiple private networks, routers, firewalls, load balancers, and other networking resources. It is able to isolate many of those objects through the use of network namespaces.

A network namespace is defined as a logical copy of the network stack, with its own routes, firewall rules, and network interface devices. When using the open source reference plugins and drivers, every network, router, and load balancer that is created by a user is represented by a network namespace. When network namespaces are enabled, Neutron is able to provide isolated DHCP and routing services to each network. These services allow users to create networks that overlap with networks of other users in other projects, and even with other networks in the same project.

The following naming convention for network namespaces should be observed:

- DHCP namespace: qdhcp-<network UUID>
- Router namespace: qrouter-<router UUID>
- Load Balancer namespace: qlbaas-<load balancer UUID>

A qdhcp namespace contains a DHCP service that provides IP addresses to instances using the DHCP protocol. In a reference implementation, dnsmasq is the process that services DHCP requests. The qdhcp namespace has an interface plugged into the virtual switch and is able to communicate with instances and other devices in the same network or subnet. A qdhcp namespace is created for every network where the associated subnet(s) have DHCP enabled.

A qrouter namespace represents a virtual router and is responsible for routing traffic to and from instances in the subnets it is connected to. Like the qdhcp namespace, the qrouter namespace is connected to one or more virtual switches, depending on the configuration.

A qlbaas namespace represents a virtual load balancer and may run a service such as HAProxy that load balances traffic to instances. The qlbaas namespace is connected to a virtual switch and can communicate with instances and other devices in the same network or subnet.

The leading q in the name of the network namespaces stands for Quantum, the original name for the OpenStack Networking service.

Network namespaces of the types mentioned earlier will only be seen on nodes running the Neutron DHCP, L3, and LBaaS agents, respectively. These services are typically configured only on controllers or dedicated network nodes.

The ip netns list command can be used to list available namespaces, and commands can be executed within a namespace using the following syntax:

ip netns exec NAMESPACE_NAME <command>

Commands that can be executed in the namespace include ip, route, iptables, and more. The output of these commands corresponds to data specific to the namespace they are executed in. For more information on network namespaces, see the man page for ip netns at http://man7.org/linux/man-pages/man8/ip-netns.8.html.

Installing and configuring Neutron services

In this installation, the various services that make up OpenStack Networking will be installed on the controller node rather than a dedicated networking node.
The compute nodes will run L2 agents that interface with the controller node and provide virtual switch connections to instances.

Remember that the configuration settings recommended here and online at docs.openstack.org may not be appropriate for production systems.

To install the Neutron API server, the DHCP and metadata agents, and the ML2 plugin on the controller, issue the following command:

# apt-get install neutron-server neutron-dhcp-agent neutron-metadata-agent neutron-plugin-ml2 neutron-common python-neutronclient

On the compute nodes, only the ML2 plugin is required:

# apt-get install neutron-plugin-ml2

Creating the Neutron database

Using the mysql client, create the Neutron database and associated user. When prompted for the root password, use openstack:

# mysql -u root -p

Enter the following SQL statements at the MariaDB [(none)] > prompt:

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
quit;

Update the [database] section of the Neutron configuration file at /etc/neutron/neutron.conf on all nodes to use the proper MySQL database connection string based on the preceding values rather than the default value:

[database]
connection = mysql://neutron:neutron@controller01/neutron

Configuring the Neutron user, role, and endpoint in Keystone

Neutron requires that you create a user, role, and endpoint in Keystone in order to function properly. When executed from the controller node, the following commands will create a user called neutron in Keystone, associate the admin role with the neutron user, and add the neutron user to the service project:

# openstack user create neutron --password neutron
# openstack role add --project service --user neutron admin

Create a service in Keystone that describes the OpenStack Networking service by executing the following command on the controller node:

# openstack service create --name neutron --description "OpenStack Networking" network

The service create command will result in the following output:

Figure 3.2

To create the endpoint, use the following openstack endpoint create command:

# openstack endpoint create --publicurl http://controller01:9696 --adminurl http://controller01:9696 --internalurl http://controller01:9696 --region RegionOne network

The resulting endpoint is as follows:

Figure 3.3

Enabling packet forwarding

Before the nodes can properly forward or route traffic for virtual machine instances, there are three kernel parameters that must be configured on all nodes:

- net.ipv4.ip_forward
- net.ipv4.conf.all.rp_filter
- net.ipv4.conf.default.rp_filter

The net.ipv4.ip_forward kernel parameter allows the nodes to forward traffic from the instances to the network. The default value is 0 and should be set to 1 to enable IP forwarding. Use the following command on all nodes to implement this change:

# sysctl -w "net.ipv4.ip_forward=1"

The net.ipv4.conf.default.rp_filter and net.ipv4.conf.all.rp_filter kernel parameters are related to reverse path filtering, a mechanism intended to prevent certain types of denial-of-service attacks. When enabled, the Linux kernel will examine every packet to ensure that the source address of the packet is routable back through the interface on which it arrived. Without this validation, a router can be used to forward malicious packets from a sender who has spoofed the source address so that the target machine cannot respond properly.
In OpenStack, anti-spoofing rules are implemented by Neutron on each compute node within iptables. Therefore, the preferred configuration for these two rp_filter values is to disable them by setting them to 0. Use the following sysctl commands on all nodes to implement this change:

# sysctl -w "net.ipv4.conf.default.rp_filter=0"
# sysctl -w "net.ipv4.conf.all.rp_filter=0"

Using sysctl -w makes the changes take effect immediately. However, the changes are not persistent across reboots. To make the changes persistent, edit the /etc/sysctl.conf file on all hosts and add the following lines:

net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0

Load the changes into memory on all nodes with the following sysctl command:

# sysctl -p

Configuring Neutron to use Keystone

The Neutron configuration file found at /etc/neutron/neutron.conf has dozens of settings that can be modified to meet the needs of the OpenStack cloud administrator. A handful of these settings must be changed from their defaults as part of this installation.

To specify Keystone as the authentication method for Neutron, update the [DEFAULT] section of the Neutron configuration file on all hosts with the following setting:

[DEFAULT]
auth_strategy = keystone

Neutron must also be configured with the appropriate Keystone authentication settings. The username and password for the neutron user in Keystone were set earlier in this article. Update the [keystone_authtoken] section of the Neutron configuration file on all hosts with the following settings:

[keystone_authtoken]
auth_uri = http://controller01:5000
auth_url = http://controller01:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = neutron

Configuring Neutron to use a messaging service

Neutron communicates with various OpenStack services on the AMQP messaging bus. Update the [DEFAULT] and [oslo_messaging_rabbit] sections of the Neutron configuration file on all hosts to specify RabbitMQ as the messaging broker:

[DEFAULT]
rpc_backend = rabbit

The RabbitMQ authentication settings should match what was previously configured for the other OpenStack services:

[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = rabbit

Configuring Nova to utilize Neutron networking

Before Neutron can be utilized as the network manager for Nova Compute services, the appropriate configuration options must be set in the Nova configuration file located at /etc/nova/nova.conf on all hosts. Start by updating the following sections with information on the Neutron API class and URL:

[DEFAULT]
network_api_class = nova.network.neutronv2.api.API

[neutron]
url = http://controller01:9696

Then, update the [neutron] section with the proper Neutron credentials:

[neutron]
auth_strategy = keystone
admin_tenant_name = service
admin_username = neutron
admin_password = neutron
admin_auth_url = http://controller01:35357/v2.0

Nova uses the firewall_driver configuration option to determine how to implement firewalling. As the option is meant for use with the nova-network networking service, it should be set to nova.virt.firewall.NoopFirewallDriver to instruct Nova not to implement firewalling when Neutron is in use:

[DEFAULT]
firewall_driver = nova.virt.firewall.NoopFirewallDriver

The security_group_api configuration option specifies which API Nova should use when working with security groups.
For installations using Neutron instead of nova-network, this option should be set to neutron as follows:

[DEFAULT]
security_group_api = neutron

Nova requires additional configuration once a mechanism driver has been determined.

Configuring Neutron to notify Nova

Neutron must be configured to notify Nova of network topology changes. Update the [DEFAULT] and [nova] sections of the Neutron configuration file on the controller node, located at /etc/neutron/neutron.conf, with the following settings:

[DEFAULT]
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller01:8774/v2

[nova]
auth_url = http://controller01:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = nova

Summary

Neutron has seen major internal architectural improvements over the last few releases. These improvements have made developing and implementing network features easier for developers and operators, respectively. Neutron maintains the logical network architecture in its database, and network plugins and agents on each node are responsible for configuring virtual and physical network devices accordingly. With the introduction of the ML2 plugin, developers can spend less time implementing the core Neutron API functionality and more time developing value-added features.

Now that OpenStack Networking services have been installed across all nodes in the environment, the configuration of a layer 2 networking plugin is all that remains before instances can be created.

Resources for Article:

Further resources on this subject:

- Installing OpenStack Swift [article]
- Securing OpenStack Networking [article]
- The orchestration service for OpenStack [article]

Introduction to Couchbase

Packt
04 Nov 2015
20 min read
In this article by Henry Potsangbam, the author of the book Learning Couchbase, we will learn that Couchbase is a NoSQL, nonrelational database management system, which differs from traditional relational database management systems in many significant ways. It is designed for distributed data stores with very large-scale data storage requirements (terabytes and petabytes of data). These types of data stores might not require fixed schemas, avoid join operations, and typically scale horizontally.

The main feature of Couchbase is that it is schemaless: there is no fixed schema to store data, and there are no joins between data records or documents. It allows distributed storage and utilizes computing resources, such as CPU and RAM, across the nodes that are part of the Couchbase cluster.

Couchbase databases provide the following benefits:

- It provides a flexible data model. You don't need to worry about the schema. You can design your schema depending on the needs of your application domain and not on storage demands.
- It's scalable, and scaling can be done very easily. Since it's a distributed system, it can scale out horizontally without too many changes in the application. You can scale out with a few mouse clicks and rebalance it very easily.
- It provides high availability, since there are multiple servers and data is replicated across nodes.

(For more resources related to this topic, see here.)

The architecture of Couchbase

Couchbase clusters consist of multiple nodes. A cluster is a collection of one or more instances of the Couchbase server that are configured as a logical cluster. The following is a Couchbase server architecture diagram:

Couchbase Server Architecture

As mentioned earlier, while most cluster technologies work on master-slave relationships, Couchbase works on a peer-to-peer node mechanism. This means there is no difference between the nodes in the cluster; the functionality provided by each node is the same. Thus, there is no single point of failure. When one node fails, another node takes up its responsibility, thus providing high availability.

The data manager

Any operation performed on the Couchbase database system gets stored in memory, which acts as a caching layer. By default, every document gets stored in memory for each read, insert, update, and so on until the memory is full. It's a drop-in replacement for Memcache. However, in order to provide persistence of the records, there is a concept called the disk queue. This flushes records to the disk asynchronously, without impacting the client request. This functionality is provided automatically by the data manager, without any human intervention.

Cluster management

The cluster manager is responsible for node administration and node monitoring within a cluster. Every node within a Couchbase cluster includes the cluster manager component, data storage, and data manager. It manages data storage and retrieval. It contains the memory cache layer, disk persistence mechanism, and query engine. Couchbase clients use the cluster map provided by the cluster manager to find out which node holds the required data, and then communicate with the data manager on that node to perform database operations.

Buckets

In an RDBMS, we usually encapsulate all of the relevant data for a particular application in a database. Say, for example, we are developing an e-commerce application.
We usually create a database named, say, e-commerce, that will be used as the logical namespace to store records in tables, such as customer or shopping cart details. The equivalent in Couchbase terminology is called a bucket. So, whenever you want to store any document in a Couchbase cluster, you will create a bucket as a logical namespace as the first step. Precisely, a bucket is an independent virtual container that groups documents logically in a Couchbase cluster, which is equivalent to a database namespace in an RDBMS. It can be accessed by various clients in an application. You can also configure features such as security, replication, and so on per bucket.

We usually create one database and consolidate all related tables in that namespace in RDBMS development. Likewise, in Couchbase too, you will usually create one bucket per application and encapsulate all the documents in it. Now, let me explain this concept in detail, since it's the component that administrators and developers will be working with most of the time.

In fact, I used to wonder why it is named "bucket". Perhaps, as in the physical world, we can store anything in it, hence the name "bucket". In any database system, the main purpose is to store data, and the logical namespace for storing data is called a database. Likewise, in Couchbase, the namespace for storing data is called a bucket. So in brief, it's a data container that stores data related to applications, either in the RAM or on disk.

In fact, it helps you partition application data depending on an application's requirements. If you are hosting different types of applications in a cluster, say an e-commerce application and a data warehouse, you can partition them using buckets. You can create two buckets, one for the e-commerce application and another for the data warehouse. As a rule of thumb, you create one bucket for each application.

In an RDBMS, we store data in the form of rows in a table, which in turn is encapsulated by a database. In Couchbase, a bucket is the equivalent of a database, but there is no concept of tables in Couchbase. In Couchbase, all data or records, which are referred to as documents, are stored directly in a bucket. Basically, the lowest namespace for storing documents or data in Couchbase is a bucket.

Internally, Couchbase arranges to store documents in different storage locations for different buckets. Information such as runtime statistics is collected and reported by the Couchbase cluster, grouped by the bucket type. This separation enables you to flush out individual buckets. You can create a separate temporary bucket rather than a regular transaction bucket when you need temporary storage for ad hoc requirements, such as reporting, a temporary workspace for application programming, and so on, so that you can flush out the temporary bucket after use. The features or capabilities of a bucket depend on its type, which will be discussed subsequently.

Types of buckets

Couchbase provides two types of buckets, which are differentiated by their storage mechanism and capabilities. The two types are:

- Memcached
- Couchbase

Memcached

As the name suggests, buckets of the Memcached type store documents only in the RAM. This means that documents stored in a Memcached bucket are volatile in nature; hence, such buckets won't survive a system reboot. Documents stored in such buckets are accessible by direct address using the key-value pair mechanism. The bucket is distributed, which means that it is spread across the Couchbase cluster nodes.
Since it's volatile in nature, you need to be sure of your use cases before using such types of buckets. You can use this kind of bucket for data that is required only temporarily and benefits from better performance, since all of the data is stored in memory, but that doesn't require durability. Suppose you need to display a list of countries in your application; instead of always fetching it from disk storage, the best way is to fetch the data from disk once, populate it in the Memcached bucket, and use it in your application.

In the Memcached bucket, the maximum size allowed for a document is 1 MB. All of the data is stored in the RAM, and if the bucket is running out of memory, the oldest data will be discarded. We can't replicate a Memcached bucket. It's completely compatible with the open source Memcached distributed memory object caching system. If you want to know more about the Memcached technology, you can refer to http://memcached.org/.

Couchbase

The Couchbase bucket type gives persistence to documents. It is distributed across a cluster of nodes and can be configured for replication, which is not supported by the Memcached bucket type. It's highly available, since documents are replicated across nodes in a cluster. You can verify the bucket using the web Admin UI.

Understanding documents

By now, you must have understood the concept of buckets, how they work, their configuration, and so on. Let's now understand the items that get stored in them. So, what is a document? A document is a piece of information or data that gets stored in a bucket. It's the smallest item that can be stored in a bucket. As a developer, you will always be working on a bucket in terms of documents. Documents are similar to rows in an RDBMS table schema, but in NoSQL terminology they are referred to as documents. It's a way of thinking about and designing data objects: all information and data should get stored as a document, just as it would be represented in a physical document. NoSQL databases, including Couchbase, don't require a fixed schema to store documents or data in a particular bucket. These documents are represented in the form of JSON. For the time being, let's try to understand the document at a basic level. Let me show you how a document is represented in Couchbase for better clarity. You need to install the beer-sample bucket for this, which comes along with the Couchbase software installation. If you did not install it earlier, you can do it from the web console using the Settings button.

The document overview

Consider a document from the beer-sample bucket: it represents a brewery, and its document ID is 21st_amendment_brewery_cafe. Each document can have multiple properties/items along with their values. For example, name is a property and 21st Amendment Brewery Café is the value of the name property.

So, what is this document ID? The document ID is a unique identifier that is assigned to each document in a bucket. You need to assign a unique ID whenever a document gets stored in a bucket. It's just like the primary key of a table in an RDBMS.

Keys and metadata

As described earlier, a document key is a unique identifier for a document. The value of a document key can be any string. In addition to the key, documents usually have three more types of metadata, which are provided by the Couchbase server unless modified by an application developer. They are as follows:

- rev: This is an internal revision ID meant for internal use by Couchbase. It should not be used in the application.
- expiration: If you want your document to expire after a certain amount of time, you can set that value here. By default, it is 0, that is, the document never expires.
- flags: These are numerical values specific to the client library that are updated when the document is stored.

Document modeling

Being schemaless is a good feature for bringing agility to applications whose business processes change frequently, as demanded by their business environment. In this methodology, you don't need to be concerned about the structure of the data initially while designing the application. This means that, as a developer, you don't need to worry about the structure of a database schema, such as tables, or about splitting information into various tables; instead, you should focus on the application requirements and on satisfying business needs.

I still recollect various moments related to designing domain objects/tables that I went through when I was a developer, especially when I had just graduated from engineering college and was developing applications for a corporate company. Whenever I was part of a discussion about an application requirement, at the back of my mind I had some of these questions:

- How does a domain object get stored in the database?
- What will the table structures be?
- How will I retrieve the domain objects?
- Will it be difficult to use an ORM such as Hibernate, EJB, and so on?

My point here is that, instead of being mentally present in the discussion on requirement gathering and understanding the business requirements in detail, I spent more time mapping business entities to a table format. The reason was that if I did not raise the technical constraints at that time, it would be difficult to come back later with the technical challenges we could face in the data structure design. Earlier, whenever we talked about application design, we always thought about database design structures, such as converting objects into multiple tables using normalization forms (2NF/3NF), and spent a lot of time mapping database objects to application objects using various ORM tools, such as Hibernate, EJB, and so on.

In document modeling, we always think in terms of application requirements, that is, the data or information flow, while designing documents, not in terms of storage. We can simply start our application development using the business representation of an entity without much concern about the storage structures. Having covered the various advantages provided by a document-based system, we will discuss in this section how to design such documents for storage in any document-based database system, such as Couchbase. Then, we can effectively design domain objects for coherence with the application requirements.

Whenever we model a document's structure, we need to consider two main options: one is to store all information in one document, and the other is to break it down into multiple documents. You need to consider these and choose one keeping the application requirements in mind. So, an important factor is to evaluate whether the information contains unrelated data components that are independent and can be broken up into different documents, or whether all components represent a complete domain object that will be accessed together most of the time. If the data components of a piece of information are related and will usually be required together in the business logic, consider grouping them in a single logical container so that the application developer won't perceive them as separate objects or documents.
All of these factors depend on the nature of the application being developed and its use cases. Besides these, you need to think in terms of how the information is accessed, such as atomicity, a single unit of access, and so on. You can ask yourself a question such as, "Are we going to create or modify the information as a single unit or not?" We also need to consider concurrency: what will happen when the document is accessed by multiple clients at the same time, and so on. After looking at all these considerations that you need to keep in mind while designing a document, you have two options: one is to keep all of the information in a single document, and the other is to have a separate document for every different object type.

Couchbase SDK overview

We have also discussed some of the guidelines used for designing a document-based database system. What if we need to connect to and perform operations on the Couchbase cluster from an application? This can be achieved using the Couchbase client libraries, which are collectively known as the Couchbase Software Development Kit (SDK). The Couchbase SDK APIs are language dependent; however, the concepts remain the same and are applicable to all languages that are supported by the SDK. Let's now try to understand the Couchbase APIs as a concept without referring to any specific language, and then we will map these concepts to Java APIs in the Java SDK section.

Couchbase SDK clients are also known as smart clients since they understand the overall status of the cluster, that is, the clustermap, and keep the information about vBuckets and their server nodes updated. There are two types of Couchbase clients, as follows:

- Smart clients: Such clients can understand the health of the cluster and receive constant updates about the information of the cluster. Each smart client maintains a clustermap that can derive the cluster node where a document is stored using the document ID, for example, Java, .NET, and so on.
- Memcached-compatible: Such clients are used for applications that interact with the traditional memcached bucket, which is not aware of vBuckets. You need to install Moxi (a memcached proxy) on all clients that require access to the Couchbase memcached bucket; it acts as a proxy that converts the API calls into memcached-compatible calls.

Understanding the write operation in the Couchbase cluster

Let's understand how the write operation works in the Couchbase cluster. When a write command is issued using the set operation on the Couchbase cluster, the server responds as soon as the document is written to the memory of that particular node. How do clients know which node in the cluster will be responsible for storing the document? You might recall that every operation requires a document ID; using this document ID, the hash algorithm determines the vBucket to which it belongs. Then, this vBucket is used to determine the node that will store the document. All of the mapping information, vBucket to node, is stored in each of the Couchbase client SDKs, forming the clustermap.

Views

Whenever we want to extract fields from JSON documents without using the document ID, we use views. If you want to find a document or fetch information about a document using attributes or fields other than the document ID, a view is the way to go. Views are written in the form of MapReduce, which we discussed earlier; that is, they consist of a map and a reduce phase. Couchbase implements MapReduce using the JavaScript language.
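As a small sketch of the syntax discussed in the next few paragraphs, the following map function indexes the brewery documents of the beer-sample bucket (used earlier in this article) by name; the type and name attributes are the ones present in that sample data:

// A simple view map function: index breweries in the beer-sample bucket by name.
// doc is the document body; meta holds its metadata (ID, flags, and so on).
function (doc, meta) {
  // Only emit index entries for brewery documents
  if (doc.type == "brewery") {
    // Key: brewery name; value: null (the document can be fetched by meta.id later)
    emit(doc.name, null);
  }
}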
Documents in the bucket are passed through the View engine to produce an index. The View engine ensures that all documents in the bucket are passed through the map method for processing, and subsequently to the reduce function, to create indexes. When we write views, the View engine defines materialized views for JSON documents and then queries across the dataset in the bucket. Couchbase provides a view processor to process all of the documents with the map and reduce methods defined by the developer to create views. The views are maintained locally by each node for the documents stored on that particular node. Views are created only for documents that are stored on disk.

A view's life cycle

A view has its own life cycle. You need to define, build, and query it:

View life cycle

Initially, you define the MapReduce logic, and it is built on each node for each document that is stored locally. In the build phase, we usually emit those attributes that need to be part of the indexes. Views usually work on JSON documents only. If documents are not in the JSON format, or the attributes that we emit in the map function are not part of the document, then the document is ignored by the view engine during the generation of views. Finally, views are queried by clients to retrieve and find documents. After the completion of this cycle, you can still change the MapReduce definition; for that, you need to bring the view into development mode and modify it. Thus, you have the view cycle described above while developing a view.

A view has a predefined syntax; you can't change the method signature, and it follows a functional programming style. The map method accepts two parameters:

- doc: This represents the entire document
- meta: This represents the metadata of the document

Each map call returns some objects in the form of a key and a value. This is represented by the emit() method, which returns the key and value. However, the value will usually be null. Since we can retrieve a document using the key, it's better to use that instead of the value field of the emit() method.

Custom reduce functions

Why do we need custom reduce functions? Sometimes the built-in reduce functions don't meet our requirements, although they will suffice most of the time. Custom reduce functions allow you to create your own reduce function. In such a reduce function, the output of the map function goes to the corresponding reduce function group, as determined by the key of the map output and the group level parameter. Couchbase ensures that the output from the map will be grouped by key and supplied to reduce. It is then the developer's role to define the logic in reduce, that is, what to perform on the data, such as aggregation, addition, and so on.

To handle the incremental MapReduce functionality (that is, updating an existing view), each function must also be able to handle and consume its own output. In an incremental situation, the function must handle both new records and previously computed reductions. The input to the reduce function can be not only raw data from the map phase, but also the output of a previous reduce phase. This is called re-reduce and can be identified by the third argument of reduce(). When the re-reduce argument is false, both the key and value arguments are arrays, and each element of the value array matches the corresponding element of the key array.
For example, key[1] is the key of value[1]. The map-to-reduce function execution is shown as follows:

Map reduce execution in a view

N1QL overview

So far, you have learned how to fetch documents in two ways: using the document ID and using views. The third way of retrieving documents is by using N1QL, pronounced "nickel". Personally, I feel that it is a great move by Couchbase to provide an SQL-like syntax, since most engineers and IT professionals are quite familiar with SQL, which is usually part of their formal education. It gives them confidence and also makes it easier to use Couchbase in their applications. Moreover, it covers most database operational activities related to development. N1QL can be used to:

- Store documents, that is, the INSERT command
- Fetch documents, that is, the SELECT command

Prior to the advent of N1QL, developers needed to perform key-based operations, which were quite complex when it came to retrieving information using views and custom reduce functions. With the previously available options, developers needed to know the key before performing any operation on a document, which is not always the case. Before N1QL features were incorporated into Couchbase, you could not perform ad hoc queries on documents in a bucket until you created views on it. Moreover, sometimes we need to perform joins or searches in the bucket, which is not possible using the document ID and views. All of these drawbacks are addressed by N1QL. I would go as far as to say that N1QL is an evolution in Couchbase's history.

Understanding the N1QL syntax

Most N1QL queries will be in the following format:

SELECT [DISTINCT] <expression>
FROM <data source>
WHERE <expression>
GROUP BY <expression>
ORDER BY <expression>
LIMIT <number>
OFFSET <number>

The preceding statement is very generic; it shows the comprehensive options provided by N1QL in one syntax:

SELECT * FROM LearningCouchbase

This selects all the documents stored in the bucket LearningCouchbase. The output of the query is in the JSON document format; all documents returned by an N1QL query are provided as an array of values under the resultset attribute.

Summary

You learned how to design a document-based data schema and how to connect to Couchbase using connection pooling from a Java-based application. You also learned how to retrieve data using MapReduce-based views, how to use the SQL-like syntax, N1QL, to extract documents from a Couchbase bucket, and how to achieve high availability with XDCR. You can also perform a full-text search by integrating the Elasticsearch plugin.

Resources for Article:

Further resources on this subject:

- MapReduce functions [article]
- Putting your database at the heart of Azure solutions [article]
- Moving spatial data from one format to another [article]

Architecture of Backbone

Packt
04 Nov 2015
18 min read
In this article by Abiee Echamea, author of the book Mastering Backbone.js, you will see that one of the best things about Backbone is the freedom of building applications with the libraries of your choice, no batteries included. Backbone is not a framework but a library. Building applications with it can be challenging, as no structure is provided: the developer is responsible for code organization and for wiring the pieces of code across the application; it's a big responsibility. Bad decisions about code organization can lead to buggy and unmaintainable applications that nobody wants to see.

In this article, you will learn the following topics:

- Delegating the right responsibilities to Backbone objects
- Splitting the application into small and maintainable scripts

(For more resources related to this topic, see here.)

The big picture

We can split the application into two big logical parts. The first is an infrastructure part, or root application, which is responsible for providing common components and utilities to the whole system. It has handlers to show error messages, activate menu items, manage breadcrumbs, and so on. It also owns common views such as dialog layouts or a loading progress bar.

A root application is responsible for providing common components and utilities to the whole system.

A root application is the main entry point to the system. It bootstraps the common objects, sets the global configuration, instantiates routers, attaches general services to a global application, renders the main application layout at the body element, sets up third-party plugins, starts Backbone.history, and instantiates, renders, and initializes components such as a header or breadcrumb. However, the root application itself does nothing; it is just the infrastructure that provides services to the other parts, which we can call subapplications or modules.

Subapplications are small applications that run business-value code; this is where the real work happens. Subapplications are focused on a specific domain area, for example, invoices, mailboxes, or chats, and should be decoupled from the other applications. Each subapplication has its own router, entities, and views. To decouple subapplications from the root application, communication is made through a message bus implemented with Backbone.Events or the Backbone.Radio plugin, such that services are requested from the application by triggering events instead of calling methods on an object.

Subapplications are focused on a specific domain area and should be decoupled from the root application and other subapplications.

Figure 1.1 shows a component diagram of the application. As you can see, the root application depends on the routers of the subapplications because Backbone.history requires all the routers to be instantiated before its start method is called, and the root application does this. Once Backbone.history is started, the browser's URL is processed and a route handler in a subapplication is triggered; this is the entry point for subapplications. Additionally, a default route can be defined in the root application for any route that is not handled by the subapplications.

Figure 1.1: Logical organization of a Backbone application

When you build Backbone applications in this way, you know exactly which object has which responsibility, so debugging and improving the application is easier. Remember, divide and conquer. Also, by doing this, you make your code more testable, improving its robustness.
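As a minimal sketch of such a message bus, a global application object can simply extend Backbone.Events; the App object and the event names below mirror the ones used by the code later in this article:

// A global application object acting as a message bus.
// Backbone.Events supplies on/off/trigger/listenTo.
var App = _.extend({}, Backbone.Events);

// The root application listens for service requests...
App.on('loading:start', function() {
  // ...for example, show a global loading widget here
});
App.on('server:error', function(response) {
  console.error('Something went wrong:', response);
});

// ...and a subapplication requests a service by triggering an event,
// without holding a reference to the object that implements it.
App.trigger('loading:start');

The facade code shown later in this article uses exactly this pattern with the loading:start, loading:stop, and server:error events.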
Responsibilities of the Backbone objects

One of the biggest issues with the Backbone documentation is that it gives no clues about how to use its objects. Developers have to figure out the responsibilities of each object across the application, and unless you already have some experience working with Backbone, this is not an easy task. The next sections will describe the best uses for each Backbone object. In this way, you will have a clearer idea about the scope of responsibilities in Backbone, and this will be the starting point for designing our application architecture. Keep in mind that Backbone is a library of foundation objects, so you will need to bring your own objects and structure to make an awesome Backbone application.

Models

This is the place where general business logic lives. Specific business logic should be placed elsewhere. General business logic consists of the rules that are so general that they can be used in multiple use cases, while specific business logic is a use case itself. Let's imagine a shopping cart. A model can be an item in the cart. The logic behind this model can include calculating the total by multiplying the unit price by the quantity, or setting a new quantity. In this scenario, assume that the shop has a business rule that a customer can buy the same product only three times. This is a specific business rule because it is specific to this business; how many stores do you know with this rule? These business rules take place elsewhere and should be avoided in models.

Also, it's a good idea to validate the model data before sending requests to the server. Backbone helps us with the validate method for this, so it's reasonable to put validation logic here too. Models often synchronize the data with the server, so direct calls to servers, such as AJAX calls, should be encapsulated at the model level. Models are the most basic pieces of information and logic; keep this in mind.

Collections

Consider collections as data repositories, similar to a database. Collections are often used to fetch the data from the server and render its contents as lists or tables. It's not usual to see business logic here. Resource servers have different ways to deal with lists of resources. For instance, while some servers accept a skip parameter for pagination, others have a page parameter for the same purpose. Another case is responses: a server can respond with a plain array, while others prefer sending an object with a data, list, or some other key, under which an array of objects is placed. There is no standard way. Collections can deal with these issues, making server requests transparent for the rest of the application.

Views

Views have the responsibility of handling the Document Object Model (DOM). Views work closely with the template engines, rendering the templates and putting the results into the DOM. Views listen for low-level events using the jQuery API and transform them into domain ones. Views abstract the user interactions, transforming their actions into data structures for the application. For example, clicking on a save button in a form view will create a plain object with the information from the input fields and trigger a domain event such as save:contact with this object attached. Then a domain-specific object can apply domain logic to the data and show a result. Business logic in views should be avoided, but basic form validations are allowed, such as accepting only numbers; complex validations should be done on the model.
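To make these responsibilities concrete, here is a minimal sketch of a model, a collection, and a view; the cart item and contact form names are illustrative examples rather than code from the book:

// Model: general business logic and validation for a cart item.
var CartItem = Backbone.Model.extend({
  defaults: { unitPrice: 0, quantity: 1 },

  // General rule: the total is derived from price and quantity
  getTotal: function() {
    return this.get('unitPrice') * this.get('quantity');
  },

  // Backbone calls validate before save (and on set with {validate: true})
  validate: function(attrs) {
    if (attrs.quantity <= 0) {
      return 'quantity must be greater than zero';
    }
  }
});

// Collection: hides server-specific details such as a wrapped response.
var CartItems = Backbone.Collection.extend({
  model: CartItem,
  url: '/api/cart-items',

  // Unwrap responses shaped like {data: [...]} so the rest of the
  // application only ever sees a plain array of items
  parse: function(response) {
    return response.data || response;
  }
});

// View: translates a low-level DOM event into a domain event.
var ContactFormView = Backbone.View.extend({
  events: { 'click .save': 'saveClicked' },

  saveClicked: function(event) {
    event.preventDefault();
    // Collect the input values and publish a domain event; a mediator
    // object decides what to do with the data
    this.trigger('save:contact', {
      name: this.$('#name').val(),
      phone: this.$('#phone').val()
    });
  }
});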
Routers

Routers have a simple responsibility: listening for URL changes in the browser and transforming them into a call to a handler. A router knows which handler to call for a given URL, and it also decodes the URL parameters and passes them to the handlers. The root application bootstraps the infrastructure, but routers decide which subapplication will be executed. In this way, routers are a kind of entry point.

Domain objects

It is possible to develop Backbone applications using only the Backbone objects described in the previous section, but for a medium-to-large application, that's not sufficient. We need to introduce a new kind of object with well-delimited responsibilities that uses and coordinates the Backbone foundation objects.

Subapplication facade

This object is the public interface of a subapplication. Any interaction with the subapplication should be done through its methods; direct calls to internal objects of the subapplication are discouraged. Typically, methods on this controller are called from the router, but they can be called from anywhere. The main responsibility of this object is simplifying the subapplication internals, so its work is to fetch the data from the server through models or collections, and in case an error occurs during the process, it has to show an error message to the user. Once the data is loaded in a model or collection, it creates a subapplication controller that knows which views should be rendered and has the handlers to deal with their events.

The subapplication facade will transform the URL request into a Backbone data object, show the right error message if needed, create a subapplication controller, and delegate control to it.

The subapplication controller or mediator

This object acts as an air traffic controller for the views, models, and collections. Given a Backbone data object, it will instantiate and render the appropriate views and then coordinate them. However, the coordination task is not easy in complex layouts. For loose coupling reasons, a view cannot call the methods or events of other views directly. Instead, a view triggers an event, and the controller handles the event and orchestrates the views' behavior, if necessary. Note how the views are isolated, handling just their own portion of the DOM and triggering events when they need to communicate something. Business logic for simple use cases can be implemented here, but for more complex interactions, another strategy is needed. This object implements the mediator pattern, allowing the other basic objects, such as views and models, to stay simple and loosely coupled.

The logic workflow

The application starts by bootstrapping common components; it then initializes all the routers available for the subapplications and starts Backbone.history (see Figure 1.2). After initialization, the URL in the browser will trigger a route for a subapplication; the route handler then instantiates a subapplication facade object and calls the method that knows how to handle the request. The facade will create a Backbone data object, such as a collection, and fetch the data from the server by calling its fetch method. If an error occurs while fetching the data, the subapplication facade will ask the root application to show the error, for example, a 500 Internal Server Error.
Figure 1.2: Abstract architecture for subapplications

Once the data is in a model or collection, the subapplication facade will instantiate the subapplication object that knows the business rules for the use case and pass the model or collection to it. Then, it renders one or more views with the information from the model or collection and places the results in the DOM. The views will listen for DOM events, for example, click, and transform them into higher-level events to be consumed by the application object. The subapplication object listens for events on models and views and coordinates them when an event is triggered. When the business rules are not too complex, they can be implemented on this application object, such as deleting a model. Models and views can be kept in sync with the Backbone events or by using a library for bindings, such as Backbone.Stickit.

In the next section, we will describe this process step by step with code examples for a better understanding of the concepts explained.

Route handling

The entry point for a subapplication is given by its routes, which ideally share the same namespace. For instance, a contacts subapplication can have these routes:

- contacts: Lists all the available contacts
- contacts/page/:page: Paginates the contacts collection
- contacts/new: Shows a form to create a new contact
- contacts/view/:id: Shows a contact given its ID
- contacts/edit/:id: Shows a form to edit a contact

Note how all the routes start with the /contacts prefix. It's good practice to use the same prefix for all the subapplication routes. In this way, the user will know where he/she is in the application, and you will have a clean separation of responsibilities. Use the same prefix for all URLs in one subapplication; avoid mixing in routes from the other subapplications.

When the user points the browser to one of these routes, a route handler is triggered. The handler function parses the URL request and delegates the request to the subapplication object, as follows:

var ContactsRouter = Backbone.Router.extend({
  routes: {
    "contacts": "showContactList",
    "contacts/page/:page": "showContactList",
    "contacts/new": "createContact",
    "contacts/view/:id": "showContact",
    "contacts/edit/:id": "editContact"
  },

  showContactList: function(page) {
    page = page || 1;
    page = page > 0 ? page : 1;
    var region = new Region({el: '#main'});
    var app = new ContactsApp({region: region});
    app.showContactList(page);
  },

  createContact: function() {
    var region = new Region({el: '#main'});
    var app = new ContactsApp({region: region});
    app.showNewContactForm();
  },

  showContact: function(contactId) {
    var region = new Region({el: '#main'});
    var app = new ContactsApp({region: region});
    app.showContactById(contactId);
  },

  editContact: function(contactId) {
    var region = new Region({el: '#main'});
    var app = new ContactsApp({region: region});
    app.showContactEditorById(contactId);
  }
});

The validation of the URL parameters should be done in the router, as shown in the showContactList method. Once the validation is done, ContactsRouter instantiates an application object, ContactsApp, which is a facade for the Contacts subapplication; finally, ContactsRouter calls an API method to handle the user request. The router doesn't know anything about business logic; it just knows how to decode the URL requests and which object to call in order to handle the request. Here, the region object points to an existing DOM node and is passed to the application to tell it where it should be rendered.
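The Region object used by these handlers is not one of Backbone's built-in objects; it is a small helper that renders a view into a fixed DOM node and closes the previous view. A minimal, hypothetical version might look like this (the real implementation may differ):

// A minimal region: owns a DOM node and swaps views in and out of it.
var Region = function(options) {
  this.el = options.el;
};

Region.prototype.show = function(view) {
  // Close the previously rendered view, if any, to avoid leaked listeners
  if (this.currentView && this.currentView.remove) {
    this.currentView.remove();
  }
  this.currentView = view;
  view.render();
  // Place the rendered view inside the region's element
  Backbone.$(this.el).html(view.el);
};

A fuller implementation would also handle cleanup of nested views, but this is enough to follow the examples that call region.show() below.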
The subapplication facade

A subapplication is composed of smaller pieces that handle specific use cases. In the case of the contacts app, a use case can be seeing a contact, creating a new contact, or editing a contact. The implementation of these use cases is separated into different objects that handle the views, events, and business logic for a specific use case. The facade basically fetches the data from the server, handles the connection errors, and creates the objects needed for the use case, as shown here:

function ContactsApp(options) {
  this.region = options.region;

  this.showContactList = function(page) {
    App.trigger("loading:start");
    new ContactCollection().fetch({
      success: _.bind(function(collection, response, options) {
        this._showList(collection);
        App.trigger("loading:stop");
      }, this),
      error: function(collection, response, options) {
        App.trigger("loading:stop");
        App.trigger("server:error", response);
      }
    });
  };

  this._showList = function(contacts) {
    var contactList = new ContactList({region: this.region});
    contactList.showList(contacts);
  };

  this.showNewContactForm = function() {
    this._showEditor(new Contact());
  };

  this.showContactEditorById = function(contactId) {
    App.trigger("loading:start");
    new Contact({id: contactId}).fetch({
      success: _.bind(function(model, response, options) {
        this._showEditor(model);
        App.trigger("loading:stop");
      }, this),
      error: function(model, response, options) {
        App.trigger("loading:stop");
        App.trigger("server:error", response);
      }
    });
  };

  this._showEditor = function(contact) {
    var contactEditor = new ContactEditor({region: this.region});
    contactEditor.showEditor(contact);
  };

  this.showContactById = function(contactId) {
    App.trigger("loading:start");
    new Contact({id: contactId}).fetch({
      success: _.bind(function(model, response, options) {
        this._showViewer(model);
        App.trigger("loading:stop");
      }, this),
      error: function(model, response, options) {
        App.trigger("loading:stop");
        App.trigger("server:error", response);
      }
    });
  };

  this._showViewer = function(contact) {
    var contactViewer = new ContactViewer({region: this.region});
    contactViewer.showContact(contact);
  };
}

The simplest handler is showNewContactForm, which is called when the user wants to create a new contact. It creates a new Contact object and passes it to the _showEditor method, which renders an editor for a blank contact. The handler doesn't need to know how to do this because the ContactEditor application will do the job. The other handlers follow the same pattern: they trigger an event so that the root application shows a loading widget to the user while the data is fetched from the server. Once the server responds successfully, they call another method to handle the result. If an error occurs during the operation, they trigger an event so that the root application shows a friendly error to the user. Handlers receive an object and create an application object that renders a set of views and handles the user interactions. The created object responds to the actions of the user; for example, imagine the object handling a form to save a contact. When the user clicks on the save button, it handles the save process, perhaps shows a message such as "Are you sure you want to save the changes?", and takes the right action.
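The snippets above also rely on a global App object (App.trigger, and later App.router) that is not defined in this excerpt. The following is a minimal sketch of what such an object could look like, assuming Backbone.Events is used as the event bus; the event names are taken from the code above, while the handler bodies are placeholders:

// Minimal App sketch (assumption): a plain object extended with Backbone.Events
// so it can act as an application-wide event bus, plus a reference to the router.
var App = _.extend({}, Backbone.Events);
App.router = new ContactsRouter();

// Placeholder handlers for the events triggered by the subapplication facade.
App.on("loading:start", function() {
  // show a global loading widget (implementation left to the root application)
});
App.on("loading:stop", function() {
  // hide the loading widget
});
App.on("server:error", function(response) {
  // show a friendly error message based on the server response
});

Backbone.history.start();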
The subapplication mediator

The responsibility of the subapplication mediator object is to render the required layout and views to be shown to the user. It knows which views need to be rendered and in which order, so it instantiates the views with their models, if needed, and puts the results in the DOM. After rendering the necessary views, it listens for user interactions in the form of Backbone events triggered from the views; methods on the object handle the interactions as described in the use cases. The mediator pattern is applied to this object to coordinate efforts between the views. For example, imagine that we have a form with contact data. As the user types into the edit form, another view renders a preview business card for the contact; in this case, the form view triggers changes to the application object, and the application object tells the business card view to use the new set of data each time. As you can see, the views are decoupled, and this is the objective of the application object. The following snippet shows the application that displays a list of contacts. It creates a ContactListView view, which knows how to render a collection of contacts, and passes the contacts collection to be rendered:

var ContactList = function(options) {
  _.extend(this, Backbone.Events);
  this.region = options.region;

  this.showList = function(contacts) {
    var contactList = new ContactListView({
      collection: contacts
    });
    this.region.show(contactList);
    this.listenTo(contactList, "item:contact:delete", this._deleteContact);
  };

  this._deleteContact = function(contact) {
    if (confirm('Are you sure?')) {
      contact.collection.remove(contact);
    }
  };

  this.close = function() {
    this.stopListening();
  };
};

The ContactListView view is responsible for transforming this into DOM nodes and responding to collection events such as adding a new contact or removing one. Once the view is initialized, it is rendered on the region specified previously. When the view is finally in the DOM, the application listens for the "item:contact:delete" event, which is triggered when the user clicks on the delete button rendered for each contact. To see a contact, a ContactViewer application is responsible for managing the use case, which is as follows:

var ContactViewer = function(options) {
  _.extend(this, Backbone.Events);
  this.region = options.region;

  this.showContact = function(contact) {
    var contactView = new ContactView({model: contact});
    this.region.show(contactView);
    this.listenTo(contactView, "contact:delete", this._deleteContact);
  };

  this._deleteContact = function(contact) {
    if (confirm("Are you sure?")) {
      contact.destroy({
        success: function() {
          App.router.navigate("/contacts", true);
        },
        error: function() {
          alert("Something went wrong");
        }
      });
    }
  };
};

The situation is the same as with the contact list: the application creates a view that manages the DOM interactions, renders it on the specified region, and listens for events. From the details view of a contact, users can delete it. As with the list, a _deleteContact method handles the event, but the difference is that when a contact is deleted, the application is redirected to the list of contacts, which is the expected behavior. You can see how the handler uses the root application infrastructure by calling the navigate method of the global App.router. The handlers for the forms that create or edit contacts are very similar, so the same ContactEditor can be used for both cases.
This object shows a form to the user and waits for the save action, as shown in the following code:

var ContactEditor = function(options) {
  _.extend(this, Backbone.Events);
  this.region = options.region;

  this.showEditor = function(contact) {
    var contactForm = new ContactForm({model: contact});
    this.region.show(contactForm);
    this.listenTo(contactForm, "contact:save", this._saveContact);
  };

  this._saveContact = function(contact) {
    contact.save({
      success: function() {
        alert("Successfully saved");
        App.router.navigate("/contacts");
      },
      error: function() {
        alert("Something went wrong");
      }
    });
  };
};

In this case, the model's data can be modified by the user. In simple layouts, the views and the model can be kept in sync with model-view data bindings, so no extra code is needed; here, we will assume that the model is updated as the user enters information in the form, for example, with Backbone.Stickit. When the save button is clicked, a "contact:save" event is triggered and the application responds with the _saveContact method. Note how the method issues a save call on the standard Backbone model and waits for the result. On a successful request, a message is displayed and the user is redirected to the contact list. On an error, a message tells the user that the application found a problem while saving the contact. The implementation details of the views are outside the scope of this article, but you can infer the work done by this object from the snippets in this section.

Summary

In this article, we started by describing in a general way how a Backbone application works. It has two main parts: a root application and subapplications. A root application provides common infrastructure to the smaller, focused applications that we call subapplications. Subapplications are loosely coupled with the other subapplications and should own their resources, such as views, controllers, routers, and so on. A subapplication manages a small part of the system and no more. Communication between the subapplications and the root application is made through an event-driven bus, such as Backbone.Events or Backbone.Radio. The user interacts with the application through the views that a subapplication renders. A subapplication mediator orchestrates the interaction between the views, models, and collections; it also handles business logic such as saving or deleting a resource.

Resources for Article:

Further resources on this subject:

Object-Oriented JavaScript with Backbone Classes [article]
Building a Simple Blog [article]
Marionette View Types and Their Use [article]

Synchronizing Tests
Packt
04 Nov 2015
9 min read
In this article by Unmesh Gundecha, author of Selenium Testing Tools Cookbook Second Edition, you will cover the following topics:

Synchronizing a test with an implicit wait
Synchronizing a test with an explicit wait
Synchronizing a test with custom-expected conditions

While building automated scripts for a complex web application using Selenium WebDriver, we need to ensure that the test flow is maintained for reliable test automation. When tests are run, the application may not always respond with the same speed. For example, it might take a few seconds for a progress bar to reach 100 percent, a status message to appear, a button to become enabled, or a window or pop-up message to open. You can handle these anticipated timing problems by synchronizing your test to ensure that Selenium WebDriver waits until your application is ready before performing the next step. There are several options that you can use to synchronize your test. In this article, we will see various features of Selenium WebDriver to implement synchronization in tests. (For more resources related to this topic, see here.)

Synchronizing a test with an implicit wait

The Selenium WebDriver provides an implicit wait for synchronizing tests. When an implicit wait is implemented in tests, if WebDriver cannot find an element in the Document Object Model (DOM), it will wait for a defined amount of time for the element to appear in the DOM. Once the specified wait time is over, it will try searching for the element once again. If the element is not found in the specified time, it will throw a NoSuchElementException. In other terms, an implicit wait polls the DOM for a certain amount of time when trying to find an element or elements if they are not immediately available. The default setting is 0. Once set, the implicit wait applies for the life of the WebDriver object's instance. In this recipe, we will briefly explore the use of an implicit wait; however, it is recommended to avoid or minimize the use of an implicit wait.

How to do it...

Let's create a test on a demo AJAX-enabled application as follows:

@Test
public void testWithImplicitWait() {
    // Go to the demo AJAX application
    WebDriver driver = new FirefoxDriver();
    driver.get("http://dl.dropbox.com/u/55228056/AjaxDemo.html");

    // Set the implicit wait timeout to 10 seconds
    driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);

    try {
        // Get the link for Page 4 and click on it
        WebElement page4button = driver.findElement(By.linkText("Page 4"));
        page4button.click();

        // Get an element with id page4 and verify its text
        WebElement message = driver.findElement(By.id("page4"));
        assertTrue(message.getText().contains("Nunc nibh tortor"));
    } catch (NoSuchElementException e) {
        fail("Element not found!!");
        e.printStackTrace();
    } finally {
        driver.quit();
    }
}

How it works...

The Selenium WebDriver provides the Timeouts interface for configuring the implicit wait. The Timeouts interface provides an implicitlyWait() method, which accepts the time the driver should wait when searching for an element. In this example, the test will wait up to 10 seconds for an element to appear in the DOM:

driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);

Until the end of the test, or until the implicit wait is set back to 0, every time an element is searched for using the findElement() method, the test will wait up to 10 seconds for the element to appear.
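Setting the implicit wait back to 0 uses the same call; the following one-line sketch (not from the book, reusing the driver object and imports from the test above) shows how a test might disable it before a section that relies on explicit waits:

// Disable the implicit wait for the rest of the test
driver.manage().timeouts().implicitlyWait(0, TimeUnit.SECONDS);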
Using an implicit wait may slow down tests when an application responds normally, as it will wait for each element appearing in the DOM and increase the overall execution time. Minimize or avoid using an implicit wait. Use an explicit wait, which provides more control when compared with an implicit wait.

See also

Synchronizing a test with an explicit wait
Synchronizing a test with custom-expected conditions

Synchronizing a test with an explicit wait

The Selenium WebDriver provides an explicit wait for synchronizing tests, which is a better way to wait than an implicit wait. Unlike an implicit wait, you can use predefined conditions or custom conditions to wait on before proceeding further in the code. The Selenium WebDriver provides the WebDriverWait and ExpectedConditions classes for implementing an explicit wait. The ExpectedConditions class provides a set of predefined conditions to wait on before proceeding further in the code. The following list shows some common conditions supported by the ExpectedConditions class that we frequently come across when automating web browsers:

An element is visible and enabled: elementToBeClickable(By locator)
An element is selected: elementToBeSelected(WebElement element)
Presence of an element: presenceOfElementLocated(By locator)
Specific text present in an element: textToBePresentInElement(By locator, java.lang.String text)
Element value: textToBePresentInElementValue(By locator, java.lang.String text)
Title: titleContains(java.lang.String title)

For more conditions, visit http://seleniumhq.github.io/selenium/docs/api/java/index.html. In this recipe, we will explore some of these conditions with the WebDriverWait class.

How to do it...

Let's implement a test that uses the ExpectedConditions.titleContains() method to implement an explicit wait as follows:

@Test
public void testExplicitWaitTitleContains() {
    // Go to the Google home page
    WebDriver driver = new FirefoxDriver();
    driver.get("http://www.google.com");

    // Enter a term to search and submit
    WebElement query = driver.findElement(By.name("q"));
    query.sendKeys("selenium");
    query.click();

    // Create a wait using WebDriverWait.
    // This will wait up to 10 seconds for the title to be updated with the search term.
    // If the title is updated within the specified time limit, the test will move to the
    // next step instead of waiting for the full 10 seconds.
    WebDriverWait wait = new WebDriverWait(driver, 10);
    wait.until(ExpectedConditions.titleContains("selenium"));

    // Verify the title
    assertTrue(driver.getTitle().toLowerCase().startsWith("selenium"));

    driver.quit();
}

How it works...

We can define an explicit wait for a set of common conditions using the ExpectedConditions class. First, we need to create an instance of the WebDriverWait class by passing the driver instance and the timeout for the wait as follows:

WebDriverWait wait = new WebDriverWait(driver, 10);

Next, an ExpectedCondition is passed to the wait.until() method as follows:

wait.until(ExpectedConditions.titleContains("selenium"));

The WebDriverWait object will evaluate this condition every 500 milliseconds until it returns successfully.

See also

Synchronizing a test with an implicit wait
Synchronizing a test with custom-expected conditions
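Before moving on to custom conditions, the following short sketch (not from the book) shows two of the predefined conditions from the preceding list used together on the demo AJAX page from the implicit wait recipe; the locators are the same ones used there:

WebDriver driver = new FirefoxDriver();
driver.get("http://dl.dropbox.com/u/55228056/AjaxDemo.html");
WebDriverWait wait = new WebDriverWait(driver, 10);

// Wait until the "Page 4" link is visible and enabled, then click it
WebElement page4button = wait.until(
        ExpectedConditions.elementToBeClickable(By.linkText("Page 4")));
page4button.click();

// Wait for the element with id page4 to be present before reading its text
WebElement message = wait.until(
        ExpectedConditions.presenceOfElementLocated(By.id("page4")));
assertTrue(message.getText().contains("Nunc nibh tortor"));

driver.quit();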
Synchronizing a test with custom-expected conditions

With the explicit wait mechanism, we can also build custom-expected conditions along with the common conditions provided by the ExpectedConditions class. This comes in handy when a wait cannot be handled with a common condition supported by the ExpectedConditions class. In this recipe, we will explore how to create a custom condition.

How to do it...

We will create a test that waits until an element appears on the page using the ExpectedCondition class as follows:

@Test
public void testExplicitWait() {
    WebDriver driver = new FirefoxDriver();
    driver.get("http://dl.dropbox.com/u/55228056/AjaxDemo.html");

    try {
        WebElement page4button = driver.findElement(By.linkText("Page 4"));
        page4button.click();

        WebElement message = new WebDriverWait(driver, 5)
            .until(new ExpectedCondition<WebElement>(){
                public WebElement apply(WebDriver d) {
                    return d.findElement(By.id("page4"));
                }});

        assertTrue(message.getText().contains("Nunc nibh tortor"));
    } catch (NoSuchElementException e) {
        fail("Element not found!!");
        e.printStackTrace();
    } finally {
        driver.quit();
    }
}

How it works...

The Selenium WebDriver provides the ability to implement the custom ExpectedCondition interface along with the WebDriverWait class for creating a custom wait condition, as needed by a test. In this example, we created a custom condition that returns a WebElement object once the inner findElement() method locates the element within a specified timeout, as follows:

WebElement message = new WebDriverWait(driver, 5)
    .until(new ExpectedCondition<WebElement>(){
        @Override
        public WebElement apply(WebDriver d) {
            return d.findElement(By.id("page4"));
        }});

There's more...

A custom wait can be created in various ways. In the following section, we will explore some common examples of implementing a custom wait.

Waiting for an element's attribute value update

Based on the events and actions performed, the value of an element's attribute might change at runtime. For example, a disabled textbox gets enabled based on the user's rights. A custom wait can be created on the attribute value of the element. In the following example, the ExpectedCondition waits for a Boolean return value, based on the attribute value of an element:

new WebDriverWait(driver, 10).until(new ExpectedCondition<Boolean>() {
    public Boolean apply(WebDriver d) {
        return d.findElement(By.id("userName")).getAttribute("readonly").contains("true");
    }});

Waiting for an element's visibility

Developers hide or display elements based on the sequence of actions, user rights, and so on. The specific element might exist in the DOM but be hidden from the user, and when the user performs a certain action, it appears on the page. A custom wait condition can be created based on the element's visibility as follows:

new WebDriverWait(driver, 10).until(new ExpectedCondition<Boolean>() {
    public Boolean apply(WebDriver d) {
        return d.findElement(By.id("page4")).isDisplayed();
    }});

Waiting for DOM events

The web application may be using a JavaScript framework such as jQuery for AJAX and content manipulation. For example, jQuery may be used to load a big JSON file from the server asynchronously on the page. While jQuery is reading and processing this file, a test can check its status using the active attribute.
A custom wait can be implemented by executing JavaScript code and checking the return value as follows:

new WebDriverWait(driver, 10).until(new ExpectedCondition<Boolean>() {
    public Boolean apply(WebDriver d) {
        JavascriptExecutor js = (JavascriptExecutor) d;
        return (Boolean) js.executeScript("return jQuery.active == 0");
    }});

See also

Synchronizing a test with an implicit wait
Synchronizing a test with an explicit wait

Summary

In this article, you learned how the Selenium WebDriver helps in maintaining reliable automated tests. You learned how to synchronize a test using the implicit and explicit wait methods, and you also saw how to synchronize a test with custom-expected conditions.

Resources for Article:

Further resources on this subject:

Javascript Execution With Selenium [article]
Learning Selenium Testing Tools With Python [article]
Cross-Browser Tests Using Selenium Webdriver [article]

Getting Started with Tableau Public
Packt
04 Nov 2015
12 min read
In this article by Ashley Ohmann and Matthew Floyd, the authors of Creating Data Stories with Tableau Public, we introduce Tableau Public and show how to get started with it. Making sense of data is a valued service in today's world. It may be a cliché, but it's true that we are drowning in data and yet we are thirsting for knowledge. The ability to make sense of data and the skill of using data to tell a compelling story is becoming one of the most valued capabilities in almost every field: business, journalism, retail, manufacturing, medicine, and public service. Tableau Public (for more information, visit www.tableaupublic.com), which is Tableau's free cloud-based data visualization client, is a powerfully transformative tool that you can use to create rich, interactive, and compelling data stories. It's a great platform if you wish to explore data through visualization. It enables your consumers to ask and answer questions that are interesting to them. This article is written for people who are new to Tableau Public and would like to learn how to create rich, interactive data visualizations from publicly available data sources that they can easily share with others. Once you publish visualizations and data to Tableau Public, they are accessible to everyone, and they can be viewed and downloaded. A typical Tableau Public data visualization contains public data sets such as sports, politics, public works, crime, census, socioeconomic metrics, and social media sentiment data (you can also create and use your own data). Many of these data sets are either readily available on the Internet or can be accessed via a public records request or search (if they are harder to find, they can be scraped from the Internet). You can now control who can download your visualizations and data sets, which is a feature that was previously available only to paid subscribers. Tableau Public has a current maximum data set size of 10 million rows and/or 10 GB of data. (For more resources related to this topic, see here.)

In this article, we will walk through an introduction to Tableau, which includes the following topics:

A discussion on how you can use Tableau Public to tell your data story
Examples of organizations that use Tableau Public
Downloading and installing the Tableau Public software
Logging in to Tableau Public
Creating your very own Tableau Public profile
Discovering the Tableau Public features and resources
Taking a look at the author profiles and galleries on the Tableau website to browse other authors' data visualizations (this is a great way to learn and gather ideas on how to best present our data)

A Tableau Public overview

Tableau Public allows everyone to tell their data story and create compelling and interactive data visualizations that encourage discovery and learning. Tableau Public comes at a great price: free! It allows you as a data storyteller to create and publish data visualizations without learning how to code or having special knowledge of web publishing. In fact, you can publish data sets of up to 10 million rows or 10 GB to Tableau Public in a single workbook. Tableau Public is a data discovery tool. It should not be confused with enterprise-grade business intelligence tools, such as Tableau Desktop and Tableau Server, QlikView, and Cognos Insight. Those tools integrate with corporate networks and security protocols as well as server-based data warehouses. Data visualization software is not a new thing. Businesses have used software to generate dashboards and reports for decades.
The twist comes with data democracy tools, such as Tableau Public. Journalists and bloggers who would like to augment their reporting of static text and graphics can use these data discovery tools, such as Tableau Public, to create riveting, rich data visualizations, which may comprise one or more charts, graphs, tables, and other objects that may be controlled by the readers to allow for discovery. The people who are active members of the Tableau Public community have a few primary traits in common, they are curious, generous with their knowledge and time, and enjoy conversations that relate data to the world around us. Tableau Public maintains a list of blogs of data visualization experts using Tableau software. In the following screenshot, Tableau Zen Masters, Anya A'hearn of Databrick and Allan Walker, used data on San Francisco bike sharing to show the financial benefits of the Bay Area Bike Share, a city-sponsored 30-minute bike sharing program, as well as a map of both the proposed expansion of the program and how far a person can actually ride a bike in half an hour. This dashboard is featured in the Tableau Public gallery because it relates data to users clearly and concisely. It presents a great public interest story (commuting more efficiently in a notoriously congested city) and then grabs the viewer's attention with maps of current and future offerings. The second dashboard within the analysis is significant as well. The authors described the Geographic Information Systems (GIS) tools that they used to create their innovative maps as well as the methodology that went into the final product so that the users who are new to the tool can learn how to create a similar functionality for their own purposes: Image republished under the terms of fair use, creators: Anya A'hearn and Allan Walker. Source: https://public.tableausoftware.com/views/30Minutes___BayAreaBikeShare/30Minutes___?:embed=y&:loadOrderID=0&:display_count=yes As humans, we relate our experiences to each other in stories, and data points are an important component of stories. They quantify phenomena and, when combined with human actions and emotions, can make them more memorable. When authors create public interest story elements with Tableau Public, readers can interact with the analyses, which creates a highly personal experience and translates into increased participation and decreased abandonment. It's not difficult to embed the Tableau Public visualizations into websites and blogs. It is as easy as copying and pasting JavaScript that Tableau Public renders for you automatically. Using Tableau Public increases accessibility to stories, too. You can view data stories on mobile devices with a web browser and then share it with friends on social media sites such as Twitter and Facebook using Tableau Public's sharing functionality. Stories can be told with the help of text as well as popular and tried-and-true visualization types such as maps, bar charts, lists, heat maps, line charts, and scatterplots. Maps are particularly easier to build in Tableau Public than most other data visualization offerings because Tableau has integrated geocoding (down to the city and postal code) directly into the application. Tableau Public has a built-in date hierarchy that makes it easy for users to drill through time dimensions just by clicking on a button. 
One of Tableau Software's taglines, Data to the People, is a reflection not only of the ability to distribute analysis sets to thousands of people in one go, but also of the enhanced abilities of nontechnical users to explore their own data easily and derive relevant insights for their own community without having to learn a slew of technical skills. Telling your story with Tableau Public Tableau was originally developed in the Stanford University Computer Science department, where a research project sponsored by the U.S. Department of Defense was launched to study how people can analyze data rapidly. This project merged two branches of computer science, understanding data relationships and computer graphics. This mash-up was discovered to be the best way for people to understand and sometimes digest complex data relationships rapidly and, in effect, to help readers consume data. This project eventually moved from the Stanford campus to the corporate world, and Tableau Software was born. The Tableau usage and adoption has since skyrocketed at the time of writing this book. Tableau is the fastest growing software company in the world and now, Tableau competes directly with the older software manufacturers for data visualization and discovery—Microsoft, IBM, SAS, Qlik, and Tibco, to name a few. Most of these are compared to each other by Gartner in its annual Magic Quadrant. For more information, visit http://www.gartner.com/technology/home.jsp. Tableau Software's flagship program, Tableau Desktop, is commercial software used by many organizations and corporations throughout the world. Tableau Public is the free version of Tableau's offering. It is typically used with nonconfidential data either from the public domain or that which we collected ourselves. This free public offering of Tableau Public is truly unique in the business intelligence and data discovery industry. There is no other software like it—powerful, free, and open to data story authors. There are a few terms in this article that might be new to you. You, as an author, will load data into a workbook, which will be saved by you in the Tableau Public cloud. A visualization is a single graph. It is typically on a worksheet. One or more visualizations are on a dashboard, which is where your users will interact with your data. One of the wonderful features about Tableau Public is that you can load data and visualize it on your own. Traditionally, this has been an activity that was undertaken with the help of programmers at work. With Tableau Public and newer blogging platforms, nonprogrammers can develop data visualization, publish it to the Tableau Public website, and then embed the data visualization on their own website. The basic steps that are required to create a Tableau Public visualization are as follows: Gather your data sources, usually in a spreadsheet or a .csv file. Prepare and format your data to make it usable in Tableau Public. Connect to the data and start building the data visualizations (charts, graphs, and many other objects). Save and publish your data visualization to the Tableau Public website. Embed your data visualization in your web page by using the code that Tableau Public provides. Tableau Public is used by some of the leading news organizations across the world, including The New York Times, The Guardian (UK), National Geographic (US), the Washington Post (US), the Boston Globe (US), La Informacion (Spain), and Época (Brazil). Now, we will discuss installing Tableau Public. 
Then, we will take a look at how we can find some of these visualizations out there in the wild so that we can learn from others and create our own original visualizations. Installing Tableau Public Let's look at the steps required for the installation of Tableau Public: To download Tableau Public, visit the Tableau Software website at http://public.tableau.com/s/. Enter your e-mail address and click on the Download the App button located at the center of the screen, as shown in following screenshot: The downloaded version of Tableau Public is free, and it is not a limited release or demo version. It is a fully functional version of Tableau Public. Once the download begins, a Thank You screen gives you an option of retrying the download if it does not begin automatically or starts downloading a different version. The version of Tableau Public that gets downloaded automatically is the 64-bit version for Windows. Users of Macs should download the appropriate version for their computers, and users with 32-bit Windows machines should download the 32-bit version. Check your Windows computer system type (32- or 64-bit) by navigating to Start then Computer and right-clicking on the Computer option. Select Properties, and view the System properties. 64-bit systems will be noted as such. 32-bit systems will either state that they are 32-bit ones, or not have any indication of being a 32- or 64-bit system. While the Tableau Public executable file downloads, you can scroll the Thank You page to the lower section to learn more about the new features of Tableau Public 9.0. The speed with which Tableau Public downloads depends on the download speed of your network, and the 109 MB file usually takes a few minutes to download. The TableauPublicDesktop-xbit.msi (where x=32 or 64, depending on which version you selected) is downloaded. Navigate to the .msi file in Windows Explorer or in the browser window and click on Open. Then, click on Run in the Open File - Security Warning dialog box that appears in the following screenshot. The Windows installer starts the Tableau installation process: Once you have opted to Run the application, the next screen prompts you to view the License Agreement and accept its terms: If you wish to read the terms of the license agreement, click on the View License Agreement… button. You can customize the installation if you'd like. Options include the directory in which the files are installed as well as the creation of a desktop icon and a Start Menu shortcut (for Windows machines). If you do not customize the installation, Tableau Public will be installed in the default directory on your computer, and the desktop icon and Start Menu shortcut will be created. Select the checkbox that indicates I have read and accept the terms of this License Agreement, and click on Install. If a User Account Control dialog box appears with the Do you want to allow the following program to install software on this computer? prompt, click on Yes: Tableau Public will be installed on your computer, with the status bar indicating the progress: When Tableau Public has been installed successfully, the home screen opens. Exploring Tableau Public The Tableau Public home screen has several features that allow you to do following operations: Connect to data Open your workbooks Discover the features of Tableau Public Tableau encourages new users to watch the video on this first welcome page. To do so, click on the button named Watch the Getting Started Video. 
You can start building your first Tableau Public workbook any time. Connecting to data You can connect to the following four different data source types in Tableau Public by clicking on the appropriate format name: Microsoft Excel files Text files with a variety of delimiters Microsoft Access files Odata files Summary In this article, we learned how Tableau Public is commonly used. We also learned how to download and install Tableau Public, explore Tableau Public's features and learn about the Tableau Desktop tool, and discover other authors' data visualizations using the Tableau Galleries and Recommended Authors and Profile Finder function on the Tableau website. Resources for Article: Further resources on this subject: Data Acquisition and Mapping [article] Interacting with Data for Dashboards [article] Moving from Foundational to Advanced Visualizations [article]

Setting Up the Citrix Components
Packt
03 Nov 2015
4 min read
In this article by Sunny Jha, the author of the book Mastering XenApp, we are going to implement the Citrix XenApp infrastructure components, which work together to deliver applications. The components we will be implementing are as follows:

Setting up Citrix License Server
Setting up Delivery Controller
Setting up Director
Setting up StoreFront
Setting up Studio

Once you complete this article, you will understand how to install the Citrix XenApp infrastructure components for the effective delivery of applications. (For more resources related to this topic, see here.)

Setting up the Citrix infrastructure components

Citrix reintroduced Citrix XenApp with version 7.5, which is based on the new FMA architecture that replaces IMA. In this article, we will be setting up the different Citrix components so that they can deliver applications. As this is a proof of concept, I will be setting up almost all the Citrix components on a single Microsoft Windows Server 2012 R2 machine. In a production environment, it is recommended that Citrix components such as License Server, Delivery Controller, and StoreFront be installed on separate servers to avoid a single point of failure and for better performance. The components that we will be setting up in this article are:

Delivery Controller: This Citrix component acts as the broker; its main function is to assign users to a server, based on the published application they select.
License Server: This assigns licenses to the Citrix components, as every Citrix product requires a license in order to work.
Studio: This acts as the control panel for Citrix XenApp 7.6 delivery. The administrator makes all the configuration changes inside Citrix Studio.
Director: This component is used for monitoring and troubleshooting, and it is a web-based application.
StoreFront: This is the frontend of the Citrix infrastructure through which users connect to their applications, either via Receiver or a web browser.

Installing the Citrix components

In order to start the installation, we need the Citrix XenApp 7.6 DVD or ISO image. You can always download it from the Citrix website; all you need is a MyCitrix account. Follow these steps:

Mount the disc/ISO you have downloaded. When you double-click on the mounted disc, it will bring up a screen where you have to choose between XenApp (Deliver applications) and XenDesktop (Deliver applications and desktops):
Once you have made the selection, it will show you the next option related to the product. Here, we need to select XenApp. Choose Delivery Controller from the options:
The next screen will show you the License Agreement. You can go through it, accept the terms, and click on Next:
As described earlier, this is a proof of concept. We will install all the components on a single server, but it is recommended to put each component on a different server for better performance. Select all the components and click on Next:
The next screen will show you the features that can be installed. As we have already installed SQL Server, we don't have to select SQL Express, but we will choose Install Windows Remote Assistance. Click on Next:
The next screen will show you the firewall ports that need to be allowed for communication; they can be adjusted by Citrix as well. Click on Next:
The next screen will show you the summary of your selection.
Here, you can review your selection and click on Install to install the components. After you click on Install, it will go through the installation procedure; once the installation is complete, click on Next. By following these steps, we completed the installation of the Citrix components: Delivery Controller, Studio, Director, and StoreFront. We also adjusted the firewall ports as per the Citrix XenApp requirements.

Summary

In this article, you learned about setting up the Citrix infrastructure components, and how to install the Citrix Delivery Controller, License Server, Citrix Studio, Citrix Director, and Citrix StoreFront.

Resources for Article:

Further resources on this subject:

Getting Started – Understanding Citrix XenDesktop and its Architecture [article]
High Availability, Protection, and Recovery using Microsoft Azure [article]
A Virtual Machine for a Virtual World [article]

RESTServices with Finagle and Finch
Packt
03 Nov 2015
9 min read
In this article by Jos Dirksen, the author of RESTful Web Services with Scala, we'll only be talking about Finch. Note, though, that most of the concepts provided by Finch are based on the underlying Finagle ideas. Finch just provides a very nice REST-based set of functions to make working with Finagle very easy and intuitive. (For more resources related to this topic, see here.)

Finagle and Finch are two different frameworks that work closely together. Finagle is an RPC framework, created by Twitter, which you can use to easily create different types of services. On the website (https://github.com/twitter/finagle), the team behind Finagle explains it like this:

Finagle is an extensible RPC system for the JVM, used to construct high-concurrency servers. Finagle implements uniform client and server APIs for several protocols, and is designed for high performance and concurrency. Most of Finagle's code is protocol agnostic, simplifying the implementation of new protocols.

So, while Finagle provides the plumbing required to create highly scalable services, it doesn't provide direct support for specific protocols. This is where Finch comes in. Finch (https://github.com/finagle/finch) provides an HTTP REST layer on top of Finagle. On their website, you can find a nice quote that summarizes what Finch aims to do:

Finch is a thin layer of purely functional basic blocks atop of Finagle for building composable REST APIs. Its mission is to provide the developers simple and robust REST API primitives being as close as possible to the bare metal Finagle API.

Your first Finagle and Finch REST service

Let's start by building a minimal Finch REST service. The first thing we need to do is to make sure we have the correct dependencies. To use Finch, all you have to do is to add the following dependency to your SBT file:

"com.github.finagle" %% "finch-core" % "0.7.0"

With this dependency added, we can start coding our very first Finch service. The next code fragment shows a minimal Finch service, which just responds with a Hello, Finch! message:

package org.restwithscala.chapter2.gettingstarted

import io.finch.route._
import com.twitter.finagle.Httpx

object HelloFinch extends App {
  Httpx.serve(":8080", (Get / "hello" /> "Hello, Finch!").toService)
  println("Press <enter> to exit.")
  Console.in.read.toChar
}

When this service receives a GET request on the URL path hello, it will respond with a Hello, Finch! message. Finch does this by creating a service (using the toService function) from a route (more on what a route is will be explained in the next section) and using the Httpx.serve function to host the created service. When you run this example, you'll see an output as follows:

[info] Loading project definition from /Users/jos/dev/git/rest-with-scala/project
[info] Set current project to rest-with-scala (in build file:/Users/jos/dev/git/rest-with-scala/)
[info] Running org.restwithscala.chapter2.gettingstarted.HelloFinch
Jun 26, 2015 9:38:00 AM com.twitter.finagle.Init$$anonfun$1 apply$mcV$sp
INFO: Finagle version 6.25.0 (rev=78909170b7cc97044481274e297805d770465110) built at 20150423-135046
Press <enter> to exit.

At this point, we have an HTTP server running on port 8080. When we make a call to http://localhost:8080/hello, this server will respond with the Hello, Finch! message.
To test this service, you can make an HTTP request in Postman like this:

If you don't want to use a GUI to make the requests, you can also use the following curl command:

curl 'http://localhost:8080/hello'

HTTP verb and URL matching

An important part of every REST framework is the ability to easily match HTTP verbs and the various path segments of the URL. In this section, we'll look at the tools Finch provides us with. Let's look at the code required to do this (the full source code for this example can be found at https://github.com/josdirksen/rest-with-scala/blob/master/chapter-02/src/main/scala/org/restwithscala/chapter2/steps/FinchStep1.scala):

package org.restwithscala.chapter2.steps

import com.twitter.finagle.Httpx
import io.finch.request._
import io.finch.route._
import io.finch.{Endpoint => _}

object FinchStep1 extends App {

  // handle a single post using a RequestReader
  val taskCreateAPI = Post / "tasks" /> (
    for {
      bodyContent <- body
    } yield s"created task with: $bodyContent")

  // Use matchers and extractors to determine which route to call
  // For more examples see the source file.
  val taskAPI =
    Get / "tasks" /> "Get a list of all the tasks" |
    Get / "tasks" / long /> ( id => s"Get a single task with id: $id" ) |
    Put / "tasks" / long /> ( id => s"Update an existing task with id $id to " ) |
    Delete / "tasks" / long /> ( id => s"Delete an existing task with $id" )

  // simple server that combines the two routes and creates a service
  val server = Httpx.serve(":8080", (taskAPI :+: taskCreateAPI).toService)

  println("Press <enter> to exit.")
  Console.in.read.toChar

  server.close()
}

In this code fragment, we created a number of Router instances that process the requests we send from Postman. Let's start by looking at one of the routes of the taskAPI router: Get / "tasks" / long /> (id => s"Get a single task with id: $id"). The following list explains the various parts of the route:

Get: While writing routers, usually the first thing you do is determine which HTTP verb you want to match. In this case, this route will only match the GET verb. Besides the Get matcher, Finch also provides the following matchers: Post, Patch, Delete, Head, Options, Put, Connect, and Trace.

"tasks": The next part of the route is a matcher that matches a URL path segment. In this case, we match the following URL: http://localhost:8080/tasks. Finch will use an implicit conversion to convert this String object to a Finch Matcher object. Finch also has two wildcard Matchers: * and **. The * matcher allows any value for a single path segment, and the ** matcher allows any value for multiple path segments.

long: The next part in the route is called an Extractor. With an extractor, you turn part of the URL into a value, which you can use to create the response (for example, retrieve an object from the database using the extracted ID). The long extractor, as the name implies, converts the matching path segment to a long value. Finch also provides an int, string, and Boolean extractor.

long => B: The last part of the route is used to create the response message. Finch provides different ways of creating the response, which we'll show in the other parts of this article. In this case, we need to provide Finch with a function that transforms the long value we extracted, and return a value Finch can convert to a response (more on this later). In this example, we just return a String.
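As a small illustration of the other extractors mentioned above (not taken from the book), a route using the string extractor could look like the following; the /tasks/search path and the response text are made up for this sketch:

// Sketch only: a route using the string extractor described above.
// It would match GET /tasks/search/<term> and echo the extracted term.
val taskSearchAPI =
  Get / "tasks" / "search" / string /> ( term => s"Search tasks matching: $term" )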
If you've looked closely at the source code, you have probably noticed that Finch uses custom operators to combine the various parts of a route. Let's look a bit closer at those. With Finch, we get the following operators (also called combinators in Finch terms):

/ or andThen: With this combinator, you sequentially combine various matchers and extractors together. Whenever the first part matches, the next one is called. For instance: Get / "path" / long.

| or orElse: This combinator allows you to combine two routers (or parts thereof) together as long as they are of the same type. So, we could do (Get | Post) to create a matcher that matches the GET and POST HTTP verbs. In the code sample, we've also used this to combine all the routes that return a simple String into the taskAPI router.

/> or map: With this combinator, we pass the request and any extracted values from the path to a function for further processing. The result of the function that is called is returned as the HTTP response. As you'll see in the rest of the article, there are different ways of processing the HTTP request and creating a response.

:+:: The final combinator allows you to combine two routers of different types. In the example, we have two routers: taskAPI, which returns a simple String, and taskCreateAPI, which uses a RequestReader (through the body function) to create the response. We can't combine these with | since the results are created using two different approaches, so we use the :+: combinator.

We just return simple Strings whenever we get a request. In the next section, we'll look at how you can use a RequestReader to convert the incoming HTTP requests to case classes and use those to create an HTTP response. When you run this service, you'll see an output as follows:

[info] Loading project definition from /Users/jos/dev/git/rest-with-scala/project
[info] Set current project to rest-with-scala (in build file:/Users/jos/dev/git/rest-with-scala/)
[info] Running org.restwithscala.chapter2.steps.FinchStep1
Jun 26, 2015 10:19:11 AM com.twitter.finagle.Init$$anonfun$1 apply$mcV$sp
INFO: Finagle version 6.25.0 (rev=78909170b7cc97044481274e297805d770465110) built at 20150423-135046
Press <enter> to exit.

Once the server is started, you can once again use Postman (or any other REST client) to make requests to this service (example requests can be found at https://github.com/josdirksen/rest-with-scala/tree/master/common):

And once again, you don't have to use a GUI to make the requests. You can test the service with curl as follows:

# Create task
curl 'http://localhost:8080/tasks' -H 'Content-Type: text/plain;charset=UTF-8' --data-binary $'{\ntaskdata\n}'

# Update task
curl 'http://localhost:8080/tasks/1' -X PUT -H 'Content-Type: text/plain;charset=UTF-8' --data-binary $'{\ntaskdata\n}'

# Get all tasks
curl 'http://localhost:8080/tasks'

# Get single task
curl 'http://localhost:8080/tasks/1'

Summary

This article only showed a couple of the features Finch provides, but it should give you a good head start toward working with Finch.

Resources for Article:

Further resources on this subject:

RESTful Java Web Services Design [article]
Creating a RESTful API [article]
Scalability, Limitations, and Effects [article]

Working with Xamarin.Android
Packt
03 Nov 2015
10 min read
This article, written by Matthew Leibowitz, author of the book Xamarin Mobile Development for Android Cookbook, shows how to support multiple Android versions in your project. (For more resources related to this topic, see here.)

Supporting all Android versions

As the Android operating system evolves, many new features are added and older devices are often left behind.

How to do it...

In order to add the new features of the later versions of Android to the older versions of Android, all we need to do is add a small package:

An Android app has three platform versions to be set. The first is the API features that are available to code against. We set this to always be the latest in the Target Framework dropdown of the project options. The next version to set (via Minimum Android version) is the lowest version of the OS that the app can be installed on. When using the support libraries, we can usually target versions down to version 2.3. Lastly, the Target Android version dropdown specifies how the app should behave when installed on a later version of the OS. Typically, this should always be the latest so that the app will always function as the user expects.

If we want to add support for the new UI paradigm that uses fragments and action bars, we need to install two of the Android support packages:

Create or open a project in Xamarin Studio.
Right-click on the project folder in the Solution Explorer list.
Select Add and then Add Packages….
In the Add Packages dialog that is displayed, search for Xamarin.Android.Support.
Select both Xamarin Support Library v4 and Xamarin Support Library v7 AppCompat.
Click on Add Package.

There are several support library packages, each adding other types of forward compatibility, but these two are the most commonly used. Once the packages are installed, our activities can inherit from the AppCompatActivity type instead of the usual Activity type:

public class MyActivity : AppCompatActivity {
}

We specify that the activity theme be one of the AppCompat derivatives using the Theme property in the [Activity] attribute:

[Activity(..., Theme = "@style/Theme.AppCompat", ...)]

If we need to access the ActionBar instance, it is available via the SupportActionBar property on the activity:

SupportActionBar.Title = "Xamarin Cookbook";

By simply using the action bar, all the options menu items are added as action items. However, all of them are added under the action bar overflow menu. The XML for action bar items is exactly the same as for the options menu:

<menu ... >
  <item
    android:id="@+id/action_refresh"
    android:icon="@drawable/ic_action_refresh"
    android:title="@string/action_refresh"/>
</menu>

To get the menu items out of the overflow and onto the actual action bar, we can customize which items are displayed and how they are displayed:

To add action items with images to the actual action bar, as well as more complex items, all that is needed is an attribute in the XML, showAsAction:

<menu ... >
  <item ... app:showAsAction="ifRoom"/>
</menu>

Sometimes, we may wish to only display the icon initially and then, when the user taps the icon, expand the item to display the action view:

<menu ... >
  <item ... app:showAsAction="ifRoom|collapseActionView"/>
</menu>

If we wish to add custom views, such as a search box, to the action bar, we make use of the actionViewClass attribute:

<menu ... >
  <item ...
    app:actionViewClass="android.support.v7.widget.SearchView"/>
</menu>

If the view is in a layout resource file, we use the actionLayout attribute:

<menu ... >
  <item ... app:actionLayout="@layout/action_rating"/>
</menu>

How it works...

As Android is developed, new features are added and designs change. We want to always provide the latest features to our users, but some users either haven't upgraded or can't upgrade to the latest version of Android. Xamarin.Android provides three version numbers to specify which types can be used and how they can be used. The target framework version specifies what types are available for consumption as well as what toolset to use during compilation. This should be the latest, as we always want to use the latest tools. However, this will make some types and members available to apps even if they aren't actually available on the Android version that the user is using. For example, it will make the ActionBar type available to apps running on Android version 2.3. If the user were to run the app, it would probably crash. In these instances, we can set the minimum Android version to be a version that supports these types and members. But this will then reduce the number of devices that we can install our app on. This is why we use the support libraries; they allow the types to be used on most versions of Android. Setting the minimum Android version for an app will prevent the app from being installed on devices with earlier versions of the OS.

The support libraries

By including the Android Support Libraries in our app, we can make use of the new features but still support the old versions. Types from the Android Support Library are available to almost all versions of Android currently in use. The Android Support Libraries provide us with a type that we know we can use everywhere, and that base type then manages the features to ensure that they function as expected. For example, we can use the ActionBar type on most versions of Android because the support library makes it available through the AppCompatActivity type. Because the AppCompatActivity type is an adaptive extension of the traditional Activity type, we have to use a different theme. This theme adjusts so that the new look and feel of the UI gets carried all the way back to the old Android versions. When using the AppCompatActivity type, the activity theme must be one of the AppCompat theme variations. There are a few differences in use when using the support library. With native support for the action bar, the AppCompatActivity type has a property named ActionBar; however, in the support library, the property is named SupportActionBar. This is just a property name change, but the functionality is the same. Sometimes, features have to be added to existing types that are not in the support libraries. In these cases, static methods are provided. The native support for custom views in menu items includes a method named SetActionView():

menuItem.SetActionView(someView);

This method does not exist on the IMenuItem type for the older versions of Android, so we make use of the static method on the MenuItemCompat type:

MenuItemCompat.SetActionView(menuItem, someView);

The action bar

When adding an action bar on older Android versions, it is important to inherit from the AppCompatActivity type. This type includes all the logic required for including an action bar in the app. It also provides many different methods and properties for accessing and configuring the action bar.
In newer versions of Android, all the features are included in the Activity type. Although the functionality is the same, we do have to access the various pieces using the support members when using the support libraries. An example would be to use the SupportActionBar property instead of the ActionBar property. If we use the ActionBar property, the app will crash on devices that don't natively support the ActionBar property. In order to render the action bar, the activity needs to use a theme that contains a style for the action bar or one that inherits from such a theme. For the older versions of Android, we can use the AppCompat themes, such as Theme.AppCompat. The toolbar With the release of Android version 5.0, Google introduced a new style of action bar. The new Toolbar type performs the same function as the action bar but can be placed anywhere on the screen. The action bar is always placed at the top of the screen, but a toolbar is not restricted to that location and can even be placed inside other layouts. To make use of the Toolbar type, we can either use the native type, or we can use the type found in the support libraries. Like any Android View, we can add the ToolBar type to the layout: <android.support.v7.widget.Toolbar   android_id="@+id/my_toolbar"   android_layout_width="match_parent"   android_layout_height="?attr/actionBarSize"   android_background="?attr/colorPrimary"   android_elevation="4dp"   android_theme="@style/ThemeOverlay.AppCompat.ActionBar"   app_popupTheme="@style/ThemeOverlay.AppCompat.Light"/> The difference is in how the activity is set up. First, as we are not going to use the default ActionBar property, we can use the Theme.AppCompat.NoActionBar theme. Then, we have to let the activity know which view is used as the Toolbar type: var toolbar = FindViewById<Toolbar>(Resource.Id.toolbar); SetSupportActionBar(toolbar); The action bar items Action item buttons are just traditional options menu items but are optionally always visible on the action bar. The underlying logic to handle item selections is the same as that for the traditional options menu. No change is required to be made to the existing code inside the OnOptionsItemSelected() method. The value of the showAsAction attribute can be ifRoom, never, or always. This value can optionally be combined, using a pipe, with withText and/or collapseActionView. There's more... Besides using the Android Support Libraries to handle different versions, there is another way to handle different versions at runtime. Android provides the version number of the current operating system through the Build.VERSION type. This type has a property, SdkInt, which we use to detect the current version. It represents the current API level of the version. Each version of Android has a series of updates and new features. For example, Android 4 has numerous updates since its initial release, new features being added each time. Sometimes, the support library cannot cover all the cases, and we have to write specific code for particular versions: int apiLevel = (int)Build.VERSION.SdkInt; if (Build.VERSION.SdkInt >= BuildVersionCodes.IceCreamSandwich) {   // Android version 4.0 and above } else {   // Android versions below version 4.0 } Although the preceding can be done, it introduces spaghetti code and should be avoided. In addition to different code, the app may behave differently on different versions, even if the support library could have handled it. 
We will now have to manage these differences ourselves each time a new version of Android is released.

Summary

In this article, we learned that as Android grows, new features are added and older devices are often left behind. By installing the Android Support Library packages and following the simple steps shown here, we can bring the features of later Android versions to apps running on older versions of the OS.

Resources for Article: Further resources on this subject: Creating the POI ListView layout [article] Code Sharing Between iOS and Android [article] Heads up to MvvmCross [article]

Upgrading from Magento 1

Packt
03 Nov 2015
4 min read
In Magento 2 Development Cookbook by Bart Delvaux, the overarching goal is to provide you with a wide range of techniques to modify and extend the functionality of your online store. It contains easy-to-understand recipes, starting with the basics and moving on to cover advanced topics. Many recipes work with code examples that can be downloaded from the book's website. (For more resources related to this topic, see here.)

Why Magento 2

Solve common problems encountered while extending your Magento 2 store to fit your business needs.
Explore exciting and enhanced features of Magento 2, such as customizing security permissions, intelligent filtered search options, and easy third-party integration, among others.
Learn to build and maintain a Magento 2 shop via a visual-based page editor and customize the look and feel using Magento 2 offerings on the go.

What this article covers

This article covers preparing an upgrade from Magento 1.

Preparing an upgrade from Magento 1

The differences between Magento 1 and Magento 2 are big. The code has a whole new structure with a lot of improvements, but there is one big disadvantage: what do you do if you want to upgrade your Magento 1 shop to a Magento 2 shop? Magento created an upgrade tool that migrates the data of a Magento 1 database to the right structure for a Magento 2 database. The custom modules in your Magento 1 shop will not work in Magento 2. It is possible that some of your modules will have a Magento 2 version and, depending on the module, the module author will have a migration tool to migrate the data that is in the module.

Getting ready

Before we get started, make sure you have an empty (without sample data) Magento 2 installation with the same version as the migration tool that is available at https://github.com/magento/data-migration-tool-ce.

How to do it

In your Magento 2 installation (with the same version as the migration tool), run the following commands:

composer config repositories.data-migration-tool git https://github.com/magento/data-migration-tool-ce
composer require magento/data-migration-tool:dev-master

Install Magento 2 with an empty database by running the installer. Make sure you configure it with the right time zone and currencies.

When these steps are done, you can test the tool by running the following command:

php vendor/magento/data-migration-tool/bin/migrate

This command will print the usage of the command.

The next thing is creating the configuration files. The examples of the configuration files are in the following folder: vendor/magento/data-migration-tool/etc/<version>. We can create a copy of this folder where we can set our custom configuration values. For a Magento 1.9 installation, we have to run the following cp command:

cp -R vendor/magento/data-migration-tool/etc/ce-to-ce/1.9.1.0/ vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration

Open the vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration/config.xml.dist file and search for the source/database and destination/database tags.
Change the values of these database settings to your database settings like in the following code: <source> <database host="localhost" name="magento1" user="root"/> </source> <destination> <database host="localhost" name="magento2_migration" user="root"/> </destination> Rename that file to config.xml with the following command: mv vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration/config.xml.dist vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration/config.xml How it works By adding a composer dependency, we installed the data migration tool for Magento 2 in the codebase. This migration tool is a PHP command line script that will handle the migration steps from a Magento 1 shop. In the etc folder of the migration module, there is an example configuration of an empty Magento 1.9 shop. If you want to migrate an existing Magento 1 shop, you have to customize these configuration files so it matches your preferred state. In the next recipe, we will learn how we can use the script to start the migration. Who this book is written for? This book is packed with a wide range of techniques to modify and extend the functionality of your online store. It contains easy-to-understand recipes starting with the basics and moving on to cover advanced topics. Many recipes work with code examples that can be downloaded from the book’s website. Summary In this article, we learned about how to Prepare an upgrade from Magento 1. Read Magento 2 Development Cookbook to gain detailed knowledge of Magento 2 workflows, explore use cases for advanced features, craft well thought out orchestrations, troubleshoot unexpected behavior, and extend Magento 2 through customizations. Other related titles are: Magento : Beginner's Guide - Second Edition Mastering Magento Magento: Beginner's Guide Mastering Magento Theme Design Resources for Article: Further resources on this subject: Creating a Responsive Magento Theme with Bootstrap 3[article] Social Media and Magento[article] Optimizing Magento Performance — Using HHVM [article]
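As an optional addition to the recipe above (not part of the original text), the edit to config.xml can also be scripted. The sketch below uses Python's xml.etree.ElementTree to set the host, name, and user attributes shown in the recipe; it assumes the source and destination elements sit directly under the document root and that the file path and credentials match your own setup. Note that ElementTree rewrites the file and drops comments and formatting, so hand-editing remains the simplest route for a one-off migration.

#!/usr/bin/env python
# Sketch: fill in the migration tool's database settings programmatically.
# The path and credentials below are placeholders taken from the recipe.
import xml.etree.ElementTree as ET

CONFIG = ("vendor/magento/data-migration-tool/etc/"
          "ce-to-ce/packt-migration/config.xml")

def set_db(root, xpath, host, name, user):
    # xpath is e.g. 'source/database' or 'destination/database'
    node = root.find(xpath)
    if node is None:
        raise SystemExit("missing node: " + xpath)
    node.set("host", host)
    node.set("name", name)
    node.set("user", user)

tree = ET.parse(CONFIG)
root = tree.getroot()
set_db(root, "source/database", "localhost", "magento1", "root")
set_db(root, "destination/database", "localhost", "magento2_migration", "root")
tree.write(CONFIG, encoding="utf-8", xml_declaration=True)
print("updated " + CONFIG)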

Big Data Analytics

Packt
03 Nov 2015
10 min read
In this article, Dmitry Anoshin, the author of Learning Hunk will talk about Hadoop—how to extract Hunk to VM to set up a connection with Hadoop to create dashboards. We are living in a century of information technology. There are a lot of electronic devices around us that generate a lot of data. For example, you can surf on the Internet, visit a couple news portals, order new Airmax on the web store, write a couple of messages to your friend, and chat on Facebook. Every action produces data; we can multiply these actions with the number of people who have access to the Internet or just use a mobile phone and we will get really big data. Of course, you have a question, how big is it? I suppose, now it starts from terabytes or even petabytes. The volume is not the only issue; we struggle with a variety of data. As a result, it is not enough to analyze only structure data. We should dive deep into the unstructured data, such as machine data, that are generated by various machines. World famous enterprises try to collect this extremely big data in order to monetize it and find business insights. Big data offers us new opportunities, for example, we can enrich customer data via social networks using the APIs of Facebook or Twitter. We can build customer profiles and try to predict customer wishes in order to sell our product or improve customer experience. It is easy to say, but difficult to do. However, organizations try to overcome these challenges and use big data stores, such as Hadoop. (For more resources related to this topic, see here.) The big problem Hadoop is a distributed file system and framework to compute. It is relatively easy to get data into Hadoop. There are plenty of tools to get data into different formats. However, it is extremely difficult to get value out of these data that you put into Hadoop. Let's look at the path from data to value. First, we have to start at the collection of data. Then, we also spend a lot of time preparing and making sure this data is available for analysis while being able to ask questions to this data. It looks as follows: Unfortunately, the questions that you asked are not good or the answers that you got are not clear, and you have to repeat this cycle over again. Maybe, you have transformed and formatted your data. In other words, it is a long and challenging process. What you actually want is something to collect data; spend some time preparing the data, then you would able to ask question and get answers from data repetitively. Now, you can spend a lot of time asking multiple questions. In addition, you are able to iterate with data on those questions to refine the answers that you are looking for. The elegant solution What if we could take Splunk and put it on top of all these data stored in Hadoop? And it was, what the Splunk company actually did. The following figure shows how we got Hunk as name of the new product: Let's discuss some solution goals Hunk inventors were thinking about when they were planning Hunk: Splunk can take data from Hadoop via the Splunk Hadoop Connection app. However, it is a bad idea to copy massive data from Hadoop to Splunk. It is much better to process data in place because Hadoop provides both storage and computation and why not take advantage of both. Splunk has extremely powerful Splunk Processing Language (SPL) and it is a kind of advantage of Splunk, because it has a wide range of analytic functions. This is why it is a good idea to keep SPL in the new product. Splunk has true schema on the fly. 
The data that we store in Hadoop changes constantly. So, Hunk should be able to build schema on the fly, independent from the format of the data. It's a very good idea to have the ability to make previews. As you know, when a search is going on, you would able to get incremental results. It can dramatically reduce the outage. For example, we don't need to wait till the MapReduce job is finished. We can look at the incremental result and, in the case of a wrong result, restart a search query. The deployment of Hadoop is not easy; Splunk tries to make the installation and configuration of Hunk easy for us. Getting up Hunk In order to start exploring the Hadoop data, we have to install Hunk on the top of our Hadoop cluster. Hunk is easy to install and configure. Let's learn how to deploy Hunk Version 6.2.1 on top of the existing CDH cluster. It's assumed that your VM is up and running. Extracting Hunk to VM To extract Hunk to VM, perform the following steps: Open the console application. Run ls -la to see the list of files in your Home directory: [cloudera@quickstart ~]$ cd ~ [cloudera@quickstart ~]$ ls -la | grep hunk -rw-r--r--   1 root     root     113913609 Mar 23 04:09 hunk-6.2.1-249325-Linux-x86_64.tgz Unpack the archive: cd /opt sudo tar xvzf /home/cloudera/hunk-6.2.1-249325-Linux-x86_64.tgz -C /opt Setting up Hunk variables and configuration files Perform the following steps to set up the Hunk variables and configuration files It's time to set the SPLUNK_HOME environment variable. This variable is already added to the profile; it is just to bring to your attention that this variable must be set: export SPLUNK_HOME=/opt/hunk Use default splunk-launch.conf. This is the basic properties file used by the Hunk service. We don't have to change there something special, so let's use the default settings: sudocp /opt/hunk/etc/splunk-launch.conf.default /opt/hunk//etc/splunk-launch.conf Running Hunk for the first time Perform the following steps to run Hunk: Run Hunk: sudo /opt/hunk/bin/splunk start --accept-license Here is the sample output from the first run: sudo /opt/hunk/bin/splunk start --accept-license This appears to be your first time running this version of Splunk. Copying '/opt/hunk/etc/openldap/ldap.conf.default' to '/opt/hunk/etc/openldap/ldap.conf'. Generating RSA private key, 1024 bit long modulus Some output lines were deleted to reduce amount of log text Waiting for web server at http://127.0.0.1:8000 to be available.... Done If you get stuck, we're here to help. Look for answers here: http://docs.splunk.com The Splunk web interface is at http://vm-cluster-node1.localdomain:8000 Setting up a data provider and virtual index for the CDR data We need to accomplish two tasks: provide a technical connector to the underlying data storage and create a virtual index for the data on this storage. Log in to http://quickstart.cloudera:8000. The system would ask you to change the default admin user password. I did set it to admin: Setting up a connection to Hadoop Right now, we are ready to set up the integration between Hadoop and Hunk. At first, we need to specify the way Hunk connects to the current Hadoop installation. We are using the most recent way: YARN with MR2. Then, we have to point virtual indexes to the data stored on Hadoop. To do this, perform the following steps: Click on Explore Data. 
Click on Create a provider: Let's fill the form to create the data provider: Property name Value Name hadoop-hunk-provider Java home /usr/java/jdk1.7.0_67-cloudera Hadoop home /usr/lib/hadoop Hadoop version Hadoop 2.x, (Yarn) filesystem hdfs://quickstart.cloudera:8020 Resource Manager Address quickstart.cloudera:8032 Resource Scheduler Address quickstart.cloudera:8030 HDFS Working Directory /user/hunk Job Queue default You don't have to modify any other properties. The HDFS working directory has been created for you in advance. You can create it using the following command: sudo -u hdfshadoop fs -mkdir -p /user/hunk If you did everything correctly, you should see a screen similar to the following screenshot: Let's discuss briefly what we have done: We told Hunk where Hadoop home and Java are. Hunk uses Hadoop streaming internally, so it needs to know how to call Java and Hadoop streaming. You can inspect the submitted jobs from Hunk (discussed later) and see the following lines: /opt/hunk/bin/jars/sudobash /usr/bin/hadoop jar "/opt/hunk/bin/jars/SplunkMR-s6.0-hy2.0.jar" "com.splunk.mr.SplunkMR" MapReduce JAR is submitted by Hunk. Also, we need to tell Hunk where the YARN Resource Manager and Scheduler are located. These services allow us to ask for cluster resources and run jobs. Job queue could be useful in the production environment. You could have several queues for cluster resource distribution in real life. We would set queue name as default, since we are not discussing cluster utilization and load balancing. Setting up a virtual index for the data stored in Hadoop Now it's time to create virtual index. We are going to add the dataset with the avro files to the virtual index as an example data. Click on Explore Data and then click on Create a virtual index: You'll get a message telling that there are no indexes: Just click on New Virtual Index. A virtual index is a metadata. It tells Hunk where the data is located and what provider should be used to read the data. Property name Value Name milano_cdr_aggregated_10_min_activity Path to data in HDFS /masterdata/stream/milano_cdr Here is an example screen you should see after you create your first virtual index: Accessing data through the virtual index To access data through the virtual index, perform the following steps: Click on Explore Data and select a provider and virtual index: Select part-m-00000.avro by clicking on it. The Next button will be activated after you pick up a file: Preview data in the Preview Data tab. You should see how Hunk automatically for timestamp from our CDR data: Pay attention to the Time column and the field named Time_interval from the Event column. The time_interval column keeps the time of record. Hunk should automatically use that field as a time field: Save the source type by clicking on Save As and then Next: In the Entering Context Settings page, select search in the App context drop-down box. Then, navigate to Sharing context | All apps and then click on Next. The last step allows you to review what we've done: Click on Finish to create the finalized wizard. Creating a dashbord Now it's time to see how the dashboards work. Let's find the regions where the visitors face problems (status = 500) while using our online store: index="digital_analytics" status=500 | iplocation clientip | geostats latfield=lat longfield=lon count by Country You should see the map and the portions of error for the countries: Now let's save it as dashboard. Click on Save as and select Dashboard panel from drop-down menu. 
Name it as Web Operations. You should get a new dashboard with a single panel and our report on it. We have several previously created reports. Let's add them to the newly created dashboard using separate panels: Click on Edit and then Edit panels. Select Add new panel and then New from report, and add one of our previous reports. Summary In this article, you learned how to extract Hunk to VM. We also saw how to set up Hunk variables and configuration files. You learned how to run Hunk and how to set up the data provided and a virtual index for the CDR data. Setting up a connection to Hadoop and a virtual index for the data stored in Hadoop were also covered in detail. Apart from these, you also learned how to create a dashboard. Resources for Article: Further resources on this subject: Identifying Big Data Evidence in Hadoop [Article] Big Data [Article] Understanding Hadoop Backup and Recovery Needs [Article]
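As a small addition to this recipe (not part of the original text), the HDFS working directory that the provider form expects can be created and verified from a short Python script instead of typing the hadoop fs commands by hand. The path, the hdfs system user, and the use of sudo are taken from the recipe above; everything else is an assumption to adapt to your cluster.

#!/usr/bin/env python
# Sketch: prepare the HDFS working directory used by the Hunk provider.
# Assumes the 'hadoop' CLI is on PATH and the current user may run 'sudo -u hdfs'.
import subprocess

WORKING_DIR = "/user/hunk"   # same path as in the provider form

def hdfs(*args):
    """Run a 'hadoop fs' subcommand as the hdfs user and return its exit code."""
    cmd = ["sudo", "-u", "hdfs", "hadoop", "fs"] + list(args)
    return subprocess.call(cmd)

# create the directory (a no-op if it already exists, thanks to -p)
if hdfs("-mkdir", "-p", WORKING_DIR) != 0:
    raise SystemExit("could not create " + WORKING_DIR)

# verify it is visible before pointing Hunk at it
if hdfs("-ls", WORKING_DIR) != 0:
    raise SystemExit(WORKING_DIR + " is not readable")

print("HDFS working directory is ready: " + WORKING_DIR)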

Moving Spatial Data From One Format to Another

Packt
03 Nov 2015
29 min read
In this article by Michael Diener, author of the Python Geospatial Analysis Cookbook, we will cover the following topics: Converting a Shapefile to a PostGIS table using ogr2ogr Batch importing a folder of Shapefiles into PostGIS using ogr2ogr Batch exporting a list of tables from PostGIS to Shapefiles Converting an OpenStreetMap (OSM) XML to a Shapefile Converting a Shapefile (vector) to a GeoTiff (raster) Converting a GeoTiff (raster) to a Shapefile (vector) using GDAL (For more resources related to this topic, see here.) Introduction Geospatial data comes in hundreds of formats and massaging this data from one format to another is a simple task. The ability to convert data types, such as rasters or vectors, belongs to data wrangling tasks that are involved in geospatial analysis. Here is an example of a raster and vector dataset so you can see what I am talking about: Source: Michael Diener drawing The best practice is to run analysis functions or models on data stored in a common format, such as a Postgresql PostGIS database or a set of Shapefiles, in a common coordinate system. For example, running analysis on input data stored in multiple formats is also possible, but you can expect to find the devil in the details of your results if something goes wrong or your results are not what you expect. This article looks at some common data formats and demonstrates how to move these formats around from one to another with the most common tools. Converting a Shapefile to a PostGIS table using ogr2ogr The simplest way to transform data from one format to another is to directly use the ogr2ogr tool that comes with the installation of GDAL. This powerful tool can convert over 200 geospatial formats. In this solution, we will execute the ogr2ogr utility from within a Python script to execute generic vector data conversions. The python code is, therefore, used to execute this command-line tool and pass around variables that are needed to create your own scripts for data imports or exports. The use of this tool is also recommended if you are not really interested in coding too much and simply want to get the job done to move your data. A pure python solution is, of course, possible but it is definitely targeted more at developers (or python purists). Getting ready To run this script, you will need the GDAL utility application installed on your system. Windows users can visit OSGeo4W (http://trac.osgeo.org/osgeo4w) and download the 32-bit or 64-bit Windows installer as follows: Simply double-click on the installer to start it. Navigate to the bottommost option, Advanced Installation | Next. Click on Next to download from the Internet (this is the first default option). Click on Next to accept default location of path or change to your liking. Click on Next to accept the location of local saved downloads (default). Click on Next to accept the direct connection (default). Click on Next to select a default download site. Now, you should finally see the menu. Click on + to open the command-line utilities and you should see the following: Now, select gdal. The GDAL/OGR library and command line tools to install it. Click on Next to start downloading it, and then install it. For Ubuntu Linux users, use the following steps for installation: Execute this simple one-line command: $ sudo apt-get install gdal-bin This will get you up and running so that you can execute ogr2ogr directly from your terminal. Next, set up your Postgresql database using the PostGIS extension. 
First, we will create a new user to manage our new database and tables: Sudo su createuser  –U postgres –P pluto Enter a password for the new role. Enter the password again for the new role. Enter a password for postgres users since you're going to create a user with the help of the postgres user.The –P option prompts you to give the new user, called pluto, a password. For the following examples, our password is stars; I would recommend a much more secure password for your production database. Setting up your Postgresql database with the PostGIS extension in Windows is the same as setting it up in Ubuntu Linux. Perform the following steps to do this: Navigate to the c:Program FilesPostgreSQL9.3bin folder. Then, execute this command and follow the on-screen instructions as mentioned previously: Createuser.exe –U postgres –P pluto To create the database, we will use the command-line createdb command similar to the postgres user to create a database named py_geoan_cb. We will then assign the pluto user to be the database owner; here is the command to do this: $ sudo su createdb –O pluto –U postgres py_geoan_cb Windows users can visit the c:Program FilesPostgreSQL9.3bin and execute the createdb.exe command: createdb.exe –O pluto –U postgres py_geoan_cb Next, create the PostGIS extension for our newly created database: psql –U postgres -d py_geoan_cb -c "CREATE EXTENSION postgis;" Windows users can also execute psql from within the c:Program FilesPostgreSQL9.3bin folder: psql.exe –U postgres –d py_geoan_cb –c "CREATE EXTENSION postgis;" Lastly, create a schema called geodata to store the new spatial table. It is common to store spatial data in another schema outside the Postgresql default schema, public. Create the schema as follows: For Ubuntu Linux users: sudo -u postgres psql -d py_geoan_cb -c "CREATE SCHEMA geodata AUTHORIZATION pluto;" For Windows users: psql.exe –U postgres –d py_geoan_cb –c "CREATE SCHEMA geodata AUTHORIZATION pluto;" How to do it... Now let's get into the actual importing of our Shapefile into a PostGIS database that will automatically create a new table from our Shapefile: #!/usr/bin/env python # -*- coding: utf-8 -*- import subprocess # database options db_schema = "SCHEMA=geodata" overwrite_option = "OVERWRITE=YES" geom_type = "MULTILINESTRING" output_format = "PostgreSQL" # database connection string db_connection = """PG:host=localhost port=5432   user=pluto dbname=py_test password=stars""" # input shapefile input_shp = "../geodata/bikeways.shp" # call ogr2ogr from python subprocess.call(["ogr2ogr","-lco", db_schema, "-lco", overwrite_option, "-nlt", geom_type, "-f", output_format, db_connection,  input_shp]) Now we can call our script from the command line: $ python ch03-01_shp2pg.py How it works... We begin with importing the standard python module subprocess that will call the ogr2ogr command-line tool. Next, we'll set a range of variables that are used as input arguments and various options for ogr2ogr to execute. Starting with the SCHEMA=geodata Postgresql database, we'll set a nondefault database schema for the destination of our new table. It is best practice to store your spatial data tables in a separate schema outside the "public" schema, which is set as the default. This practice will make backups and restores much easier and keep your database better organized. Next, we'll create a overwrite_option variable that's set to "yes" so that we can overwrite any table with the same name when its created. 
This is helpful when you want to completely replace the table with new data, otherwise, it is recommended to use the -append option. We'll also specify the geometry type because, sometimes, ogr2ogr does not always guess the correct geometry type of our Shapefile so setting this value saves you any worry. Now, setting our output_format variable with the PostgreSQL keyword tells ogr2ogr that we want to output data into a Postgresql database. This is then followed by the db_connection variable, which specifies our database connection information. Do not forget that the database must already exist along with the "geodata" schema, otherwise, we will get an error. The last input_shp variable gives the full path to our Shapefile, including the .shp file ending. Now, we will call the subprocess module ,which will then call the ogr2ogr command-line tool and pass along the variable options required to run the tool. We'll pass this function an array of arguments, the first object in the array being the ogr2ogr command-line tool name. After this, we'll pass each option after another in the array to complete the call. Subprocess can be used to call any command-line tool directly. It takes a list of parameters that are separated by spaces. This passing of parameters is quite fussy, so make sure you follow along closely and don't add any extra spaces or commas. Last but not least, we need to execute our script from the command line to actually import our Shapefile by calling the python interpreter and passing the script. Then, head over to the PgAdmin Postgresql database viewer and see if it worked, or even better, open up Quantum GIS (www.qgis.org) and take a look at the newly created tables. See also If you would like to see the full list of options available with the ogr2ogr command, simple enter the following in the command line: $ ogr2ogr –help You will see the full list of options that are available. Also, visit http://gdal.org/ogr2ogr.html to read the required documentation. Batch importing a folder of Shapefiles into PostGIS using ogr2ogr We would like to extend our last script to loop over a folder full of Shapefiles and import them into PostGIS. Most importing tasks involve more than one file to import so this makes it a very practical task. How to do it... The following steps will batch import a folder of Shapefiles into PostGIS using ogr2ogr: Our script will reuse the previous code in the form of a function so that we can batch process a list of Shapefiles to import into the Postgresql PostGIS database. 
We will create our list of Shapefiles from a single folder for the sake of simplicity: #!/usr/bin/env python # -*- coding: utf-8 -*- import subprocess import os import ogr def discover_geom_name(ogr_type):     """     :param ogr_type: ogr GetGeomType()     :return: string geometry type name     """     return {ogr.wkbUnknown            : "UNKNOWN",             ogr.wkbPoint              : "POINT",             ogr.wkbLineString         : "LINESTRING",             ogr.wkbPolygon            : "POLYGON",             ogr.wkbMultiPoint         : "MULTIPOINT",             ogr.wkbMultiLineString    : "MULTILINESTRING",             ogr.wkbMultiPolygon       : "MULTIPOLYGON",             ogr.wkbGeometryCollection : "GEOMETRYCOLLECTION",             ogr.wkbNone               : "NONE",             ogr.wkbLinearRing         : "LINEARRING"}.get(ogr_type) def run_shp2pg(input_shp):     """     input_shp is full path to shapefile including file ending     usage:  run_shp2pg('/home/geodata/myshape.shp')     """     db_schema = "SCHEMA=geodata"     db_connection = """PG:host=localhost port=5432                     user=pluto dbname=py_geoan_cb password=stars"""     output_format = "PostgreSQL"     overwrite_option = "OVERWRITE=YES"     shp_dataset = shp_driver.Open(input_shp)     layer = shp_dataset.GetLayer(0)     geometry_type = layer.GetLayerDefn().GetGeomType()     geometry_name = discover_geom_name(geometry_type)     print (geometry_name)     subprocess.call(["ogr2ogr", "-lco", db_schema, "-lco", overwrite_option,                      "-nlt", geometry_name, "-skipfailures",                      "-f", output_format, db_connection, input_shp]) # directory full of shapefiles shapefile_dir = os.path.realpath('../geodata') # define the ogr spatial driver type shp_driver = ogr.GetDriverByName('ESRI Shapefile') # empty list to hold names of all shapefils in directory shapefile_list = [] for shp_file in os.listdir(shapefile_dir):     if shp_file.endswith(".shp"):         # apped join path to file name to outpout "../geodata/myshape.shp"         full_shapefile_path = os.path.join(shapefile_dir, shp_file)         shapefile_list.append(full_shapefile_path) # loop over list of Shapefiles running our import function for each_shapefile in shapefile_list:     run_shp2pg(each_shapefile)  print ("importing Shapefile: " + each_shapefile) Now, we can simply run our new script from the command line once again: $ python ch03-02_batch_shp2pg.py How it works... Here, we will reuse our code from the previous script but have converted it into a python function called run_shp2pg (input_shp), which takes exactly one argument to complete the path to the Shapefile we want to import. The input argument must include a Shapefile ending with .shp. We have a helper function that will get the geometry type as a string by reading in the Shapefile feature layer and outputting the geometry type so that the ogr commands know what to expect. This does not always work and some errors can occur. The skipfailures option will plow over any errors that are thrown during insert and will still populate our tables. To begin with, we need to define the folder that contains all our Shapefiles to be imported. Next up, we'll create an empty list object called shapefile_list that will hold a list of all the Shapefiles we want to import. The first for loop is used to get the list of all the Shapefiles in the directory specified using the standard python os.listdir() function. 
We do not want all the files in this folder; we only want files with ending with .shp, hence, the if statement will evaluate to True if the file ends with .shp. Once the .shp file is found, we need to append the file path to the file name to create a single string that holds the path plus the Shapefile name, and this is our variable called full_shapefile_path. The final part of this is to add each new file with its attached path to our shapefile_list list object. So, we now have our final list to loop through. It is time to loop through each Shapefile in our new list and run our run_shp2pg(input_shp) function for each Shapefile in the list by importing it into our Postgresql PostGIS database. See also If you have a lot of Shapefiles, and by this I mean mean hundred or more Shapefiles, performance will be one consideration and will, therefore, indicate that there are a lot of machines with free resources. Batch exporting a list of tables from PostGIS to Shapefiles We will now change directions and take a look at how we can batch export a list of tables from our PostGIS database into a folder of Shapefiles. We'll again use the ogr2ogr command-line tool from within a python script so that you can include it in your application programming workflow. Near the end, you can also see how this all of works in one single command line. How to do it... The script will fire the ogr2ogr command and loop over a list of tables to export the Shapefile format into an existing folder. So, let's take a look at how to do this using the following code: #!/usr/bin/env python # -*- coding: utf-8 -*- # import subprocess import os # folder to hold output Shapefiles destination_dir = os.path.realpath('../geodata/temp') # list of postGIS tables postgis_tables_list = ["bikeways", "highest_mountains"] # database connection parameters db_connection = """PG:host=localhost port=5432 user=pluto         dbname=py_geoan_cb password=stars active_schema=geodata""" output_format = "ESRI Shapefile" # check if destination directory exists if not os.path.isdir(destination_dir):     os.mkdir(destination_dir)     for table in postgis_tables_list:         subprocess.call(["ogr2ogr", "-f", output_format, destination_dir,                          db_connection, table])         print("running ogr2ogr on table: " + table) else:     print("oh no your destination directory " + destination_dir +           " already exist please remove it then run again") # commandline call without using python will look like this # ogr2ogr -f "ESRI Shapefile" mydatadump # PG:"host=myhost user=myloginname dbname=mydbname password=mypassword"   neighborhood parcels Now, we'll call our script from the command line as follows: $ python ch03-03_batch_postgis2shp.py How it works... Beginning with a simple import of our subprocess and os modules, we'll immediately define our destination directory where we want to store the exported Shapefiles. This variable is followed by the list of table names that we want to export. This list can only include files located in the same Postgresql schema. The schema is defined as the active_schema so that ogr2ogr knows where to find the tables to be exported. Once again, we'll define the output format as ESRI Shapefile. Now, we'll check whether the destination folder exists. If it does, continue and call our loop. Then, loop through the list of tables stored in our postgis_tables_list variable. If the destination folder does not exist, you will see an error printed on the screen. There's more... 
If you are programming an application, then executing the ogr2ogr command from inside your script is definitely quick and easy. On the other hand, for a one-off job, simply executing the command-line tool is what you want when you export your list of Shapefiles. To do this in a one-liner, please take a look at the following information box. A one line example of calling the ogr2ogr batch of the PostGIS table to Shapefiles is shown here if you simply want to execute this once and not in a scripting environment: ogr2ogr -f "ESRI Shapefile" /home/ch03/geodata/temp PG:"host=localhost user=pluto dbname=py_geoan_cb password=stars" bikeways highest_mountains The list of tables you want to export is located as a list separated by spaces. The destination location of the exported Shapefiles is ../geodata/temp. Note this this /temp directory must exist. Converting an OpenStreetMap (OSM) XML to a Shapefile OpenStreetMap (OSM) has a wealth of free data, but to use it with most other applications, we need to convert it to another format, such as a Shapefile or Postgresql PostGIS database. This recipe will use the ogr2ogr tool to do the conversion for us within a python script. The benefit derived here is simplicity. Getting ready To get started, you will need to download the OSM data at http://www.openstreetmap.org/export#map=17/37.80721/-122.47305 and saving the file (.osm) to your /ch03/geodata directory. The download button is located on the bar on the left-hand side, and when pressed, it should immediately start the download (refer to the following image). The area we are testing is from San Francisco, just before the golden gate bridge. If you choose to download another area from OSM feel free to, but make sure that you take a small area like preceding example link. If you select a larger area, the OSM web tool will give you a warning and disable the download button. The reason is simple: if the dataset is very large, it will most likely be better suited for another tool, such as osm2pgsql (http://wiki.openstreetmap.org/wiki/Osm2pgsql), for you conversion. If you need to get OSM data for a large area and want to export to Shapefile, it would be advisable to use another tool, such as osm2pgsql, which will first import your data to a Postgresql database. Then, export from the PostGIS database to Shapefile using the pgsql2shp tool.  A python tool to import OSM data into a PostGIS database is also available and is called imposm (located here at http://imposm.org/). Version 2 of it is written in python and version 3 is written in the "go" programming language if you want to give it a try. How to do it... 
Using the subprocess module, we will execute ogr2ogr to convert the OSM data that we downloaded into a new Shapefile: #!/usr/bin/env python # -*- coding: utf-8 -*- # convert / import osm xml .osm file into a Shapefile import subprocess import os import shutil # specify output format output_format = "ESRI Shapefile" # complete path to input OSM xml file .osm input_osm = '../geodata/OSM_san_francisco_westbluff.osm' # Windows users can uncomment these two lines if needed # ogr2ogr = r"c:/OSGeo4W/bin/ogr2ogr.exe" # ogr_info = r"c:/OSGeo4W/bin/ogrinfo.exe" # view what geometry types are available in our OSM file subprocess.call([ogr_info, input_osm]) destination_dir = os.path.realpath('../geodata/temp') if os.path.isdir(destination_dir):     # remove output folder if it exists     shutil.rmtree(destination_dir)     print("removing existing directory : " + destination_dir)     # create new output folder     os.mkdir(destination_dir)     print("creating new directory : " + destination_dir)     # list of geometry types to convert to Shapefile     geom_types = ["lines", "points", "multilinestrings", "multipolygons"]     # create a new Shapefile for each geometry type     for g_type in geom_types:         subprocess.call([ogr2ogr,                "-skipfailures", "-f", output_format,                  destination_dir, input_osm,                  "layer", g_type,                  "--config","OSM_USE_CUSTOM_INDEXING", "NO"])         print("done creating " + g_type) # if you like to export to SPATIALITE from .osm # subprocess.call([ogr2ogr, "-skipfailures", "-f", #         "SQLITE", "-dsco", "SPATIALITE=YES", #         "my2.sqlite", input_osm]) Now we can call our script from the command line: $ python ch03-04_osm2shp.py Go and have a look at your ../geodata folder to see the newly created Shapefiles, and try to open them up in Quantum GIS, which is a free GIS software (www.qgis.org) How it works... This script should be clear as we are using the subprocess module call to fire our ogr2ogr command-line tool. Specify our OSM dataset as an input file, including the full path to the file. The Shapefile name is not supplied as ogr2ogr and will output a set of Shapefiles, one for each geometry shape according to the geometry type it finds inside the OSM file. We only need to specify the name of the folder where we want ogr2ogr to export the Shapefiles to, automatically creating the folder if it does not exist. Windows users: if you do not have your ogr2ogr tool mapped to your environment variables, you can simply uncomment at lines 16 and 17 in the preceding code and replace the path shown with the path on your machine to the Windows executables. The first subprocess call prints out the screen that the geometry types have found inside the OSM file. This is helpful in most cases to help you identify what is available. Shapefiles can only support one geometry type per file, and this is why ogr2ogr outputs a folder full of Shapefiles, each one representing a separate geometry type. Lastly, we'll call subprocess to execute ogr2ogr, passing in the output "ESRI Shapefile" file type, output folder, and the name of the OSM dataset. Converting a Shapefile (vector) to a GeoTiff (raster) Moving data from format to format also includes moving from vector to raster or the other way around. In this recipe, we move from a vector (Shapefile) to a raster (GeoTiff) with the python gdal and ogr modules. 
Getting ready We need to be inside our virtual environment again, so fire it up so that we can access our gdal and ogr python modules. As usual, enter your python virtual environment with the workon pygeoan_cb command: $ source venvs/pygeoan_cb/bin/activate How to do it... Let's dive in and convert our golf course polygon Shapefile into a GeoTif; here is the code to do this: Import the ogr and gdal libraries, and then define the output pixel size along with the value that will be assigned to null: #!/usr/bin/env python # -*- coding: utf-8 -*- from osgeo import ogr from osgeo import gdal # set pixel size pixel_size = 1no_data_value = -9999 Set up the input Shapefile that we want to convert alongside the new GeoTiff raster that will be created when the script is executed: # Shapefile input name # input projection must be in cartesian system in meters # input wgs 84 or EPSG: 4326 will NOT work!!! input_shp = r'../geodata/ply_golfcourse-strasslach3857.shp' # TIF Raster file to be created output_raster = r'../geodata/ply_golfcourse-strasslach.tif' Now we need to create the input Shapefile object, so get the layer information and finally set the extent values: # Open the data source get the layer object # assign extent coordinates open_shp = ogr.Open(input_shp) shp_layer = open_shp.GetLayer() x_min, x_max, y_min, y_max = shp_layer.GetExtent() Here, we need to calculate the resolution distance to pixel value: # calculate raster resolution x_res = int((x_max - x_min) / pixel_size) y_res = int((y_max - y_min) / pixel_size) Our new raster type is a GeoTiff so we must explicitly tell this gdal to get the driver. The driver is then able to create a new GeoTiff by passing in the filename or the new raster we want to create. The x direction resolution is followed by the y direction resolution, and then our number of bands, which is, in this case, 1. Lastly, we'll set the new type of GDT_Byte raster: # set the image type for export image_type = 'GTiff' driver = gdal.GetDriverByName(image_type)   new_raster = driver.Create(output_raster, x_res, y_res, 1, gdal.GDT_Byte) new_raster.SetGeoTransform((x_min, pixel_size, 0, y_max, 0, -   pixel_size)) Now we can access the new raster band and assign the no data values and the inner data values for the new raster. All the inner values will receive a value of 255 similar to what we set for the burn_values variable: # get the raster band we want to export too raster_band = new_raster.GetRasterBand(1) # assign the no data value to empty cells raster_band.SetNoDataValue(no_data_value) # run vector to raster on new raster with input Shapefile gdal.RasterizeLayer(new_raster, [1], shp_layer, burn_values=[255]) Here we go! Lets run the script to see what our new raster looks like: $ python ch03-05_shp2raster.py The resulting raster should look like this if you open it  using QGIS (http://www.qgis.org): How it works... There are several steps involved in this code so please follow along as some points could lead to trouble if you are not sure what values to input. We start with the import of the gdal and ogr modules, respectively, since they will do the work for us by inputting a Shapefile (vector) and outputting a GeoTiff (raster). The pixel_size variable is very important since it will determine the size of the new raster we will create. In this example, we only have two polygons, so we'll set pixel_size = 1 to keep a fine border. If you have many polygons stretching across the globe in one Shapefile, it is wiser to set this value to 25 or more. 
Otherwise, you could end up with a 10 GB raster and your machine will run all night long! The no_data_value parameter is needed to tell GDAL what values to set in the empty space around our input polygons, and we set these to -9999 in order to be easily identified. Next, we'll simply set the input Shapefile stored in the EPSG:3857 web mercator and output GeoTiff. Check to make sure that you change the file names accordingly if you want to use some other dataset. We start by working with the OGR module to open the Shapefile and retrieve its layer information and the extent information. The extent is important because it is used to calculate the size of the output raster width and height values, which must be integers that are represented by the x_res and y_res variables. Note that the projection of your Shapefile must be in meters not degrees. This is very important since this will NOT work in EPSG:4326 or WGS 84, for example. The reason is that the coordinate units are LAT/LON. This means that WGS84 is not a flat plane projection and cannot be drawn as is. Our x_res and y_res values would evaluate to 0 since we cannot get a real ratio using degrees. This is a result of use not being able to simply subtract coordinate x from coordinate y because the units are in degrees and not in a flat plane meter projection. Now moving on to the raster setup, we'll define the type of raster we want to export as a Gtiff. Then, we can get the correct GDAL driver by the raster type. Once the raster type is set, we can create a new empty raster dataset, passing in the raster file name, width, and height of the raster in pixels, number of raster bands, and finally, the type of raster in GDAL terms, that is, the gdal.GDT_Byte. These five parameters are mandatory to create a new raster. Next, we'll call SetGeoTransform that handles transforming between pixel/line raster space and projection coordinates space. We want to activate the band 1 as it is the only band we have in our raster. Then, we'll assign the no data value for all our empty space around the polygon. The final step is to call the gdal.RasterizeLayer() function and pass in our new raster band Shapefile and the value to assign to the inside of our raster. The value of all the pixels inside our polygon will be 255. See also If you are interested, you can visit the command-line tool gdal_rasterize at http://www.gdal.org/gdal_rasterize.html. You can run this straight from the command line. Converting a raster (GeoTiff) to a vector (Shapefile) using GDAL We have now looked at how we can go from vector to raster, so it is time to go from raster to vector. This method is much more common because most of our vector data is derived from remotely sensed data such as satellite images, orthophotos, or some other remote sensing dataset such as lidar. Getting ready As usual, please enter your python virtual environment with the help of the workon pygeoan_cb command: $ source venvs/pygeoan_cb/bin/activate How to do it... Now let's begin: Import the ogr and gdal modules. Go straight ahead and open the raster that we want to convert by passing it the file name on disk. Then, get the raster band: #!/usr/bin/env python # -*- coding: utf-8 -*- from osgeo import ogr from osgeo import gdal #  get raster datasource open_image = gdal.Open( "../geodata/cadaster_borders-2tone-black-   white.png") input_band = open_image.GetRasterBand(3) Setup the output vector file as a Shapefile with output_shp, and then get the Shapefile driver. 
Now, we can create the output from our driver and create the layer: #  create output datasource output_shp = "../geodata/cadaster_raster" shp_driver = ogr.GetDriverByName("ESRI Shapefile") # create output file name output_shapefile = shp_driver.CreateDataSource( output_shp + ".shp" ) new_shapefile = output_shapefile.CreateLayer(output_shp, srs = None ) Our final step is to run the gdal.Polygonize function that does the heavy lifting by converting our raster to vector. gdal.Polygonize(input_band, None, new_shapefile, -1, [], callback=None) new_shapefile.SyncToDisk() Execute the new script. $ python ch03-06_raster2shp.py How it works... Working with ogr and gdal is similar in all our recipes; we must define the inputs and get the appropriate file driver to open the files. The GDAL library is very powerful and in only one line of code, we can convert a raster to a vector with the gdal. Polygonize function. All the preceding code is simply setup code to define which format we want to work with. We can then set up the appropriate driver to input and output our new file. Summary In this article we covered converting a Shapefile to a PostGIS table using ogr2ogr, batch importing a folder of Shapefiles into PostGIS using ogr2ogr, batch exporting a list of tables from PostGIS to Shapefiles, converting an OpenStreetMap (OSM) XML to a Shapefile, converting a Shapefile (vector) to a GeoTiff (raster), and converting a GeoTiff (raster) to a Shapefile (vector) using GDAL Resources for Article: Further resources on this subject: The Essentials of Working with Python Collections[article] Symbolizers[article] Preparing to Build Your Own GIS Application [article]
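Whichever direction you convert in, it is worth sanity-checking the result before using it in further analysis. The short sketch below is an addition to the recipes above: it uses the same osgeo.ogr bindings to open an output dataset and print each layer's name, feature count, and geometry type. The file name is only an example (the Shapefile created in the last recipe); point it at any vector output you have produced.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Sketch: quick sanity check of a conversion result with the ogr bindings.
from osgeo import ogr

check_file = "../geodata/cadaster_raster.shp"  # example path, adjust as needed

datasource = ogr.Open(check_file)
if datasource is None:
    raise SystemExit("could not open " + check_file)

for index in range(datasource.GetLayerCount()):
    layer = datasource.GetLayer(index)
    definition = layer.GetLayerDefn()
    print("layer name    : " + layer.GetName())
    print("feature count : " + str(layer.GetFeatureCount()))
    print("geometry type : " + ogr.GeometryTypeToName(definition.GetGeomType()))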

HTML5 APIs

Packt
03 Nov 2015
6 min read
 In this article by Dmitry Sheiko author of the book JavaScript Unlocked we will create our first web component. (For more resources related to this topic, see here.) Creating the first web component You might be familiar with HTML5 video element (http://www.w3.org/TR/html5/embedded-content-0.html#the-video-element). By placing a single element in your HTML, you will get a widget that runs a video. This element accepts a number of attributes to set up the player. If you want to enhance this, you can use its public API and subscribe listeners on its events (http://www.w3.org/2010/05/video/mediaevents.html). So, we reuse this element whenever we need a player and only customize it for project-relevant look and feel. If only we had enough of these elements to pick every time we needed a widget on a page. However, this is not the right way to include any widget that we may need in an HTML specification. However, the API to create custom elements, such as video, is already there. We can really define an element, package the compounds (JavaScript, HTML, CSS, images, and so on), and then just link it from the consuming HTML. In other words, we can create an independent and reusable web component, which we then use by placing the corresponding custom element (<my-widget />) in our HTML. We can restyle the element, and if needed, we can utilize the element API and events. For example, if you need a date picker, you can take an existing web component, let's say the one available at http://component.kitchen/components/x-tag/datepicker. All that we have to do is download the component sources (for example, using browser package manager) and link to the component from our HTML code: <link rel="import" href="bower_components/x-tag-datepicker/src/datepicker.js"> Declare the component in the HTML code: <x-datepicker name="2012-02-02"></x-datepicker> This is supposed to go smoothly in the latest versions of Chrome, but this won't probably work in other browsers. Running a web component requires a number of new technologies to be unlocked in a client browser, such as Custom Elements, HTML Imports, Shadow DOM, and templates. The templates include the JavaScript templates. The Custom Element API allows us to define new HTML elements, their behavior, and properties. The Shadow DOM encapsulates a DOM subtree required by a custom element. And support of HTML Imports assumes that by a given link the user-agent enables a web-component by including its HTML on a page. We can use a polyfill (http://webcomponents.org/) to ensure support for all of the required technologies in all the major browsers: <script src="./bower_components/webcomponentsjs/webcomponents.min.js"></script> Do you fancy writing your own web components? Let's do it. Our component acts similar to HTML's details/summary. When one clicks on summary, the details show up. 
So we create x-details.html, where we put component styles and JavaScript with component API: x-details.html <style> .x-details-summary { font-weight: bold; cursor: pointer; } .x-details-details { transition: opacity 0.2s ease-in-out, transform 0.2s ease-in-out; transform-origin: top left; } .x-details-hidden { opacity: 0; transform: scaleY(0); } </style> <script> "use strict"; /** * Object constructor representing x-details element * @param {Node} el */ var DetailsView = function( el ){ this.el = el; this.initialize(); }, // Creates an object based in the HTML Element prototype element = Object.create( HTMLElement.prototype ); /** @lend DetailsView.prototype */ Object.assign( DetailsView.prototype, { /** * @constracts DetailsView */ initialize: function(){ this.summary = this.renderSummary(); this.details = this.renderDetails(); this.summary.addEventListener( "click", this.onClick.bind( this ), false ); this.el.textContent = ""; this.el.appendChild( this.summary ); this.el.appendChild( this.details ); }, /** * Render summary element */ renderSummary: function(){ var div = document.createElement( "a" ); div.className = "x-details-summary"; div.textContent = this.el.dataset.summary; return div; }, /** * Render details element */ renderDetails: function(){ var div = document.createElement( "div" ); div.className = "x-details-details x-details-hidden"; div.textContent = this.el.textContent; return div; }, /** * Handle summary on click * @param {Event} e */ onClick: function( e ){ e.preventDefault(); if ( this.details.classList.contains( "x-details-hidden" ) ) { return this.open(); } this.close(); }, /** * Open details */ open: function(){ this.details.classList.toggle( "x-details-hidden", false ); }, /** * Close details */ close: function(){ this.details.classList.toggle( "x-details-hidden", true ); } }); // Fires when an instance of the element is created element.createdCallback = function() { this.detailsView = new DetailsView( this ); }; // Expose method open element.open = function(){ this.detailsView.open(); }; // Expose method close element.close = function(){ this.detailsView.close(); }; // Register the custom element document.registerElement( "x-details", { prototype: element }); </script> Further in JavaScript code, we create an element based on a generic HTML element (Object.create( HTMLElement.prototype )). Here we could inherit from a complex element (for example, video) if needed. We register a x-details custom element using the earlier one created as prototype. With element.createdCallback, we subscribe a handler that will be called when a custom element created. Here we attach our view to the element to enhance it with the functionality that we intend for it. Now we can use the component in HTML, as follows: <!DOCTYPE html> <html> <head> <title>X-DETAILS</title> <!-- Importing Web Component's Polyfill --> <!-- uncomment for non-Chrome browsers script src="./bower_components/webcomponentsjs/webcomponents.min.js"></script--> <!-- Importing Custom Elements --> <link rel="import" href="./x-details.html"> </head> <body> <x-details data-summary="Click me"> Nunc iaculis ac erat eu porttitor. Curabitur facilisis ligula et urna egestas mollis. Aliquam eget consequat tellus. Sed ullamcorper ante est. In tortor lectus, ultrices vel ipsum eget, ultricies facilisis nisl. Suspendisse porttitor blandit arcu et imperdiet. </x-details> </body> </html> Summary This article covered basically how we can create our own custom advanced elements that can be easily reused, restyled, and enhanced. 
Summary

This article covered how we can create our own custom elements that can be easily reused, restyled, and enhanced. The assets required to render such elements (HTML, CSS, JavaScript, and images) are bundled as Web Components. So, we can now literally build the Web from components, much as buildings are built from bricks.

Resources for Article:

Further resources on this subject:

An Introduction to Kibana [article]
Working On Your Bot [article]
Icons [article]

How to Keep a Simple Django App Up and Running

Liz Tom
02 Nov 2015
4 min read
Welcome back. You might have seen my last blog post on how to deploy a simple Django app using AWS. Quick summary:

1. Spin up an EC2 instance.
2. Install nginx, Django, and gunicorn on your EC2 instance.
3. Turn on gunicorn and nginx.
4. Success.

Well, it is success until you terminate your connection to your EC2 instance. How do we keep the app running even when we terminate gunicorn? Based on the recommendations from those wiser than I, we're going to experiment with Upstart today. From the Upstart website:

> Upstart is an event-based replacement for the /sbin/init daemon
> which handles starting of tasks and services during boot, stopping
> them during shutdown and supervising them while the system is running.

Basically, you can write Upstart jobs not only to keep your app running, but also to run asynchronous boot sequences instead of synchronous ones.

Let's get started

Make sure you have your EC2 instance configured as described in my last blog post. Also make sure nginx and gunicorn are both not running. nginx starts automatically, so make sure you run:

sudo service nginx stop

Since we're using Ubuntu, Upstart comes already installed. You can check which version you have by running:

initctl --version

You should see a little something like this:

initctl (upstart 1.12.1)
Copyright (C) 2006-2014 Canonical Ltd., 2011 Scott James Remnant

This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Time to make our first job

The first step is to cd into /etc/init. If you ls, you'll notice that there are a bunch of .conf files. We're going to be making our own!

vim myjob.conf

Hello World

In order to write an Upstart job, you need to make sure you have either a script block (called a stanza) or an exec line.

description "My first Upstart job"

start on runlevel [2345]
stop on runlevel [!2345]

script
    echo 'hello world'
end script

Now you can start this by running:

sudo service myjob start

To see your awesome handiwork:

cd /var/log/upstart
cat myjob.log

You'll see the output of your very first Upstart job.

Something useful

Now we'll actually get something running.

description "Gunicorn application server for Todo"

start on runlevel [2345]
stop on runlevel [!2345]

respawn
setuid ubuntu
setgid www-data
chdir /home/ubuntu/project-folder

exec projectvirtualenv/bin/gunicorn --workers 2 project.wsgi:application

Save your file and now try:

sudo service myjob start

Visit your public IP address and blamo! You've got your Django app live for the world to see. Close out your terminal window. Is your app still running? It should be. Let's go over a few lines of what your job is doing.

start on runlevel [2345]
stop on runlevel [!2345]

Basically, this means we're going to run our service when the system is at runlevels 2, 3, 4, or 5. Then, when the system is not at any of those (rebooting, shutting down, and so on), we'll stop running our service.

respawn

This tells Upstart to restart our job if it fails, which means we don't need to worry about rerunning all of our commands every single time something goes down. In our case, every time something fails, Upstart will restart our todo app.

setuid ubuntu
setgid www-data
chdir /home/ubuntu/project-folder

Next, we're setting the user and group that own the process, and changing directories into our project directory so we can run gunicorn from the right place.
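As an aside (not part of the original walkthrough), Upstart also supports a pre-start stanza for setup work that has to happen before the main process is launched. A hypothetical addition to this job might look like the following; the log directory path is purely illustrative:

# hypothetical pre-start stanza: runs before the exec line we look at next
# the directory path is an assumption for illustration only
pre-start script
    mkdir -p /home/ubuntu/project-folder/logs
end script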
exec projectvirtualenv/bin/gunicorn --workers 2 project.wsgi:application

Since this is an Upstart job, we need to have at least one script stanza or one exec line, so we have our exec line, which starts gunicorn with 2 workers. We can set all sorts of configuration for gunicorn here as well.

If you're ever wondering whether something went wrong and you want to troubleshoot, just check out your log:

/var/log/upstart/myjob.log

If you want to find out more about Upstart, you should visit their site. This tutorial brushes only the tiniest surface of Upstart; there's a bunch more that it can do for you, but every project has its own needs. Hopefully this tutorial inspires you to go out there and figure out what else you can achieve with some fancy Upstart jobs of your own!

About the Author

Liz Tom is a Creative Technologist at iStrategyLabs in Washington D.C. Liz's passion for full stack development and digital media makes her a natural fit at ISL. Before joining iStrategyLabs, she worked in the film industry doing everything from mopping blood off of floors to managing budgets. When she's not in the office, you can find Liz attempting parkour and going to check out interactive displays at museums.

The Exciting Features of HaxeFlixel

Packt
02 Nov 2015
4 min read
This article by Jeremy McCurdy, the author of the book Haxe Game Development Essentials, uncovers the exciting features of HaxeFlixel. When getting into cross-platform game development, it's often difficult to pick the best tool. There are a lot of engines and languages to choose from, but for 2D games, one of the best options out there is HaxeFlixel.

HaxeFlixel is a game engine written in the Haxe language and powered by the OpenFL framework. Haxe is a cross-platform language and compiler that allows you to write code once and have it run on a multitude of platforms. OpenFL is a framework that expands the Haxe API and gives you easy, uniform ways to handle things such as rendering and audio across different platforms.

Here's a rundown of what we'll look at:

Core features:
  Display
  Audio
  Input
Other useful features:
  Multiplatform support
  Advanced user interface support
  Visual effects

(For more resources related to this topic, see here.)

Core features

HaxeFlixel is a 2D game engine, originally based off the Flash game engine Flixel. So, what makes it awesome? Let's start with the basic things you need: display, audio, and input.

Display

In HaxeFlixel, most visual elements are represented by objects using the FlxSprite class. This can be anything from spritesheet animations to shapes drawn through code, which gives you a simple and consistent way of working with visual elements.

You can handle things such as layering by using the FlxGroup class, which does what its name implies: it groups things together. The FlxGroup class can also be used for collision detection (checking whether objects from group A hit objects from group B), and it acts as an object pool for better memory management. It's really versatile without feeling bloated.

Everything visual is displayed through the FlxCamera class. As the name implies, it's a game camera. It allows you to do things such as scrolling, fullscreen visual effects, and zooming in and out.

Audio

Sound effects and music are handled using a simple but effective sound frontend. It allows you to play sound effects and loop music clips with easy function calls. You can also manage the volume on a per-sound basis, via global volume controls, or a mix of both.

Input

HaxeFlixel supports many methods of input. You can use mouse, touch, keyboard, or gamepad input, which allows you to support players on every platform easily. On desktop platforms, you can easily customize the mouse cursor without the need to write special functionality. The built-in gamepad support covers mappings for the following controllers:

Xbox
PS3
PS4
OUYA
Logitech

Other useful features

HaxeFlixel has a bunch of other cool features that make it a solid choice as a game engine. Among these are multiplatform support, advanced user interface support, and visual effects.

Multiplatform support

HaxeFlixel can be built for many different platforms. Much of this comes from it being built on OpenFL and its stellar cross-platform support. You can build desktop games that will work natively on Windows, Mac, and Linux. You can build mobile games for Android and iOS with relative ease. You can also target the Web by using Flash or the experimental support for HTML5.

Advanced user interface support

By using the flixel-ui add-on library, you can create complex game user interfaces. You can define and set up these interfaces by using XML configuration files.
The flixel-ui library gives you access to a lot of different control types, such as 9-sliced images, check/toggle buttons, text input, tabs, and drop-down menus. You can even localize UI text into different languages by using the firetongue Haxe library.

Visual effects

Another add-on is the effects library. It allows you to warp and distort sprites by using the FlxGlitchSprite and FlxWaveSprite classes, and you can add trails to objects by using the FlxTrail class. Aside from the add-on library, HaxeFlixel also has built-in support for 2D particle effects, camera effects such as screen flashes and fades, and screen shake for added impact.

Summary

In this article, we discussed several features of HaxeFlixel. This includes the core features of display, audio, and input. We also covered the additional features of multiplatform support, advanced user interface support, and visual effects.

Resources for Article:

Further resources on this subject:

haXe 2: The Dynamic Type and Properties [article]
Being Cross-platform with haXe [article]
haXe 2: Using Templates [article]

Working With Local and Remote Data Sources

Packt
02 Nov 2015
9 min read
In this article by Jason Kneen, the author of the book Appcelerator Titanium Smartphone Application Development Cookbook - Second Edition, we'll cover the following recipes:

Reading data from remote XML via HTTPClient
Displaying data using a TableView
Enhancing your TableViews with custom rows
Filtering your TableView with the SearchBar control
Speeding up your remote data access with Yahoo! YQL and JSON
Creating a SQLite database
Saving data locally using a SQLite database
Retrieving data from a SQLite database
Creating a "pull to refresh" mechanism in iOS

(For more resources related to this topic, see here.)

As a Titanium developer, fully understanding the methods available for reading, parsing, and saving data is fundamental to the success of the apps you'll build. Titanium provides you with all the tools you need to make everything from simple XML or JSON calls over HTTP to the implementation of local, relational SQL databases. In this article, we'll cover not only the fundamental methods of implementing remote data access over HTTP, but also how to store and present that data effectively using TableViews, TableRows, and other customized user interfaces.

Prerequisites

You should have a basic understanding of both the XML and JSON data formats, which are widely used and standardized methods of transporting data across the Web. Additionally, you should understand what Structured Query Language (SQL) is and how to create basic SQL statements such as Create, Select, Delete, and Insert. There is a great beginners' introduction to SQL at http://sqlzoo.net if you need to refer to tutorials on how to run common types of database queries.

Reading data from remote XML via HTTPClient

The ability to consume and display feed data from the Internet, via RSS feeds or alternate APIs, is the cornerstone of many mobile applications. More importantly, many services that you may wish to integrate into your app will probably require you to do this at some point or another, so it is vital to understand and be able to implement remote data feeds and XML.

Getting ready

To prepare for this recipe, open Titanium Studio, log in, and create a new mobile project. Select Classic and Default Project, then enter MyRecipes as the name of the app, and fill in the rest of the details with your own information, as you've done previously.

How to do it...

Now that our project shell is set up, let's get down to business! First, open your app.js file and replace its contents with the following:

// this sets the background color of the master View (when there are no windows/tab groups on it)
Ti.UI.setBackgroundColor('#000');

// create tab group
var tabGroup = Ti.UI.createTabGroup();

var tab1 = Ti.UI.createTab({
    icon: 'cake.png',
    title: 'Recipes',
    window: win1
});
var tab2 = Ti.UI.createTab({
    icon: 'heart.png',
    title: 'Favorites',
    window: win2
});

//
// add tabs
//
tabGroup.addTab(tab1);
tabGroup.addTab(tab2);

// open tab group
tabGroup.open();

This will get a basic TabGroup in place, but we need two windows, so we create two more JavaScript files called recipes.js and favorites.js, and create a Window instance in each of them. In recipes.js, insert the following code.
Do the same with favorites.js, ensuring that you change the title of the Window to Favorites:

//create an instance of a window
module.exports = (function() {
    var win = Ti.UI.createWindow({
        title: 'Recipes',
        backgroundColor: '#fff'
    });
    return win;
})();

Next, go back to app.js, and just after the place where the TabGroup is defined, add this code:

var win1 = require("recipes");
var win2 = require("favorites");

Open the recipes.js file. This is the file that'll hold our code for retrieving and displaying recipes from an RSS feed. Type in the following code at the top of your recipes.js file; this code will create an HTTPClient and read in the feed XML from the recipe's website:

//declare the http client object
var xhr = Ti.Network.createHTTPClient();

function refresh() {
    //this method will process the remote data
    xhr.onload = function() {
        console.log(this.responseText);
    };

    //this method will fire if there's an error in accessing the remote data
    xhr.onerror = function() {
        //log the error to our Titanium Studio console
        console.log(this.status + ' - ' + this.statusText);
    };

    //open up the recipes xml feed
    xhr.open('GET', 'http://rss.allrecipes.com/daily.aspx?hubID=79');

    //finally, execute the call to the remote feed
    xhr.send();
}

refresh();

Try running the emulator now for either Android or iPhone. You should see two tabs appear on the screen. After a few seconds, there should be a stack of XML data printed to your Appcelerator Studio console log.

How it works…

If you are already familiar with JavaScript for the Web, this should make a lot of sense to you. Here, we created an HTTPClient using the Ti.Network namespace, and opened a GET connection to the URL of the feed from the recipe's website using an object called xhr. By implementing the onload event listener, we can capture the XML data that has been retrieved by the xhr object. In the source code, you'll notice that we have used console.log() to echo information to the Titanium Studio console, which is a great way of debugging and following events in our app. If your connection and GET request were successful, you should see a large XML string output in the Titanium Studio console log.

The final part of the recipe is small but very important: calling the xhr object's send() method. This kicks off the GET request; without it, your app would never load any data. It is important to note that you'll not receive any errors or warnings if you forget to implement xhr.send(), so if your app is not receiving any data, this is the first place to check.

If you are having trouble parsing your XML, always check whether it is valid first! Opening the XML feed in your browser will normally provide you with enough information to determine whether your feed is valid or has broken elements.
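As a brief aside (not from the original recipe), Ti.Network.createHTTPClient can also take its callbacks and a request timeout as creation properties, which keeps the setup in one place. The following sketch shows the same call written that way; the 10-second timeout is an arbitrary value chosen for illustration:

//a sketch of the same client with handlers and a timeout passed up front
var xhr = Ti.Network.createHTTPClient({
    //give up if the feed hasn't responded within 10 seconds (arbitrary value)
    timeout: 10000,
    onload: function() {
        console.log(this.responseText);
    },
    onerror: function(e) {
        console.log(this.status + ' - ' + this.statusText);
    }
});
xhr.open('GET', 'http://rss.allrecipes.com/daily.aspx?hubID=79');
xhr.send();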
Displaying data using a TableView

TableViews are one of the most commonly used components in Titanium. Almost all of the native apps on your device utilize tables in some shape or form. They are used to display large lists of data in an effective manner, allowing for scrolling lists that can be customized visually, searched through, or drilled down to expose child views. Titanium makes it easy to implement TableViews in your application, so in this recipe, we'll implement a TableView and use our XML data feed from the previous recipe to populate it with a list of recipes.

How to do it...

Once we have connected our app to a data feed and we're retrieving XML data via the XHR object, we need to be able to manipulate that data and display it in a TableView component. Firstly, we need to create an array object called data at the top of our refresh function in the recipes.js file; this array will hold all of the information for our TableView. Then, we need to disseminate the XML, read in the required elements, and populate our data array object, before we finally create a TableView and set its data to our data array. Replace the refresh function with the following code:

function refresh() {
    var data = []; //empty data array

    //declare the http client object
    var xhr = Ti.Network.createHTTPClient();

    //create the table view
    var tblRecipes = Ti.UI.createTableView();
    win.add(tblRecipes);

    //this method will process the remote data
    xhr.onload = function() {
        var xml = this.responseXML;

        //get the item nodelist from our response xml object
        var items = xml.documentElement.getElementsByTagName("item");

        //loop each item in the xml
        for (var i = 0; i < items.length; i++) {
            //create a table row
            var row = Ti.UI.createTableViewRow({
                title: items.item(i).getElementsByTagName("title").item(0).text
            });

            //add the table row to our data[] object
            data.push(row);
        } //end for loop

        //finally, set the data property of the tableView to our data[] object
        tblRecipes.data = data;
    };

    //open up the recipes xml feed
    xhr.open('GET', 'http://rss.allrecipes.com/daily.aspx?hubID=79');

    //finally, execute the call to the remote feed
    xhr.send();
}

The TableView now lists the titles of our recipes from the XML feed.

How it works...

The first thing you'll notice is that we take the response data, extract all the elements that match the name item, and assign them to items. This gives us a node list that we can loop through, assigning each individual item to the data array object we created earlier.

From there, we create our TableView by calling the Ti.UI.createTableView() function. You should notice almost immediately that many of our regular properties are also used by tables, including width, height, and positioning. In this case, we did not specify these values, which means that, by default, the TableView will occupy the full screen.

A TableView has an extra, and important, property: data. The data property accepts an array of data, the values of which can either be used dynamically (as we have done here with the title property) or be assigned to the subcomponent children of a TableRow. As you begin to build more complex applications, you'll come to appreciate just how flexible table-based layouts can be.

Summary

In this article, we covered the fundamental methods of implementing remote data access over HTTP and displaying the retrieved data in a TableView. Many services that you may wish to integrate into your app will require you to do this at some point or another, so it is vital to understand and be able to implement remote data feeds and XML.

Resources for Article:

Further resources on this subject:

Mobile First Bootstrap [article]
Anatomy of a Sprite Kit project [article]
Designing Objects for 3D Printing [article]
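As a closing aside (not part of the original recipe), rows added this way can respond to taps through the TableView's click event, which is the usual next step once the list is on screen. A small sketch, assuming the tblRecipes table from the code above:

//a sketch: react when the user taps a row in the recipes table
tblRecipes.addEventListener('click', function(e) {
    //e.index is the tapped row's position; e.row is the TableViewRow itself
    console.log('Tapped row ' + e.index + ': ' + e.row.title);
});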