
How-To Tutorials


Implementing OpenStack Networking and Security

Packt
05 Feb 2016
8 min read
In this article, written by Omar Khedher, author of Mastering OpenStack, we will explore the various aspects of networking and security in OpenStack. A major part of the article is focused on presenting the different security layouts by using Neutron. We will discuss the following topics:

- Understanding how Neutron facilitates network management in OpenStack
- Using security groups to enforce a security layer for instances

The story of an API

By analogy with the OpenStack compute service, which exposes an API offering a virtual server abstraction for compute resources, the network service does the same for network resources. It brings a new generation of virtualization to network resources such as networks, subnets, and ports, which can be summarized in the following schema:

- Network: An abstraction for layer 2 network segmentation, similar to VLANs
- Subnet: The associated abstraction layer for a block of IPv4/IPv6 addresses per network
- Port: The associated abstraction layer used to attach a virtual NIC of an instance to a network
- Router: An abstraction for layer 3 that is used to perform routing between networks
- Floating IP: Used to perform static public IP mapping from external to internal networks

Security groups

Imagine a scenario where you have to apply certain traffic management rules to a dozen compute instances. Assigning a set of rules to a specific group of nodes is much easier than going through each node one at a time. Security groups enclose all the aspects of the rules that are applied to the ingoing and outgoing traffic of instances, including:

- The source and receiver, which will allow or deny traffic to instances from either internal OpenStack IP addresses or from the rest of the world
- The protocols to which the rule will apply, such as TCP, UDP, and ICMP
- Egress/ingress traffic management for a Neutron port

In this way, OpenStack offers an additional security layer on top of the firewall rules that are available on the compute instance. The purpose is to manage traffic to several compute instances from one security group. Bear in mind that the networking security groups provide more granular traffic filtering than the compute firewall rules, since they are applied per port instead of per instance. Eventually, network security rules can be created in different ways. For more information on how iptables works on Linux, https://www.centos.org/docs/5/html/Deployment_Guide-en-US/ch-iptables.html is a very useful reference.

Manage the security groups using Horizon

From Horizon, in the Access and Security section, you can add a security group and name it, for example, PacktPub_SG. Then, a simple click on Edit Rules will do the trick. The following example illustrates how this network security function can help you understand how traffic, both ingress and egress, can be controlled:

The previous security group contains four rules. The first and second lines are rules that open all outgoing traffic for IPv4 and IPv6 respectively. The third line allows inbound traffic by opening the ICMP port, while the last one opens port 22 for SSH on the inbound interface. You might notice the presence of the CIDR fields, which are essential to know about. Based on the CIDR, you allow or restrict traffic over the specified port.
For example, using a CIDR of 0.0.0.0/0 will allow traffic from all IP addresses over the port that was mentioned in your rule, while a CIDR of 32.32.15.5/32 will restrict traffic to a single host with the IP 32.32.15.5. If you would like to specify a range of IPs in the same subnet, you can use a CIDR notation such as 32.32.15.1/24, which will restrict traffic to the IP addresses starting with 32.32.15.*; other IP addresses will not match the rule. The naming of the security group must be unique per project.
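As a quick sketch of combining both ideas (this command is ours, not from the original text, although it uses the same --remote-ip-prefix flag that appears in the DMZ example later in this article), an SSH rule restricted to that single host could be created as follows:

    # neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress --remote-ip-prefix 32.32.15.5/32 PacktPub_SG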
Manage the security groups using the Neutron CLI

The security groups can also be managed by using the Python Neutron command-line interface. Wherever you run the Neutron daemon, you can list, for example, all the present security groups from the command line in the following way:

    # neutron security-group-list

The preceding command yields the following output:

To demonstrate how the PacktPub_SG security group rules that were illustrated previously are implemented on the host, we can add new rules that allow ingress connections to ping (ICMP) and to establish a secure shell connection (SSH). The ICMP rule is added in the following way:

    # neutron security-group-rule-create --protocol icmp --direction ingress PacktPub_SG

The preceding command produces the following result:

The following command line will add a new rule that allows ingress connections to establish a secure shell connection (SSH):

    # neutron security-group-rule-create --protocol tcp --port-range-max 22 --direction ingress PacktPub_SG

The preceding command gives the following output:

By default, if no security group has been created, the ports of instances will be associated with the default security group of the project, where all outbound traffic is allowed and all inbound traffic is blocked. You may conclude from the output of the previous command line that it lists the rules associated with the current project ID, not by security group.

Managing the security groups using the Nova CLI

The nova command line also does the same trick if you intend to perform basic security group control, as follows:

    $ nova secgroup-list-rules default

Since we are setting Neutron as our network service controller, we will proceed by using the networking security groups, which expose additional traffic control features. If you are still using the compute API to manage the security groups, you can always refer to the nova.conf file for each compute node to set security_group_api = neutron. To associate a security group with a running instance, it is possible to use the nova client in the following way:

    # nova add-secgroup test-vm1 PacktPub_SG

The following code illustrates the new association of the PacktPub_SG security group with the test-vm1 instance:

    # nova show test-vm1

The following is the result of the preceding command:

One of the best practices for troubleshooting connection issues on running instances is to start by checking the iptables running in the compute node. Eventually, any rule that is added to a security group will be applied to the iptables chains in the compute node. We can check the updated iptables chains in the compute host after applying the security group rules by using the following command:

    # iptables-save

The preceding command yields the following output:

The highlighted rules describe the direction of the packet and the rule that is matched. For example, the inbound traffic to the f7fabcce-f interface will be processed by the neutron-openvswi-if7fabcce-f chain. It is important to know how iptables rules work in Linux. Updating the security groups will also perform changes in the iptables chains. Remember that chains are a set of rules that determine how packets should be filtered. Network packets traverse rules in chains, and it is possible to jump to another chain. You can find different chains per compute host, depending on the network filter setup. If you have already created your own security groups, a series of iptables rules and chains are implemented on every compute node that hosts an instance associated with the corresponding security group. The following example demonstrates a sample update to the current iptables of a compute node that runs instances within the 10.10.10.0/24 subnet and assigns 10.10.10.2 as the default gateway for those instances' IP ranges:

The last rule shown in the preceding screenshot dictates that the traffic leaving the f7fabcce-f interface must be sourced from 10.10.10.2/32 and the FA:16:3E:7E:79:64 MAC address. This rule is useful when you wish to prevent an instance from performing MAC/IP address spoofing. It is possible to test ping and SSH to the instance via the router namespace in the following way:

    # ip netns exec qrouter-5abdeaf9-fbb6-4a3f-bed2-7f93e91bb904 ping 10.10.10.2

The preceding command provides the following output:

Testing SSH to the instance can be done by using the same router namespace, as follows:

    # ip netns exec qrouter-5abdeaf9-fbb6-4a3f-bed2-7f93e91bb904 ssh cirros@10.10.10.2

The preceding command produces the following output:

Web servers DMZ example

In this example, we will show a simple setup of a security group that might be applied to a pool of web servers running on the Compute01, Compute02, and Compute03 nodes. We will allow inbound traffic from the Internet to access WebServer01, AppServer01, and DNSServer01 over HTTP and HTTPS. This is depicted in the following diagram:

Let's see how we can restrict the ingress/egress traffic via the Neutron API:

    $ neutron security-group-create DMZ_Zone --description "allow web traffic from the Internet"
    $ neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 80 --port-range-max 80 DMZ_Zone --remote-ip-prefix 0.0.0.0/0
    $ neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 443 --port-range-max 443 DMZ_Zone --remote-ip-prefix 0.0.0.0/0
    $ neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 53 --port-range-max 53 DMZ_Zone --remote-ip-prefix 0.0.0.0/0

From Horizon, we can see the following security group rules added:

To conclude, we have looked at presenting different security layouts by using Neutron. At this point, you should be comfortable with security groups and their use cases. Further your OpenStack knowledge by designing, deploying, and managing a scalable OpenStack infrastructure with Mastering OpenStack.


Building custom Heat resources

John Belamaric
05 Feb 2016
9 min read
OpenStack Heat orchestration makes it easy to build templates for application deployment and auto-scaling. The built-in resource types offer access to many of the existing OpenStack services. However, you may need to integrate with an internal CMDB or service registry, or configure some other service outside of OpenStack, as you launch your application. In this post, I will explain how you can add your own custom Heat resources to extend Heat orchestration to meet your needs. As example code, I'll use the Heat resources we developed at Infoblox, which can be found at http://github.com/infobloxopen/heat-infoblox.

In our use case, we have an existing management interface for our DNS services, called the grid. In order to scale up the DNS service, we need to orchestrate the addition of members to our grid by making RESTful API calls to the grid master. We built custom Heat resource types to set up the grid to properly configure the new member to serve DNS. These custom resources perform the following operations:

1. Tell the grid master about the new member that will be joining.
2. Configure the networking for one or more interfaces on the member.
3. Configure the licenses for each member.
4. Enable the DNS service on the new member.
5. Configure the "name server group" for the member – that is, configure which zones the member will serve.

With these resources, we can scale up the DNS service for particular sets of domains with a simple Heat CLI command, or even auto-scale based upon the load seen on the instances. We use two different resource types for this, with Infoblox::Grid::Member handling steps 1-4 and Infoblox::Grid::NameServerGroupMember handling step 5.

So, what do we need to do to build a Heat resource? First, we need to understand the main features of a resource. From a developer's standpoint, each resource consists of a property schema, an attribute schema, a set of lifecycle methods, and a resource identifier. It is important to think about whatever actions you need to take in terms of a resource that can be created, updated, or deleted. That is, the way Heat works is to manage resources; sometimes configuration doesn't fit neatly into that concept, but you'll need to find a way to define resources that make sense even so.

Properties are the inputs to the resource creation and update processes, and are specified by the user in the template when utilizing the resource. Attributes, on the other hand, are the run-time data values associated with an existing resource. For example, the Infoblox::Grid::Member resource type, defined in the heat_infoblox/resources/grid_member.py file, has properties such as name and port, but its attributes include the user data to inject during Nova boot. That user data is actually generated on the fly by the resource when it is needed.

The lifecycle methods are called by Heat to create, update, delete, or validate the resource. This is where all the critical logic resides. The resource identifier is generated by the create method, and is used as the input for the delete method or other methods that operate on an existing resource. Thus, it is critical that the resource ID value provides a unique reference to the resource.

When building a new resource type, the first thing to do is understand what the critical properties are that the user will need to set.
These are defined in the properties_schema (this snippet is from the Infoblox::Grid::Member code in the stable/juno branch of the heat-infoblox project; there are some small differences in more recent versions of Heat):

    properties_schema = {
        NAME: properties.Schema(
            properties.Schema.STRING,
            _('Member name.'),
        ),
        MODEL: properties.Schema(
            properties.Schema.STRING,
            _('Infoblox model name.'),
            constraints=[
                constraints.AllowedValues(ALLOWED_MODELS)
            ]
        ),
        LICENSES: properties.Schema(
            properties.Schema.LIST,
            _('List of licenses to pre-provision.'),
            schema=properties.Schema(
                properties.Schema.STRING
            ),
            constraints=[
                constraints.AllowedValues(ALLOWED_LICENSES_PRE_PROVISION)
            ]
        ),
        …etc…

Each property in turn has its own schema that describes its type, any constraints on the input values, whether the property is required or optional, and a default value if appropriate. In many cases, the property itself may be another dictionary with many different additional options that each in turn have their own schema. Each property or sub-property should also include a clear description. These descriptions are shown to the user in newer versions of Horizon, and are critical to making the resource type useful.

Next, you'll need to understand what attributes are needed, if any. Attributes aren't always necessary, but may be needed if the new resource is to be consumed as input to subsequent resources. For example, the Infoblox::Grid::Member resource has a user_data attribute, which is fed into the OS::Nova::Server user_data property when spinning up the Nova instance for the new member. Like properties, attributes are specified with a schema:

    attributes_schema = {
        USER_DATA: attributes.Schema(
            _('User data for the Nova boot process.')),
        NAME_ATTR: attributes.Schema(
            _('The member name.'))
    }

In this case, however, the schema is simpler. Since it is essentially just documenting the outputs for use by template authors, there is no need to specify constraints, defaults, or even data types. Like the properties example, the code snippet above is from the Juno version of heat-infoblox. The newer version allows you to specify a type, though it is still not required.

Finally, you need to specify the lifecycle methods. The handle_create and handle_delete methods are critical and must be implemented. There are a number of other handler methods that can be optionally implemented: handle_update, handle_suspend, and handle_resume are the most commonly implemented. If one of these operations happens asynchronously (such as launching a Nova instance), then you can utilize the handle_<action>_complete method, which will be repeatedly called in a loop until it returns True, after the handle_<action> method is called.
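To make that concrete, here is a minimal sketch of the asynchronous pattern (not from heat-infoblox; the client and task API below are hypothetical, and in recent Heat versions the completion hook is spelled check_create_complete, receiving the value returned by handle_create):

    def handle_create(self):
        # Kick off the asynchronous operation (hypothetical client call)
        task = self.example_client().start_async_create()
        self.resource_id_set(task.resource_name)
        # The return value is handed to the completion check below
        return task.id

    def check_create_complete(self, task_id):
        # Heat calls this repeatedly until it returns True
        return self.example_client().task_status(task_id) == 'COMPLETE'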
Let's take a closer look at the handle_create method defined by Infoblox::Grid::Member. Here is the complete code of this method:

    def handle_create(self):
        mgmt = self._make_port_network_settings(self.MGMT_PORT)
        lan1 = self._make_port_network_settings(self.LAN1_PORT)
        lan2 = self._make_port_network_settings(self.LAN2_PORT)

        name = self.properties[self.NAME]
        nat = self.properties[self.NAT_IP]

        self.infoblox().create_member(name=name, mgmt=mgmt, lan1=lan1,
                                      lan2=lan2, nat_ip=nat)
        self.infoblox().pre_provision_member(
            name,
            hwmodel=self.properties[self.MODEL],
            hwtype='IB-VNIOS',
            licenses=self.properties[self.LICENSES])

        dns = self.properties[self.DNS_SETTINGS]
        if dns:
            self.infoblox().configure_member_dns(
                name,
                enable_dns=dns['enable']
            )

        self.resource_id_set(name)

Breaking this down, we see that the first thing it does is convert the properties into a format understood by the Infoblox RESTful API:

    mgmt = self._make_port_network_settings(self.MGMT_PORT)
    lan1 = self._make_port_network_settings(self.LAN1_PORT)
    lan2 = self._make_port_network_settings(self.LAN2_PORT)

The _make_port_network_settings method here will actually call out to the Neutron API to gather details about the port, and return a JSON structure that represents the configuration of those ports:

    def _make_port_network_settings(self, port_name):
        if self.properties[port_name] is None:
            return None
        port = self.client('neutron').show_port(
            self.properties[port_name])['port']
        if port is None:
            return None
        ipv4 = None
        ipv6 = None
        for ip in port['fixed_ips']:
            if ':' in ip['ip_address'] and ipv6 is None:
                ipv6 = self._make_ipv6_settings(ip)
            else:
                if ipv4 is None:
                    ipv4 = self._make_network_settings(ip)
        return {'ipv4': ipv4, 'ipv6': ipv6}

After that, it calls the methods that interface with the Infoblox API, passing in the properly formatted data that was created based upon the resource properties:

    name = self.properties[self.NAME]
    nat = self.properties[self.NAT_IP]

    self.infoblox().create_member(name=name, mgmt=mgmt, lan1=lan1,
                                  lan2=lan2, nat_ip=nat)
    self.infoblox().pre_provision_member(
        name,
        hwmodel=self.properties[self.MODEL],
        hwtype='IB-VNIOS',
        licenses=self.properties[self.LICENSES])

    dns = self.properties[self.DNS_SETTINGS]
    if dns:
        self.infoblox().configure_member_dns(
            name,
            enable_dns=dns['enable']
        )

Finally, it must set the resource_id value for the resource. This must be unique to the type of resource, so that the handle_delete method will know the appropriate resource to act upon. In our case, the name is sufficient, so we use that:

    self.resource_id_set(name)

Once a resource is created, the template may want to access the attributes we defined for that resource. To make these accessible, you just need to override the _resolve_attribute method, which takes the name of the attribute to resolve:

    def _resolve_attribute(self, name):
        member_name = self.resource_id
        member = self.infoblox().get_member(
            member_name,
            return_fields=['host_name', 'vip_setting', 'ipv6_setting'])[0]
        token = self._get_member_tokens(member)
        LOG.debug("MEMBER for %s = %s" % (name, member))
        if name == self.USER_DATA:
            return self._make_user_data(member, token)
        if name == self.NAME_ATTR:
            return member['host_name']
        return None

This is called as an instance method, so the resource_id is available in the object itself. In our case, we call the Infoblox RESTful API to query for the details about the member referenced in the resource_id, then use that data to generate the attribute requested. That is really all there is to a Heat resource.
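To tie this together, here is a hedged sketch of how a template author might consume such a resource. The property list is abbreviated and the values are illustrative; only the resource type and the user_data attribute wiring into OS::Nova::Server come from the discussion above:

    heat_template_version: 2014-10-16

    resources:
      grid_member:
        type: Infoblox::Grid::Member
        properties:
          name: dns-member-1       # illustrative; other required properties omitted

      member_server:
        type: OS::Nova::Server
        properties:
          image: nios-image        # illustrative
          flavor: m1.medium        # illustrative
          user_data: { get_attr: [grid_member, user_data] }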
The short version is: define the resource ID, attributes, and properties, then use the properties in the RESTful API calls in the handle_* and _resolve_attribute methods to manage your custom resource.

Continue reading our resources to become an OpenStack master. Next up, how to present different security layouts in Neutron.

About the author

John Belamaric is a software and systems architect with nearly 20 years of software design and development experience. His current focus is on cloud network automation. He is a key architect of the Infoblox Cloud products, concentrating on OpenStack integration and development. He brings to this his experience as the lead architect for the Infoblox Network Automation product line, along with a wealth of networking, network management, software, and product design knowledge. He is a contributor to both the OpenStack Neutron and Designate projects. He lives in Bethesda, Maryland with his wife Robin and two children, Owen and Audrey.


Introduction to Akka

Packt
04 Feb 2016
15 min read
In this article, written by Prasanna Kumar Sathyanarayanan and Suraj Atreya (the authors of Reactive Programming with Scala and Akka), we will see what the Akka framework is all about in detail.

Akka is a framework for writing distributed, asynchronous, concurrent, and scalable applications. Akka actors react to the messages that are sent to them; actors can be viewed as passive components unless an external message triggers them. They are a higher-level abstraction and provide a neat way to handle concurrency instead of a traditional multithreaded application. (For more resources related to this topic, see here.)

One example of an Akka actor is handling request-response in a web server. A web server typically handles millions of requests per second, and these requests must be handled concurrently to cater to the users. One way is to have a pool of threads and let these threads accept the requests and hand them off to the actual worker threads. In that case, the thread pool has to be managed by the application developer, including error handling, thread locking, and synchronization, and most often this plumbing is intertwined with the business logic. With actors, in contrast, the thread pool is managed by the Akka engine and actors receive messages asynchronously. Each request can be thought of as a message to the actor, and the actor reacts to the message. Actors are very lightweight event-driven processes; several million actors can exist within a GB of heap memory.

Actor mailbox

Every actor has a mailbox. Since actors communicate exclusively using messages, every actor maintains a queue of messages called the mailbox. Therefore, an actor will read the messages in the order in which they were sent.

Actor systems

An ActorSystem is a heavyweight process, and creating one is the first step before creating actors. During initialization, it allocates 1 to N threads per ActorSystem. Before creating an actor, an actor system must be created, and this process involves the creation of a hierarchical structure of actors. Since an actor can create other actors, the handling of failure is also a vital part of the Akka engine, which handles it gracefully. This design helps to take action if an actor dies or hits an unexpected exception for some reason. When an actor system is first created, three actors are created, as shown in the figure:

- Root guardian (/): The root guardian is the grandparent of all the actors and supervises all the actors underneath it, so there is no supervisor for the root guardian itself. The root guardian is not a real actor, and it will terminate the actor system if it finds any throwables from its children.
- User guardian (/user): The user guardian actor supervises all the actors that are created by users. If the user guardian actor terminates, all its children will also terminate.
- System guardian (/system): This is a special actor that oversees the orderly shutdown of actors while logging remains active. This is achieved by having the system guardian watch the user guardian and initiate a shutdown sequence when the Terminated message is received.

Message passing

The figure at the end of this section shows the different steps that are involved when a message is passed to an actor. For example, let's assume there is a pizza website and a customer wants to order some pizzas.
For simplicity, let's remove the non-essential details, such as billing and other information, and focus on just the ordering of a pizza. If the customer is some kind of an application (PizzaCustomer) and the one who receives orders is a chef (PizzaChef), then each request for a pizza can be illustrated as an asynchronous request to the chef. The figure shows how, when a message is passed to an actor, the different components, such as the mailbox and the dispatcher, do their jobs. Broadly, these are the six steps followed when a message is passed to the actor:

1. A PizzaCustomer creates and uses the ActorSystem. This is the first step before sending a message to an actor.
2. The PizzaCustomer acquires a PizzaChef. In Akka, an actor is created using the actorOf(...) function call. Akka doesn't return the actual actor but instead returns an actor reference, PizzaChef, for safety.
3. The PizzaRequest is sent to this PizzaChef, and the PizzaChef sends this message to the Dispatcher.
4. The Dispatcher enqueues the message into the PizzaChef's actor mailbox.
5. The Dispatcher then puts the mailbox on a thread.
6. Finally, the mailbox dequeues and sends the message to the PizzaChef receive method.

Creating an actor system

An actor system is the first thing that should be created before creating actors. The actor system is created using the following API:

    val system = ActorSystem("Pizza")

The string "Pizza" is just a name given to the actor system.

Creating an ActorRef

The following snippet shows the creation of an actor inside the previously created actor system:

    val pizzaChef: ActorRef = system.actorOf(Props[PizzaChef])

An actor is created using the actorOf(...) function call. The actorOf() call doesn't return the actor itself; instead, it returns a reference to the actor. Once an actor's reference is obtained, clients can send messages using the ActorRef. This is a safe way of communicating between actors, since the state of the actor itself is not manipulated in any way.

Sending a PizzaRequest to the ActorRef

Now that we have an actor system and a reference to the actor, we would like to send requests to the pizzaChef actor reference. We send a message to an actor using !, also called Tell. In the following code snippet, we Tell the message MarinaraRequest to the pizza ActorRef:

    pizzaChef ! MarinaraRequest

Tell is also known as fire-forget: no acknowledgement is returned from a Tell. When the message is sent to an actor, the actor's receive method will receive the message and process it further. receive is a partial function and has the following signature:

    def receive: PartialFunction[Any, Unit]

The return type of receive is Unit, and therefore this function is side-effecting. The following code is what we discussed:

    import akka.actor.{Actor, ActorRef, Props, ActorSystem}

    sealed trait PizzaRequest
    case object MarinaraRequest extends PizzaRequest
    case object MargheritaRequest extends PizzaRequest

    class PizzaChef extends Actor {
      def receive = {
        case MarinaraRequest => println("I have a Marinara request!")
        case MargheritaRequest => println("I have a Margherita request!")
      }
    }

    object PizzaCustomer {
      def main(args: Array[String]): Unit = {
        val system = ActorSystem("Pizza")
        val pizzaChef: ActorRef = system.actorOf(Props[PizzaChef])
        pizzaChef ! MarinaraRequest
        pizzaChef ! MargheritaRequest
      }
    }

The preceding code shows the receive block that handles two kinds of requests: one for MarinaraRequest and the other for MargheritaRequest. These two requests are defined as case objects.
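Before moving on, a brief aside that is not part of the book excerpt: when an acknowledgement is needed, Akka's ask pattern (?) can be used instead of the fire-and-forget Tell; it returns a Future that is completed with the actor's reply. A minimal sketch with a hypothetical replying chef:

    import akka.pattern.ask
    import akka.util.Timeout
    import scala.concurrent.duration._

    case object OrderAccepted

    // A hypothetical chef that replies to the sender instead of printing
    class AckingPizzaChef extends Actor {
      def receive = {
        case MarinaraRequest => sender() ! OrderAccepted
      }
    }

    implicit val timeout: Timeout = Timeout(5.seconds)
    val ackingChef = system.actorOf(Props[AckingPizzaChef])
    val reply = ackingChef ? MarinaraRequest  // Future[Any], completed with OrderAccepted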
Actor message

We saw that ! (Tell) is used when a message needs to be sent, but we didn't discuss how exactly the message is processed. We will now explore how the ideas of the dispatcher and the execution context are used to carry out message passing between actors. In the pizza example, we used two kinds of messages: MarinaraRequest and MargheritaRequest. For simplicity, all that these messages did was print to the console. When PizzaCustomer sent a PizzaRequest to the PizzaChef ActorRef, the message was sent to the dispatcher, which then sends it to the corresponding actor's mailbox.

Mailbox

Every time a new actor is created, a corresponding mailbox is also created. (There are exceptions to this rule, where multiple actors share the same mailbox.) PizzaChef will have a mailbox that stores the messages arriving asynchronously in a FIFO manner. Therefore, when new messages are sent to an actor, Akka guarantees that they are enqueued and dequeued in FIFO order. Here is the signature of the mailbox from the Akka source, which can be found at https://github.com/akka/akka/blob/master/akka-actor/src/main/scala/akka/dispatch/Mailbox.scala:

    private[akka] abstract class Mailbox(val messageQueue: MessageQueue)
        extends ForkJoinTask[Unit]
          with SystemMessageQueue with Runnable

From this signature, we can see that Mailbox takes a MessageQueue as an input. We can also see that it extends Runnable, which suggests that Mailbox is a thread. We will see why the Mailbox is a thread in a bit.

Dispatcher

Dispatchers dispatch actors to threads. There is no one-to-one mapping between actors and threads; if there were, the whole system would crumble under its own weight, and the amount of context switching would outweigh the actual work. Therefore, it is important to understand that creating a number of actors is not equal to creating the same number of threads. The main objective of the dispatcher is to coordinate between the actors, their messages, and the underlying threads. A dispatcher picks the next actor in the queue, based on the dispatcher policy, and the actor's message in the queue. These two are then passed on to one of the available threads in the execution context. To illustrate this point, let's see the code snippet:

    protected[akka] override
    def registerForExecution(mbox: Mailbox, ...): Boolean = {
      ...
      if (mbox.setAsScheduled()) {
        try {
          executorService execute mbox
          true
        }
      }
    }

This code snippet shows us that the dispatcher accepts a Mailbox as a parameter and has an ExecutorService wrapped around it to execute the mailbox. We saw that Mailbox is a thread, and the dispatcher executes this Mailbox against the ExecutorService. When the mailbox's run method is triggered, it dequeues a message from the Mailbox and passes it to the actor for processing.
This is the code snippet of run from Mailbox.scala in the Akka source code, which can be found at https://github.com/akka/akka/blob/master/akka-actor/src/main/scala/akka/dispatch/Mailbox.scala:

    override final def run(): Unit = {
      try {
        if (!isClosed) { //Volatile read, needed here
          processAllSystemMessages() //First, deal with any system messages
          processMailbox() //Then deal with messages
        }
      } finally {
        setAsIdle() //Volatile write, needed here
        dispatcher.registerForExecution(this, false, false)
      }
    }

Actor Path

The interesting thing about the actor system is that actors are created in a hierarchical manner. All the user-created actors are created under the /user actor. The actor path looks very similar to a UNIX file system hierarchy, for example, /home/akka/akka_book. We will see how this is similar to the Akka path in the following code example. Let's take our pizza example and add a few toppings to the pizza, so that whenever a customer issues a MarinaraRequest, he will get extra cheese too:

    class PizzaToppings extends Actor {
      def receive = {
        case ExtraCheese => println("Aye! Extra cheese it is")
        case Jalapeno => println("More Jalapenos!")
      }
    }

    class PizzaSupervisor extends Actor {
      val pizzaToppings =
        context.actorOf(Props[PizzaToppings], "PizzaToppings")

      def receive = {
        case MarinaraRequest =>
          println("I have a Marinara request with extra cheese!")
          println(pizzaToppings.path)
          pizzaToppings ! ExtraCheese
        case MargheritaRequest =>
          println("I have a Margherita request!")
        case PizzaException =>
          throw new Exception("Pizza fried!")
      }
    }

The PizzaSupervisor class is very similar to our earlier example of a pizza actor. However, if you observe carefully, there is another actor created within this PizzaSupervisor, called PizzaToppings. This PizzaToppings is created using context.actorOf(...) instead of system.actorOf(...); therefore, PizzaToppings becomes the child of PizzaSupervisor. The actor path of PizzaSupervisor will look like this:

    akka://Pizza/user/PizzaSupervisor

The actor path for PizzaToppings will look like this:

    akka://Pizza/user/PizzaSupervisor/PizzaToppings

When this main program is run, the actor system is created using system.actorOf(...) and the path of the pizza actor system and its corresponding child are printed, as shown previously:

    object TestActorPath {
      def main(args: Array[String]): Unit = {
        val system = ActorSystem("Pizza")
        val pizza: ActorRef = system.actorOf(Props[PizzaSupervisor],
          "PizzaSupervisor")
        println(pizza.path)
        pizza ! MarinaraRequest
        system.shutdown()
      }
    }

The following is the output:

    akka://Pizza/user/PizzaSupervisor
    I have a Marinara request with extra cheese!
    akka://Pizza/user/PizzaSupervisor/PizzaToppings
    Aye! Extra cheese it is

The name akka in the actor path is fixed, and the user-created actors are shown under user. If you remember from the earlier discussion, all the user-created actors are created under the user guardian; therefore, the actor path shows that these are user-created actors. The name Pizza is the name we gave to the actor system while it was being created, so the hierarchy explains that Pizza is the actor system and all the actors are children below it. In the following figure, we can clearly see the actor hierarchy and its actor path:
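As a small aside (not from the book excerpt), such a path is directly usable at runtime: actorSelection resolves an actor by its path so that messages can be sent to it:

    // Hedged sketch: resolve the toppings actor by the path printed above
    val toppings = system.actorSelection("akka://Pizza/user/PizzaSupervisor/PizzaToppings")
    toppings ! ExtraCheese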
Actor lifecycle

Akka actors have a life cycle that is very useful for writing bug-free concurrent code. Akka follows a philosophy of "let it crash", and it is assumed that actors, too, can crash. But if an actor crashes, several actions can be taken, including restarting it. As usual, let's look at our pizza baking process. As before, we will have an actor to accept the pizza requests, but this time we will see the workflow of the pizza baking process! Using this example, we will see how the actor life cycle works:

    class PizzaLifeCycle extends Actor with ActorLogging {
      override def preStart() = {
        log.info("Pizza request received!")
      }

      def receive = {
        case MarinaraRequest => log.info("I have a Marinara request!")
        case MargheritaRequest => log.info("I have a Margherita request!")
        case PizzaException => throw new Exception("Pizza fried!")
      }

      //Old actor instance
      override def preRestart(reason: Throwable, message: Option[Any]) = {
        log.info("Pizza baking restarted because " + reason.getMessage)
        postStop()
      }

      //New actor instance
      override def postRestart(reason: Throwable) = {
        log.info("New Pizza process started because earlier " + reason.getMessage)
        preStart()
      }

      override def postStop() = {
        log.info("Pizza request finished")
      }
    }

The PizzaLifeCycle actor takes pizza requests, but with additional states. An actor can go through many different states during its lifetime. Let's send some messages to find out what happens with our PizzaLifeCycle actor and how it behaves:

    pizza ! MarinaraRequest
    pizza ! PizzaException
    pizza ! MargheritaRequest

Here is the output for the preceding requests:

    Pizza request received!
    I have a Marinara request!
    Pizza fried!
    java.lang.Exception: Pizza fried!
    at PizzaLifeCycle$$anonfun$receive$1.applyOrElse(PizzaLifeCycle.scala:12)
    at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
    at PizzaLifeCycle.aroundReceive(PizzaLifeCycle.scala:3)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
    Pizza baking restarted because Pizza fried!
    Pizza request finished
    New Pizza process started because Pizza fried!
    Pizza request received!
    I have a Margherita request!

When we sent our first MarinaraRequest, we saw the following in the log:

    Pizza request received!
    I have a Marinara request!

Akka called the preStart() method and then entered the receive block. Then, we simulated an exception by sending PizzaException and, as expected, we got an exception:

    Pizza fried!
    java.lang.Exception: Pizza fried!
      at PizzaLifeCycle$$anonfun$receive$1.applyOrElse(PizzaLifeCycle.scala:12)
      at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
      at PizzaLifeCycle.aroundReceive(PizzaLifeCycle.scala:3)
      at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
    Pizza baking restarted because Pizza fried!
    Pizza request finished

There are some interesting things to note here. Although we got the exception Pizza fried!, we also got two other log messages. The reason for this is quite simple: when an exception occurs, Akka calls preRestart(). preRestart() is called on the old instance of the actor, which gets a chance to clean up some of its resources here; in our example, we just called postStop(). During preRestart(), the old instance prepares to hand off to the new actor instance. Finally, we sent another request, MargheritaRequest, and got these log messages:

    New Pizza process started because Pizza fried!
    Pizza request received!
    I have a Margherita request!
We saw that the old actor instance was stopped, and these requests are handled by a new actor instance: postRestart() is now called on the new actor instance, which calls preStart() to resume the normal operations of our pizza baking process. During the preRestart() and postRestart() methods, we got the reason why the old actor died.

Summary

In this article, you learned about the details of the Akka framework: the actor mailbox, actor systems, how to create an actor system and an ActorRef, how to send a PizzaRequest to an ActorRef, and so on.

Resources for Article:

Further resources on this subject:
- The Design Patterns Out There and Setting Up Your Environment [article]
- Creating First Akka Application [article]
- Content-based recommendation [article]


The Alfresco Platform

Packt
04 Feb 2016
8 min read
In this article by Jeffrey T. Potts and Snehal K Shah, authors of Alfresco Developer Guide, Second Edition, we will discuss the Alfresco architecture. (For more resources related to this topic, see here.)

Alfresco architecture

Many of Alfresco's competitors, particularly in the closed-source space, have sprawling footprints composed of multiple, sometimes competing, technologies that have been acquired and integrated over time. Some have undergone massive infrastructure overhauls over the years, resulting in bizarre vestigial tails of sorts. Luckily, Alfresco doesn't suffer from these traits. On the contrary, Alfresco's architecture is advantageous for the following reasons:

- It is relatively straightforward
- It is built with state-of-the-art frameworks and open source components
- It supports several important content management and related standards

Let's look at each of these characteristics, starting with a high-level look at the Alfresco architecture.

High-level architecture

The following diagram shows Alfresco's high-level architecture. The important takeaways at this point are as follows:

- There are many ways to get content into or out of a repository, whether it's via the protocols on the left-hand side of the diagram or the APIs on the right-hand side.
- Alfresco runs as a web application within a servlet container. In the current release, the web client runs in the same process as the content repository itself.
- Customizations and extensions run as part of the Alfresco web application. An extension mechanism separates customizations from the core product to keep the path clear for future upgrades.
- Metadata resides in a relational database, while content files and Solr/Lucene indexes reside on the filesystem. The diagram shows the content residing on the same physical filesystem as Alfresco, but other types of file storage could be used as well.
- The WCM Virtualization Server is an instance of Tomcat with Alfresco configurations and JAR files. The Virtualization Server is used to serve live previews of a website as it is being worked on. It can run on the same physical machine as Alfresco, or it can be split out onto a separate node.

Add-ons

Add-ons are pieces of functionality not found in the core Alfresco distribution. If you are working with a binary distribution, this means that you'll have additional files to download and install on top of the base Alfresco installation. Add-ons are provided by Alfresco, third-party software vendors, and members of the Alfresco community, such as partners and customers. Alfresco makes several add-on modules available for the taking, such as Records Management and Facebook integration. Kofax, a software vendor, provides add-on software that integrates Alfresco with Kofax imaging solutions. Members of the Alfresco community create and share add-on modules via the Alfresco Forge, a website that Alfresco has set up for this purpose; however, a majority of what is available there are language packs used to localize the Alfresco web client.

Open source components

One of the reasons Alfresco has been able to create a viable offering so quickly is that they didn't start from scratch. Alfresco's engineers assembled the product from many finer-grained open source components. Instead of reinventing the wheel, they used proven components. This saved them time, of course, but it also resulted in a more robust, standards-based product. It also eases the transition for people who are new to the platform.
If a developer already knows JavaServer Faces or Spring, for example, many of the customization concepts are going to be familiar to them. The following list shows some of the major open source components used to build Alfresco and what each one does in the product:

- Apache Lucene (http://lucene.apache.org/): Full-text and metadata search.
- Apache Solr (http://lucene.apache.org/solr/): Alfresco allows the use of a Solr index instead of Lucene.
- Hibernate (http://www.hibernate.org/) and iBatis (http://ibatis.apache.org/): Database persistence; both are supported by Alfresco.
- Apache MyFaces (http://myfaces.apache.org/): JavaServer Faces components in the web client.
- FreeMarker (http://freemarker.org/): Web Script Framework views, custom views in the web client, web client dashlets, and e-mail templates.
- Mozilla Rhino JavaScript Engine (http://www.mozilla.org/rhino/): Web Script Framework controllers, server-side JavaScript, and actions.
- OpenSymphony Quartz (http://www.opensymphony.com/quartz/): Scheduling of asynchronous processes.
- Spring ACEGI (http://www.acegisecurity.org/): Security (authorization), roles, and permissions.
- Apache Axis (http://ws.apache.org/axis/): Web services.
- OpenOffice.org (http://www.openoffice.org/): Conversion of office documents to the PDF format.
- Apache FOP (http://xmlgraphics.apache.org/fop/): Transformation of XSL:FO to the PDF format.
- Apache POI (http://poi.apache.org/): Metadata extraction from Microsoft Office files.
- JBoss jBPM (http://www.jboss.com/products/jbpm): Advanced workflow.
- Activiti (http://activiti.org/): Advanced workflow.
- ImageMagick (http://www.imagemagick.org): Image file manipulation.
- Chiba (http://chiba.sourceforge.net/): Web form generation based on XForms.
- Spring Surf (http://www.springsurf.org/): Used by Alfresco Share.

Developers looking to contribute significant product enhancements to Alfresco, or those making major, deep customizations to the product, may require experience with a particular component, depending on exactly what they are trying to do. Everyone else will be able to customize and extend Alfresco using basic Java and web application development skills.

Major standards and protocols that are supported

Software vendors love buzzwords. As new acronyms climb the hype cycle, vendors scramble to figure out how they can at least appear to support the standard or protocol so that prospective clients can check that box on the RFP (don't even get me started on RFPs). Commercial open source vendors are still software vendors and are thus no less guilty of this practice. But because open source software is developed in the open by a community of developers, its compliance with standards tends to be more genuine. It makes more sense for an open source project to implement a standard than to go off in some new direction, because it saves time, promotes interoperability with other open source projects, and stays true to what open source is all about: freedom and choice. Here, then, are the significant standards and protocols that Alfresco supports:

- FTP: Content can be contributed to the repository via FTP. Secure FTP is not yet supported.
- WebDAV: WebDAV is an HTTP-based protocol that's commonly supported by content management vendors and is one way to make the repository look like a filesystem.
- CIFS: CIFS lets the repository be mounted as a shared drive by other machines.
  As opposed to WebDAV, systems (and people) can't tell the difference between an Alfresco repository mounted as a shared drive through CIFS and a traditional file server.
- The JCR API (JSR-170): JCR is a Java API for working with content repositories, such as Alfresco. Alfresco is a JCR-compliant repository. There are two levels of JCR compliance; Alfresco is level-1-compliant and almost level-2-compliant.
- The Portlet API (JSR-168): The Web Script Framework lets you define a RESTful API for the repository. Web Scripts can return XML, HTML, JSON, and JSR-168 portlets. In the current release, this requires the portal and Alfresco to run in the same JVM, but this restriction may go away in the near future.
- SOAP: The Alfresco Web Services API uses SOAP-based web services.
- OpenSearch (http://www.opensearch.org): Alfresco repositories can be configured as an OpenSearch data source, which allows Alfresco to participate in federated search queries. OpenSearch queries can be executed from the web client as well. This means that if your organization has several repositories that are OpenSearch-compliant (including non-Alfresco repositories), they can be searched from within the web client.
- XForms and XML Schema: Web forms are defined using XML Schema. Not all XForms widgets are supported.
- XSLT and XSL:FO: Web form data can be transformed using XSL 1.0.
- LDAP: Alfresco can authenticate against an LDAP directory or a Microsoft Active Directory server.

Summary

In this article, we took a look at how Alfresco is assembled from open source components, runs as a web application within an application server, and exposes the repository through many different protocols and APIs. Alfresco can also be customized. We explored how some types of customization are very basic (more configuration than customization) and can be performed by end users through the web client; others are more advanced and require coding. Advanced customization is the subject of this book.

Resources for Article:

Further resources on this subject:
- Core Ephesoft Features [article]
- Introducing Liferay for Your Intranet [article]
- Alfresco Web Scripts [article]


Going Mobile First

Packt
03 Feb 2016
16 min read
In this article by Silvio Moreto Pereira, author of the book Bootstrap By Example, we will focus on mobile design and how to change the page layout for different viewports, change the content, and more. In this article, you will learn the following:

- Mobile first development
- Debugging for any device
- The Bootstrap grid system for different resolutions

(For more resources related to this topic, see here.)

Make it greater

Maybe you have asked yourself (or even searched for) the reason for the mobile first paradigm movement. It is simple and makes complete sense for speeding up your development pace. The main argument for the mobile first paradigm is that it is easier to make it greater than to shrink it. In other words, if you first make a desktop version of the web page (known as responsive design or mobile last) and then go on to adjust the website for mobile, there is a 99% probability of breaking the layout at some point, and you will have to fix a lot of things in both the mobile and desktop versions. On the other hand, if you first create the mobile version, the website will naturally use (or show) less content than the desktop version. So, it will be easier to just add the content, place things in the right spots, and create the full responsiveness stack. The following image tries to illustrate this concept. Going mobile last, you will get a degraded, warped, and crappy layout, and going mobile first, you will get a progressively enhanced, future-friendly awesome web page. See what happens to the poor elephant in this metaphor:

Bootstrap and the mobile first design

At the beginning of Bootstrap, there was no concept of mobile first, so it was made for designing responsive web pages. However, by version 3 of the framework, the concept of mobile first had become very solid in the community. To support this, the whole code of the scaffolding system was rewritten to be mobile first from the start. They decided to reformulate how the grid is set up instead of just adding mobile styles. This had a great impact on compatibility with versions older than 3, but was crucial for making the framework even more popular.

To ensure the proper rendering of the page, set the correct viewport in the <head> tag:

    <meta name="viewport" content="width=device-width, initial-scale=1">

How to debug different viewports in the browser

Here, you will learn how to debug different viewports using the Google Chrome web browser. If you already know this, you can skip this section, although it might be helpful to refresh the steps. In the Google Chrome browser, open the Developer tools option. There are many ways to open this menu:

- Right-click anywhere on the page and click on the last option, called Inspect.
- Go to the settings (the sandwich button on the right-hand side of the address bar), click on More tools, and finally on Developer tools.
- The shortcut to open it is Ctrl (cmd for OS X users) + Shift + I. F12 also works in Windows (an Internet Explorer legacy…).

With Developer tools open, click on the mobile phone icon to the left of the magnifier, as shown in the following image:

It will change the display of the viewport to a certain device, and you can also set a specific network usage to limit the data bandwidth. Chrome will show a message telling you that, for proper visualization, you may need to reload the page to get the correct rendering. For the next image, we have activated the Device mode for an iPhone 5 device.
When we set this viewport, the problems start to appear, because we did not build the web page with the mobile first methodology.

Bootstrap scaffolding for different devices

Now that we know more about mobile first development and its important role in Bootstrap starting from version 3, we will cover Bootstrap usage for different devices and viewports. To do this, we must apply the column class for the specific viewport; for example, for medium displays, we use the .col-md-* class. The following list presents the different classes and resolutions that apply to each viewport:

- Extra small devices (phones, < 544 px / 34 em): class prefix .col-xs-*; the grid stays horizontal at all times; container and column widths are set automatically.
- Small devices (tablets, >= 544 px / 34 em and < 768 px / 48 em): class prefix .col-sm-*; the grid collapses at the start and fits the column grid above this breakpoint; container fixed width 544 px (34 rem); column fixed width ~44 px (2.75 rem).
- Medium devices (desktops, >= 768 px / 48 em and < 992 px / 62 em): class prefix .col-md-*; collapses at the start and fits the column grid; container fixed width 750 px (45 rem); column fixed width ~62 px (3.86 rem).
- Large devices (desktops, >= 992 px / 62 em and < 1200 px / 75 em): class prefix .col-lg-*; collapses at the start and fits the column grid; container fixed width 970 px (60 rem); column fixed width ~81 px (5.06 rem).
- Extra large devices (desktops, >= 1200 px / 75 em): class prefix .col-xl-*; collapses at the start and fits the column grid; container fixed width 1170 px (72.25 rem); column fixed width ~97 px (6.06 rem).

All the viewports share the same 12-column grid.
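To make these prefixes concrete before we apply them to our page, here is a small illustration of our own (not taken from the example page) of how the prefixes combine on a single element:

    <!-- One column that is full width on phones, half width on tablets,
         a third on medium desktops, and a quarter on large screens -->
    <div class="row">
      <div class="col-xs-12 col-sm-6 col-md-4 col-lg-3">
        Content
      </div>
    </div>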
Mobile and extra small devices

To exemplify the usage of Bootstrap scaffolding on mobile devices, we will take a predefined web page and adapt it to mobile devices, using the Chrome mobile debug tool with the iPhone 5 device. You may have noticed that for small devices, Bootstrap just stacks each column without referring to different rows. In the layout, some of the Bootstrap rows may seem fine in this visualization, although the one in the following image is a bit strange, as the portion of code and the image are not on the same line, as they are supposed to be:

To fix this, we need to add the column class prefix for extra small devices, which is .col-xs-*, where * is the size of the column from 1 to 12. Add the .col-xs-5 and .col-xs-7 classes to the respective columns of this row. Refresh the page and you will see how the columns are now placed side by side:

    <div class="row">
      <!-- row 3 -->
      <div class="col-md-3 col-xs-5">
        <pre>&lt;p&gt;I love programming!&lt;/p&gt;
          &lt;p&gt;This paragraph is on my landing page&lt;/p&gt;
          &lt;br/&gt;
          &lt;br/&gt;
          &lt;p&gt;Bootstrap by example&lt;/p&gt;
        </pre>
      </div>
      <div class="col-md-9 col-xs-7">
        <img src="imgs/center.png" class="img-responsive">
      </div>
    </div>

Although the image of the web browser on the right is too small, it would be better if it were a vertically stretched image, such as a mobile phone. (What a coincidence!) To do this, we need to hide the browser image on extra small devices and display an image of a mobile phone instead. Add the new mobile image below the existing one as follows; you will see both images stacked up vertically in the right column:

    <img src="imgs/center.png" class="img-responsive">
    <img src="imgs/mobile.png" class="img-responsive">

Then, we need to use the new concept of availability classes present in Bootstrap. We need to hide the browser image and display the mobile image just for this kind of viewport, which is extra small. For this, add the .hidden-xs class to the browser image and the .visible-xs class to the mobile image:

    <div class="row">
      <!-- row 3 -->
      <div class="col-md-3 col-xs-5">
        <pre>&lt;p&gt;I love programming!&lt;/p&gt;
          &lt;p&gt;This paragraph is on my landing page&lt;/p&gt;
          &lt;br/&gt;
          &lt;br/&gt;
          &lt;p&gt;Bootstrap by example&lt;/p&gt;
        </pre>
      </div>
      <div class="col-md-9 col-xs-7">
        <img src="imgs/center.png" class="img-responsive hidden-xs">
        <img src="imgs/mobile.png" class="img-responsive visible-xs">
      </div>
    </div>

Now this row looks nice! The browser image is hidden on extra small devices and the mobile image is shown for this viewport instead. The following image shows the final display of this row:

Moving on, the next Bootstrap .row contains a testimonial surrounded by two images. It would be nicer if the testimonial appeared first and both images were displayed after it, splitting the same row, as shown in the following image. For this, we will repeat almost the same techniques presented in the last example:

The first change is to hide the Bootstrap image using the .hidden-xs class. After this, create another image tag with the Bootstrap image in the same column as the PACKT image. The final code of the row should be as follows:

    <div class="row">
      <div class="col-md-3 hidden-xs">
        <img src="imgs/bs.png" class="img-responsive">
      </div>
      <div class="col-md-6 col-xs-offset-1 col-xs-11">
        <blockquote>
          <p>Lorem ipsum dolor sit amet, consectetur
            adipiscing elit. Integer posuere erat a ante.</p>
          <footer>Testimonial from someone at
            <cite title="Source Title">Source Title</cite></footer>
        </blockquote>
      </div>
      <div class="col-md-3 col-xs-7">
        <img src="imgs/packt.png" class="img-responsive">
      </div>
      <div class="col-xs-5 visible-xs">
        <img src="imgs/bs.png" class="img-responsive">
      </div>
    </div>

We did plenty of things here; all the changes are highlighted. The first is the .hidden-xs class in the first column of the Bootstrap image, which hides the column for this viewport. Afterward, in the testimonial, we changed the grid for mobile, adding a column offset of size 1 and making the testimonial fill the rest of the row with the .col-xs-11 class. Lastly, as we said, we want to split both the PACKT and Bootstrap images in the same row. For this, we make the first image column fill seven columns with the .col-xs-7 class. The other image column is a little more complicated: as it is visible just for mobile devices, we add the .col-xs-5 class, which makes the element span five columns on extra small devices, and we hide the column for other viewports with the .visible-xs class.

As you can see, this row has more than 12 columns (one offset, 11 for the testimonial, seven for the PACKT image, and five for the Bootstrap image). This process is called column wrapping, and it happens when you put more than 12 columns in the same row; the groups of extra columns wrap to the next lines.

Availability classes

Just like .hidden-*, there are .visible-*-* classes for each variation of display and column from 1 to 12. There is also a way to change the display CSS property using the .visible-*-* classes, where the last * means block, inline, or inline-block. Use this to set the proper display behavior for different visualizations.
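For instance, here is a small illustration of our own (not part of the example page) using the block and inline display variants, which are available from Bootstrap 3.2 onward:

    <!-- Shown only on extra small devices, rendered inline -->
    <span class="visible-xs-inline">Phone-only note</span>
    <!-- Shown only on medium desktops, rendered as a block -->
    <div class="visible-md-block">Desktop-only panel</div>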
Note that we made the testimonial appear first, with one column of offset, and both images appear below it:

Tablets and small devices

Completing the mobile viewports, we move on to tablets and small devices, which range from 544 px (34 em) to 768 px (48 em). Most devices of this kind are tablets or old desktop monitors. To work with this example, we are using the iPad mini in the portrait position. For this resolution, Bootstrap handles the rows just as it does on extra small devices, stacking up each of the columns and making them fill the total width of the page. So, if we do not want this to happen, we have to set the column span for each element manually with the .col-sm-* class. Looking at how our example is presented now, there are two main problems. The first one is that the headings are on separate lines, whereas they could share the same line. For this, we just need to apply the grid classes for small devices with the .col-sm-6 class on each column, splitting them into equal sizes:

<div class="row">
  <div class="col-md-offset-4 col-md-4 col-sm-6">
    <h3>
      Some text with <small>secondary text</small>
    </h3>
  </div>
  <div class="col-md-4 col-sm-6">
    <h3>
      Add to your favorites
      <small>
        <kbd class="nowrap"><kbd>ctrl</kbd> + <kbd>d</kbd></kbd>
      </small>
    </h3>
  </div>
</div>

The result should be as follows:

The second problem in this viewport is, again, the testimonial row! Due to the classes that we added for the mobile viewport, the testimonial now has an offset column and a different column span. We must add the classes for small devices and rebuild this row with the Bootstrap image on the left, the testimonial in the middle, and the PACKT image on the right:

<div class="row">
  <div class="col-md-3 hidden-xs col-sm-3">
    <img src="imgs/bs.png" class="img-responsive">
  </div>
  <div class="col-md-6 col-xs-offset-1 col-xs-11
    col-sm-6 col-sm-offset-0">
    <blockquote>
      <p>Lorem ipsum dolor sit amet, consectetur
        adipiscing elit. Integer posuere erat a ante.</p>
      <footer>Testimonial from someone at
        <cite title="Source Title">Source Title</cite></footer>
    </blockquote>
  </div>
  <div class="col-md-3 col-xs-7 col-sm-3">
    <img src="imgs/packt.png" class="img-responsive">
  </div>
  <div class="col-xs-5 hidden-sm hidden-md hidden-lg">
    <img src="imgs/bs.png" class="img-responsive">
  </div>
</div>

As you can see, we had to reset the column offset in the testimonial column with .col-sm-offset-0; otherwise, it would keep the offset that we added for extra small devices. Moreover, we ensure that each image column spans just three columns by adding the .col-sm-3 class to both images. The result of the row is as follows:

Everything else seems fine! These viewports were easier to set up. See how much Bootstrap helps us? Let's move on to the final viewport: desktops and large devices.

Desktop and large devices

Last but not least, we come to the grid layout for desktops and large devices. We skipped medium devices because we coded for that viewport first. Deactivate the Device mode in Chrome and put your page in a viewport with a width larger than or equal to 1200 pixels. The grid prefix that we will be using is .col-lg-*, and if you take a look at the page, you will see that everything is well placed and we don't need to make changes! (Although we would like to make some tweaks to make our layout fancier and learn some stuff about the Bootstrap grid.)
We want to talk about a thing called column ordering. It is possible to change the order of the columns in the same row by applying the .col-lg-push-* and .col-lg-pull-* classes. (Note that we are using the large devices prefix, but any other grid class prefix can be used.) The .col-lg-push-* class means that the column will be pushed to the right by * columns, where * is the number of columns pushed. On the other hand, .col-lg-pull-* will pull the column to the left by * columns. Let's test this trick in the second row by swapping the order of both columns:

<div class="row">
  <div class="col-md-offset-4 col-md-4 col-sm-6 col-lg-push-4">
    <h3>
      Some text with <small>secondary text</small>
    </h3>
  </div>
  <div class="col-md-4 col-sm-6 col-lg-pull-4">
    <h3>
      Add to your favorites
      <small>
        <kbd class="nowrap"><kbd>ctrl</kbd> + <kbd>d</kbd></kbd>
      </small>
    </h3>
  </div>
</div>

We just added the .col-lg-push-4 class to the first column and .col-lg-pull-4 to the other one to get this result. By doing this, we have changed the order of both columns in the second row, as shown in the following image:

Summary

In this article, you learned a little about mobile first development and how Bootstrap can help us with this task. We started from an existing Bootstrap template that was not ready for mobile visualization and fixed that. While fixing it, we used a lot of Bootstrap scaffolding properties and Bootstrap helpers, which made it much easier to fix anything. We did all of this without a single line of CSS or JavaScript; we used only Bootstrap and its inner powers!


CoreOS Networking and Flannel Internals

Packt
03 Feb 2016
8 min read
In this article by Sreenivas Makam, author of the book Mastering CoreOS, we will see how microservices have increased the need for lots of containers as well as connectivity between containers across hosts. A robust container networking scheme is necessary to achieve this goal. This article will cover the basics of container networking, with a focus on how CoreOS does container networking with Flannel.

Container networking basics

The following are the reasons why we need container networking:

- Containers need to talk to the external world.
- Containers should be reachable from the external world so that the external world can use the services that the containers provide.
- Containers need to talk to the host machine. An example can be sharing volumes.
- There should be inter-container connectivity within the same host and across hosts. An example is a WordPress container on one host talking to a MySQL container on another host.

Multiple solutions are currently available to interconnect containers. These solutions are pretty new and under active development. Docker, until release 1.8, did not have a native solution to interconnect containers across hosts. Docker release 1.9 introduced a Libnetwork-based solution to interconnect containers across hosts as well as do service discovery. CoreOS uses Flannel for container networking in CoreOS clusters. There are also projects such as Weave and Calico that are developing container networking solutions, and they plan to be a networking plugin for any container runtime, such as Docker or Rkt.

Flannel

Flannel is an open source project that provides a container networking solution for CoreOS clusters. Flannel can also be used for non-CoreOS clusters; Kubernetes, for example, uses Flannel to set up networking between the Kubernetes pods. Flannel allocates a separate subnet to every host where a container runs, and the containers on that host get allocated individual IP addresses from the host subnet. An overlay network is set up between the hosts, which allows containers on different hosts to talk to each other. Chapter 1, CoreOS Overview, covered an overview of the Flannel control and data paths. This section will delve into the Flannel internals.

Manual installation

Flannel can be installed manually or by using the systemd unit, flanneld.service. The following commands will install Flannel on a CoreOS node, using a container to build the Flannel binary. The flanneld binary will be available in /home/core/flannel/bin after executing them:

git clone https://github.com/coreos/flannel.git
docker run -v /home/core/flannel:/opt/flannel -i -t google/golang /bin/bash -c "cd /opt/flannel && ./build"

After the build completes, you can check the version of the Flannel binary that was produced on the CoreOS node.

Installation using flanneld.service

Flannel is not installed by default in CoreOS. This is done to keep the CoreOS image size to a minimum. Docker requires Flannel to configure the network, and Flannel requires Docker to download the Flannel container. To avoid this chicken-and-egg problem, early-docker.service is started by default in CoreOS; its primary purpose is to download the Flannel container and start it. A regular docker.service then starts the Docker daemon with the Flannel network.
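All of the cluster-wide state that Flannel relies on lives in etcd, which we will examine in the Control path section below. The following is a minimal Python sketch for peeking at that state from a node; it assumes etcd's v2 HTTP API is reachable on the default client port (2379) and that Flannel uses its conventional /coreos.com/network key prefix, so treat the key names as illustrative:

# A minimal sketch for inspecting the state Flannel keeps in etcd.
# Assumptions: etcd v2 HTTP API on the default client port (2379)
# and Flannel's conventional /coreos.com/network key prefix.
import json
import urllib.request

ETCD = "http://127.0.0.1:2379"

def etcd_get(key):
    # etcd v2 exposes its keyspace over a simple HTTP API.
    with urllib.request.urlopen("{0}/v2/keys{1}".format(ETCD, key)) as resp:
        return json.loads(resp.read().decode())

# The cluster-wide Flannel configuration (subnet range and backend).
config = etcd_get("/coreos.com/network/config")
print(config["node"]["value"])

# The per-node subnet leases registered by each Flannel agent.
subnets = etcd_get("/coreos.com/network/subnets")
for node in subnets["node"].get("nodes", []):
    print(node["key"], "->", node["value"])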
In flanneld.service, the early Docker daemon starts the Flannel container, which, in turn, starts docker.service with the subnet created by Flannel. The relevant section of flanneld.service downloads the Flannel container from the Quay repository, and listing the early Docker daemon's running containers shows that early-docker manages Flannel only. Another section of flanneld.service updates the Docker options to use the subnet created by Flannel. In my case, after Flannel started, flannel_docker_opts.env contained the address 10.1.60.1/24, which this CoreOS node chose for its containers. Docker is then started as part of docker.service with this environment file.

Control path

There is no central controller in Flannel; it uses etcd for internode communication. Each node in the CoreOS cluster runs a Flannel agent, and the agents communicate with each other using etcd. As part of starting the Flannel service, we specify the Flannel subnet that can be used by the individual nodes in the network. This subnet is registered with etcd so that every CoreOS node in the cluster can see it. Each node in the network then picks a particular subnet range and registers it atomically with etcd. In the relevant section of cloud-config, flanneld.service is started and the Flannel configuration is specified; here, we have set the subnet to be used by Flannel to 10.1.0.0/16 and the encapsulation type to vxlan. This configuration creates an etcd key, visible on the node, showing that 10.1.0.0/16 is allocated for Flannel to use across the CoreOS cluster and that the encapsulation type is vxlan.

Once each node gets a subnet, containers started on that node will get an IP address from the IP address pool allocated to the node. Looking at the etcd subnet allocation per node, all the subnets are in the 10.1.0.0/16 range that was configured earlier with etcd, each with a 24-bit mask. The subnet length per host can also be controlled with a Flannel configuration option. The ifconfig output for the Flannel interface created on this node shows an IP address within the 10.1.0.0/16 range.

Data path

Flannel uses the Linux bridge to encapsulate the packets using an overlay protocol specified in the Flannel configuration. This allows for connectivity between containers within the same host as well as across hosts. The following are the major backends currently supported by Flannel and specified in the JSON configuration file, which can be supplied in the Flannel section of cloud-config:

- UDP: In UDP encapsulation, packets from containers are encapsulated in UDP with the default port number 8285. We can change the port number if needed.
- VXLAN: From an encapsulation overhead perspective, VXLAN is efficient compared to UDP. By default, port 8472 is used for VXLAN encapsulation. If we want to use the IANA-allocated VXLAN port, we need to set the port field to 4789.
- AWS-VPC: This is applicable when using Flannel in the AWS VPC cloud. Instead of encapsulating the packets in an overlay, this approach uses VPC route table entries to communicate across containers. AWS limits each VPC route table to 50 entries, so this can become a problem with bigger clusters.
The AWS backend is selected by setting the corresponding type in the Flannel JSON configuration.

- GCE: This is applicable when using Flannel in the GCE cloud. Instead of encapsulating the packets in an overlay, this approach uses the GCE route table to communicate across containers. GCE limits each route table to 100 entries, so this can become a problem with bigger clusters. The GCE backend is selected in the Flannel configuration in the same way.

Let's create containers on two different hosts with VXLAN encapsulation and check whether the connectivity is fine. The following example uses a Vagrant CoreOS cluster with the Flannel service enabled.

Host 1: We start a busybox container and check the IP address allotted to it. This IP address comes from the IP pool allocated to this CoreOS node by the Flannel agent: 10.1.19.0/24 was allocated to host 1, and this container got the 10.1.19.2 address.

Host 2: We start a busybox container here as well and check its IP address. Again, the address comes from the pool allocated to this node by the Flannel agent: 10.1.1.0/24 was allocated to host 2, and this container got the 10.1.1.2 address.

A ping between container 1 and container 2 succeeds. The ping packets travel across the two CoreOS nodes and are encapsulated using VXLAN.

Flannel as a CNI plugin

As explained in Chapter 1, CoreOS Overview, APPC defines a container specification that any container runtime can use. For container networking, APPC defines a Container Network Interface (CNI) specification. With CNI, the container networking functionality can be implemented as a plugin. CNI expects plugins to support APIs with a set of parameters and leaves the implementation to the plugin. Example APIs add a container to a network and remove a container from a network, each with a defined parameter list. This allows the implementation of network plugins by different vendors and also the reuse of plugins across different container runtimes. The RKT container runtime talks to the CNI layer, which in turn invokes a plugin such as Flannel. The IPAM plugin, which is used to allocate an IP address to the containers, is nested inside the initial networking plugin.

Summary

In this chapter, we covered different container networking technologies with a focus on container networking in CoreOS. There are many companies trying to solve this container networking problem.

Customizing IPython

Packt
03 Feb 2016
9 min read
In this article written by Cyrille Rossant, author of Learning IPython for Interactive Computing and Data Visualization - Second Edition, we look at how the Jupyter Notebook is a highly customizable platform. You can configure many aspects of the software and can extend the backend (kernels) and the frontend (the HTML-based Notebook). This allows you to create highly personalized user experiences based on the Notebook. In this article, we will cover the following topics:

- Creating a custom magic command in an IPython extension
- Writing a new Jupyter kernel
- Customizing the Notebook interface with JavaScript

Creating a custom magic command in an IPython extension

IPython comes with a rich set of magic commands. You can get the complete list with the %lsmagic command. IPython also allows you to create your own magic commands. In this section, we will create a new cell magic that compiles and executes C++ code in the Notebook. We first import the register_cell_magic function:

In [1]: from IPython.core.magic import register_cell_magic

To create a new cell magic, we create a function that takes a line (containing possible options) and a cell's contents as its arguments, and we decorate it with @register_cell_magic, as shown here:

In [2]: @register_cell_magic
        def cpp(line, cell):
            """Compile, execute C++ code, and return the
            standard output."""
            # We first retrieve the current IPython interpreter
            # instance.
            ip = get_ipython()
            # We define the source and executable filenames.
            source_filename = '_temp.cpp'
            program_filename = '_temp'
            # We write the code to the C++ file.
            with open(source_filename, 'w') as f:
                f.write(cell)
            # We compile the C++ code into an executable.
            compile = ip.getoutput("g++ {0:s} -o {1:s}".format(
                source_filename, program_filename))
            # We execute the executable and return the output.
            output = ip.getoutput('./{0:s}'.format(program_filename))
            print('\n'.join(output))

C++ compiler

This recipe requires the gcc C++ compiler. On Ubuntu, type sudo apt-get install build-essential in a terminal. On OS X, install Xcode. On Windows, install MinGW (http://www.mingw.org) and make sure that g++ is in your system path.

This magic command uses the getoutput() method of the IPython InteractiveShell instance. This object represents the current interactive session. It defines many methods for interacting with the session. You will find the comprehensive list at http://ipython.org/ipython-doc/dev/api/generated/IPython.core.interactiveshell.html#IPython.core.interactiveshell.InteractiveShell. Let's now try this new cell magic:

In [3]: %%cpp
        #include<iostream>
        int main() {
            std::cout << "Hello world!";
        }
Out[3]: Hello world!

This cell magic is currently only available in your interactive session. To distribute it, you need to create an IPython extension. This is a regular Python module or package that extends IPython. To create an IPython extension, copy the definition of the cpp() function (without the decorator) to a Python module, named cpp_ext.py for example. Then, add the following at the end of the file:

def load_ipython_extension(ipython):
    """This function is called when the extension is loaded.
    It accepts an IPython InteractiveShell instance. We can
    register the magic with the `register_magic_function`
    method of the shell instance."""
    ipython.register_magic_function(cpp, 'cell')

Then, you can load the extension with %load_ext cpp_ext. The cpp_ext.py file needs to be in the PYTHONPATH, for example, in the current directory.
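Putting the two pieces together, a complete cpp_ext.py module could look like the following minimal sketch. It combines the cpp() function defined above with the loading hook, and adds the get_ipython import that a standalone module needs; error handling is deliberately left out:

# cpp_ext.py -- a minimal, consolidated sketch of the extension
# described above; error handling is deliberately omitted.
from IPython import get_ipython


def cpp(line, cell):
    """Compile, execute C++ code, and return the standard output."""
    ip = get_ipython()
    source_filename = '_temp.cpp'
    program_filename = '_temp'
    with open(source_filename, 'w') as f:
        f.write(cell)
    # Compile with g++, then run the resulting executable.
    ip.getoutput("g++ {0:s} -o {1:s}".format(
        source_filename, program_filename))
    output = ip.getoutput('./{0:s}'.format(program_filename))
    print('\n'.join(output))


def load_ipython_extension(ipython):
    """Called by IPython when the user runs %load_ext cpp_ext."""
    ipython.register_magic_function(cpp, 'cell')

With this file on the PYTHONPATH, %load_ext cpp_ext makes the %%cpp magic available in any session.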
Writing a new Jupyter kernel

Jupyter supports a wide variety of kernels written in many languages, with IPython being the most frequently used one. The Notebook interface lets you choose the kernel for every notebook. This information is stored within each notebook file. The jupyter kernelspec command allows you to get information about the kernels. For example, jupyter kernelspec list lists the installed kernels. Type jupyter kernelspec --help for more information. At the end of this section, you will find references with instructions to install various kernels, including IR, IJulia, and IHaskell. Here, we will detail how to create a custom kernel. There are two methods to create a new kernel:

- Writing a kernel from scratch for a new language by reimplementing the whole Jupyter messaging protocol
- Writing a wrapper kernel for a language that can be accessed from Python

We will use the second, easier method in this section. Specifically, we will reuse the example from the last section to write a C++ wrapper kernel. We need to slightly refactor the last section's code because we won't have access to the InteractiveShell instance. Since we're creating a kernel, we need to put the code in a Python script in a new folder named cpp:

In [1]: %mkdir cpp

The %%writefile cell magic lets us create a cpp_kernel.py Python script from the Notebook:

In [2]: %%writefile cpp/cpp_kernel.py
        import os
        import os.path as op
        import tempfile

        # We import the `getoutput()` function provided by IPython.
        # It allows us to do system calls from Python.
        from IPython.utils.process import getoutput

        def exec_cpp(code):
            """Compile, execute C++ code, and return the
            standard output."""
            # We create a temporary directory. This directory will
            # be deleted at the end of the 'with' context.
            # All created files will be in this directory.
            with tempfile.TemporaryDirectory() as tmpdir:
                # We define the source and executable filenames.
                source_path = op.join(tmpdir, 'temp.cpp')
                program_path = op.join(tmpdir, 'temp')
                # We write the code to the C++ file.
                with open(source_path, 'w') as f:
                    f.write(code)
                # We compile the C++ code into an executable.
                os.system("g++ {0:s} -o {1:s}".format(
                    source_path, program_path))
                # We execute the program and return the output.
                return getoutput(program_path)
Out[2]: Writing cpp/cpp_kernel.py

Now we create our wrapper kernel by appending some code to the cpp_kernel.py file created above (that's what the -a option in the %%writefile cell magic is for):

In [3]: %%writefile -a cpp/cpp_kernel.py

        """C++ wrapper kernel."""
        from ipykernel.kernelbase import Kernel

        class CppKernel(Kernel):
            # Kernel information.
            implementation = 'C++'
            implementation_version = '1.0'
            language = 'c++'
            language_version = '1.0'
            language_info = {'name': 'c++',
                             'mimetype': 'text/plain'}
            banner = "C++ kernel"

            def do_execute(self, code, silent,
                           store_history=True,
                           user_expressions=None,
                           allow_stdin=False):
                """This function is called when a code cell is
                executed."""
                if not silent:
                    # We run the C++ code and get the output.
                    output = exec_cpp(code)
                    # We send back the result to the frontend.
                    stream_content = {'name': 'stdout',
                                      'text': output}
                    self.send_response(self.iopub_socket,
                                       'stream', stream_content)
                return {'status': 'ok',
                        # The base class increments the execution
                        # count.
                        'execution_count': self.execution_count,
                        'payload': [],
                        'user_expressions': {},
                        }

        if __name__ == '__main__':
            from ipykernel.kernelapp import IPKernelApp
            IPKernelApp.launch_instance(kernel_class=CppKernel)
Out[3]: Appending to cpp/cpp_kernel.py

In production code, it would be best to test the compilation and execution, and to fail gracefully by showing an error. See the references at the end of this section for more information. Our wrapper kernel is now implemented in cpp/cpp_kernel.py. The next step is to create a cpp/kernel.json file describing our kernel:

In [4]: %%writefile cpp/kernel.json
        {
            "argv": ["python", "cpp/cpp_kernel.py",
                     "-f", "{connection_file}"],
            "display_name": "C++"
        }
Out[4]: Writing cpp/kernel.json

The argv field describes the command that is used to launch a C++ kernel. More information can be found in the references below. Finally, let's install this kernel with the following command:

In [5]: !jupyter kernelspec install --replace --user cpp
Out[5]: [InstallKernelSpec] Installed kernelspec cpp in /Users/cyrille/Library/Jupyter/kernels/cpp

The --replace option forces the installation even if the kernel already exists. The --user option installs the kernel in the user directory. We can test the installation of the kernel with the following command:

In [6]: !jupyter kernelspec list
Out[6]: Available kernels:
          cpp
          python3

Now, C++ notebooks can be created in the Notebook, as shown in the following screenshot:

C++ kernel in the Notebook

Finally, wrapper kernels can also be used in the IPython terminal or the Qt console, using the --kernel option, for example, ipython console --kernel cpp. Here are a few references:

- Kernel documentation at http://jupyter-client.readthedocs.org/en/latest/kernels.html
- Wrapper kernels at http://jupyter-client.readthedocs.org/en/latest/wrapperkernels.html
- List of kernels at https://github.com/ipython/ipython/wiki/IPython%20kernels%20for%20other%20languages
- bash kernel at https://github.com/takluyver/bash_kernel
- R kernel at https://github.com/takluyver/IRkernel
- Julia kernel at https://github.com/JuliaLang/IJulia.jl
- Haskell kernel at https://github.com/gibiansky/IHaskell

Customizing the Notebook interface with JavaScript

The Notebook application exposes a JavaScript API that allows for a high level of customization. In this section, we will create a new button in the Notebook toolbar to renumber the cells. The JavaScript API is not stable and not well documented. Although the example in this section has been tested with IPython 4.0, nothing guarantees that it will work in future versions without changes. The commented JavaScript code below adds a new Renumber button:

In [1]: %%javascript
        // This function allows us to add buttons
        // to the Notebook toolbar.
        IPython.toolbar.add_buttons_group([
        {
            // The button's label.
            'label': 'Renumber all code cells',
            // The button's icon.
            // See a list of Font-Awesome icons here:
            // http://fortawesome.github.io/Font-Awesome/icons/
            'icon': 'fa-list-ol',
            // The callback function called when the button is
            // pressed.
            'callback': function () {
                // We retrieve the list of all cells.
                var cells = IPython.notebook.get_cells();
                // We only keep the code cells.
                cells = cells.filter(function(c) {
                    return c instanceof IPython.CodeCell;
                })
                // We set the input prompt of all code cells.
                for (var i = 0; i < cells.length; i++) {
                    cells[i].set_input_prompt(i + 1);
                }
            }
        }]);

Executing this cell displays a new button in the Notebook toolbar, as shown in the following screenshot:

Adding a new button in the Notebook toolbar

You can use the jupyter nbextension command to install notebook extensions (use the --help option to see the list of possible commands). Here are a few repositories with custom JavaScript extensions contributed by the community:

- https://github.com/minrk/ipython_extensions
- https://github.com/ipython-contrib/IPython-notebook-extensions

So, we have covered several customization options of IPython and the Jupyter Notebook, but there's so much more that can be done. Take a look at the IPython Interactive Computing and Visualization Cookbook to learn how to create your own custom widgets in the Notebook.


Gradient Descent at Work

Packt
03 Feb 2016
11 min read
In this article by Alberto Boschetti and Luca Massaron, authors of the book Regression Analysis with Python, we will learn about gradient descent, feature scaling, and a simple implementation.

As an alternative to the usual classical optimization algorithms, the gradient descent technique is able to minimize the cost function of a linear regression analysis using far fewer computations. In terms of complexity, gradient descent ranks in the order O(n*p), thus making learning regression coefficients feasible even in the occurrence of a large n (which stands for the number of observations) and a large p (the number of variables). The method works by leveraging a simple heuristic that gradually converges to the optimal solution starting from a random one. Explained in simple words, it resembles walking blind in the mountains. If you want to descend to the lowest valley, even if you don't know and can't see the path, you can proceed approximately by going downhill for a while, then stopping, then heading downhill again, and so on, always moving at each stage toward where the surface descends, until you arrive at a point where you cannot descend anymore. Hopefully, at that point, you will have reached your destination.

In such a situation, your only risk is to pass by an intermediate valley (where there is a wood or a lake, for instance) and mistake it for your desired arrival, because the land stops descending there. In an optimization process, such a situation is defined as a local minimum (whereas your target is the global minimum, the best minimum possible), and it is a possible outcome of your journey downhill, depending on the function you are working on minimizing. The good news, in any case, is that the error function of the linear model family is bowl-shaped (technically, our cost function is a convex one), and it is unlikely that you will get stuck anywhere if you properly descend.

The necessary steps to work out a gradient-descent-based solution are hereby described. Given our cost function for a set of coefficients (the vector w),

$$J(w) = \frac{1}{2n} \sum_{i=1}^{n} \bigl(h_w(x_i) - y_i\bigr)^2$$

where h_w(x_i) is the prediction for the i-th observation, we first start by choosing a random initialization for w, picking some random numbers (taken from a standardized normal curve, for instance, having zero mean and unit variance). Then, we start reiterating an update of the values of w (opportunely using the gradient descent computations) until the marginal improvement over the previous J(w) is small enough to let us conclude that we have finally reached an optimum minimum.

We can opportunely update our coefficients, separately one by one, by subtracting from each of them a portion alpha (α, the learning rate) of the partial derivative of the cost function:

$$w_j := w_j - \alpha \frac{\partial}{\partial w_j} J(w)$$

Here, in our formula, w_j is to be intended as a single coefficient (we are iterating over them). After resolving the partial derivative, the final form is:

$$w_j := w_j - \alpha \, \frac{1}{n} \sum_{i=1}^{n} \bigl(h_w(x_i) - y_i\bigr)\, x_{i,j}$$

Simplifying everything, our gradient for the coefficient of x is just the average of our prediction errors (the predicted values minus the actual ones) multiplied by their respective x values. We have to notice that by introducing more parameters to be estimated during the optimization procedure, we are actually introducing more dimensions to our line of fit (turning it into a hyperplane, a multidimensional surface), and such dimensions have certain commonalities and differences to be taken into account. Alpha, called the learning rate, is very important in the process, because if it is too large, it may cause the optimization to detour and fail.
You have to think of each gradient as a jump or as a run in a direction. If you take it fully, you may happen to pass over the optimum minimum and end up on another rising slope. Too many consecutive long steps may even force you to climb up the cost slope, worsening your initial position (as measured by the cost function, the overall summed squared loss, our score of fitness). Using a small alpha, the gradient descent won't jump beyond the solution, but it may take much longer to reach the desired minimum. How to choose the right alpha is a matter of trial and error. Anyway, starting from an alpha such as 0.01 is never a bad choice, based on our experience in many optimization problems. Naturally, the gradient, given the same alpha, will in any case produce shorter steps as you approach the solution. Visualizing the steps in a graph can really give you a hint about whether the gradient descent is working out a solution or not.

Though conceptually quite simple (it is based on an intuition that we have surely all applied ourselves, moving step by step toward where we can optimize our result), gradient descent is very effective and indeed scalable when working with real data. Such interesting characteristics elevated it to be the core optimization algorithm in machine learning. It is not limited to just the linear model family; it also extends, for instance, to neural networks, in the process of back propagation that updates all the weights of the neural net in order to minimize the training errors. Surprisingly, gradient descent is also at the core of another complex machine learning algorithm, gradient boosting tree ensembles, where we have an iterative process minimizing the errors using a simpler learning algorithm (a so-called weak learner, because it is limited by a high bias) to progress toward the optimization. In Scikit-learn, gradient-descent-based estimators power several of the linear models present in the linear methods module, making Scikit-learn our favorite choice when working on data science projects with large and big data.

Feature scaling

When using the classical statistical approach, rather than the machine learning one, working with multiple features requires attention while estimating the coefficients because of their similarities, which can cause a variance inflation of the estimates. Moreover, multicollinearity between variables bears other drawbacks, because it can make matrix inversion very difficult, if not impossible to achieve; matrix inversion is the operation at the core of the normal equation coefficient estimation (and such a problem is due to the mathematical limitations of the algorithm). Gradient descent, instead, is not affected at all by reciprocal correlation, allowing the estimation of reliable coefficients even in the presence of perfect collinearity. Anyway, though quite resistant to the problems that affect other approaches, gradient descent's simplicity renders it vulnerable to other common problems, such as the different scales present in each feature. In fact, some features in your data may be represented by measurements in units, some others in decimals, and others in thousands, depending on what aspect of reality each feature represents.
For instance, in the dataset we take as an example, the Boston houses dataset (http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html), one feature is the average number of rooms (a float ranging from about 5 to over 8), others are the percentages of certain pollutants in the air (floats between 0 and 1), and so on, mixing very different measurements. When the features have different scales, even though the algorithm processes each of them separately, the optimization will be dominated by the variables with the more extensive scale. Working in a space of dissimilar dimensions will require more iterations before convergence to a solution (and sometimes, there may be no convergence at all).

The remedy is very easy; it is just necessary to put all the features on the same scale. Such an operation is called feature scaling. Feature scaling can be achieved through standardization or normalization. Normalization rescales all the values into the interval between zero and one (usually, but different ranges are also possible), whereas standardization operates by removing the mean and dividing by the standard deviation to obtain unit variance. In our case, standardization is preferable, both because it easily permits rescaling the obtained standardized coefficients back to their original scale and because, by centering all the features at zero mean, it makes the error surface more tractable for many machine learning algorithms, in a much more effective way than just rescaling the maximum and minimum of a variable. An important reminder while applying feature scaling is that changing the scale of the features implies that you will have to use rescaled features for predictions, too.

A simple implementation

Let's try the algorithm, first applying standardization using the Scikit-learn preprocessing module:

import numpy as np
import random
from sklearn.datasets import load_boston
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

boston = load_boston()
standardization = StandardScaler()
y = boston.target
X = np.column_stack((standardization.fit_transform(boston.data),
                     np.ones(len(y))))

In the preceding code, we standardized the variables using the StandardScaler class from Scikit-learn. This class can fit a data matrix, record its column means and standard deviations, and operate a transformation on itself as well as on any other similar matrix, standardizing the column data. By means of this method, after fitting, we keep track of the means and standard deviations that have been used, because they will come in handy if we later have to recalculate the coefficients using the original scale.
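That recalculation is just a little algebra on the scaler's recorded statistics. The following is a small sketch, under the assumption that w_std is a weight vector with the intercept as its last element, estimated on the standardized features (such as the one the optimize() function defined next will return); scaler stands for the fitted StandardScaler instance, called standardization in our code:

# A small sketch of mapping standardized coefficients back to the
# original feature scale. `w_std` is a hypothetical weight vector
# with the intercept as its last element, as produced by the
# optimize() function defined below.
def unstandardize(w_std, scaler):
    coefs, intercept = np.array(w_std[:-1]), w_std[-1]
    # Dividing by the recorded standard deviations undoes the scaling...
    orig_coefs = coefs / scaler.scale_
    # ...while the intercept absorbs the subtracted means.
    orig_intercept = intercept - np.sum(coefs * scaler.mean_ /
                                        scaler.scale_)
    return orig_coefs, orig_intercept

For instance, unstandardize(w, standardization) would return coefficients comparable to a regression fitted on the raw Boston data.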
Now, we just define a few functions for the following computations:

def random_w(p):
    # Random initialization of the coefficient vector.
    return np.array([np.random.normal() for j in range(p)])

def hypothesis(X, w):
    # Linear predictions for the whole data matrix.
    return np.dot(X, w)

def loss(X, w, y):
    # Prediction errors (predicted minus actual values).
    return hypothesis(X, w) - y

def squared_loss(X, w, y):
    return loss(X, w, y)**2

def gradient(X, w, y):
    # One partial derivative per coefficient: the average error
    # multiplied by the respective feature values.
    gradients = list()
    n = float(len(y))
    for j in range(len(w)):
        gradients.append(np.sum(loss(X, w, y) * X[:,j]) / n)
    return gradients

def update(X, w, y, alpha=0.01):
    return [t - alpha*g for t, g in zip(w, gradient(X, w, y))]

def optimize(X, y, alpha=0.01, eta=10**-12, iterations=1000):
    w = random_w(X.shape[1])
    for k in range(iterations):
        SSL = np.sum(squared_loss(X, w, y))
        new_w = update(X, w, y, alpha=alpha)
        new_SSL = np.sum(squared_loss(X, new_w, y))
        w = new_w
        # Stop when the improvement in the summed squared loss
        # becomes negligible.
        if k >= 5 and (new_SSL - SSL <= eta and new_SSL - SSL >= -eta):
            return w
    return w

We can now calculate our regression coefficients:

w = optimize(X, y, alpha=0.02, eta=10**-12, iterations=20000)
print ("Our standardized coefficients: " +
  ', '.join(map(lambda x: "%0.4f" % x, w)))

Our standardized coefficients: -0.9204, 1.0810, 0.1430, 0.6822, -2.0601, 2.6706, 0.0211, -3.1044, 2.6588, -2.0759, -2.0622, 0.8566, -3.7487, 22.5328

A simple comparison with Scikit-learn's solution can prove that our code works fine:

sk = LinearRegression().fit(X[:,:-1], y)
w_sk = list(sk.coef_) + [sk.intercept_]
print ("Scikit-learn's standardized coefficients: " +
  ', '.join(map(lambda x: "%0.4f" % x, w_sk)))

Scikit-learn's standardized coefficients: -0.9204, 1.0810, 0.1430, 0.6822, -2.0601, 2.6706, 0.0211, -3.1044, 2.6588, -2.0759, -2.0622, 0.8566, -3.7487, 22.5328

One notable detail is our choice of alpha. After some tests, the value of 0.02 was chosen for its good performance on this very specific problem. Alpha is the learning rate and, during optimization, it can be fixed or changed according to a line search method, modifying its value in order to minimize the cost function at each single step of the optimization process. In our example, we opted for a fixed learning rate, and we had to look for its best value by trying a few values and deciding which minimized the cost in the fewest iterations.

Summary

In this article, we learned about gradient descent, feature scaling, and a simple implementation of the algorithm using the Scikit-learn preprocessing module.


Getting started with the Jupyter notebook (part 1)

Marin Gilles
02 Feb 2016
5 min read
The Jupyter notebook (previously known as the IPython notebook) is an interactive notebook in which you can run code from more than 40 programming languages. In this introduction, we will explore the main features of the Jupyter notebook and see why it can be such a powerful tool for anyone wanting to create beautiful interactive documents and educational resources.

To start working with the notebook, you will need to install it. You can find the full procedure on the Jupyter website. Once it is installed, launch it from a terminal with:

jupyter notebook

You will see something similar to the following displayed:

[I 20:06:36.367 NotebookApp] Writing notebook server cookie secret to /run/user/1000/jupyter/notebook_cookie_secret
[I 20:06:36.813 NotebookApp] Serving notebooks from local directory: /home/your_username
[I 20:06:36.813 NotebookApp] 0 active kernels
[I 20:06:36.813 NotebookApp] The IPython Notebook is running at: http://localhost:8888/
[I 20:06:36.813 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).

And the main Jupyter window should open in the folder where you started the notebook (usually your user folder). The main window looks like this:

To create a new notebook, simply click on New and choose the kind of notebook you wish to start under the Notebooks section. I only have a Python kernel installed locally, so I will start a Python notebook to work with. A new tab opens, and I get the notebook interface, completely empty. You can see the different parts of the notebook:

- The name of your notebook
- The main toolbar, with options to save and export your notebook, reload it, restart the kernel, and so on
- Shortcuts
- The main part of the notebook, containing its contents

Take the time to explore the menus and see your options. If you need help on a very specific subject concerning the notebook or some libraries, you can try the Help menu at the right end of the menu bar.

In the main area, you can see what is called a cell. Each notebook is composed of multiple cells, and each cell is used for a different purpose. The first cell we have here, starting with In [ ], is a code cell. In this type of cell, you can type any code and execute it. For example, try typing 1 + 2, then hit Shift + Enter. The code in the cell is evaluated, you are placed in a new cell, and you get the following:

You can easily identify the current working cell thanks to the green outline. Let's type something else in the second cell, for example:

for i in range(5):
    print(i)

When evaluating this cell, you then get:

As previously, the code is evaluated and the results are displayed properly. You may notice that there is no Out[2] this time. This is because we printed the results, and no value was returned. One very interesting feature of the notebook is that you can go back to a cell, change it, and reevaluate it, thus updating your whole document. Try this by going back to the first cell, changing 1 + 2 to 2 + 3, and reevaluating the cell by pressing Shift + Enter. You will notice that the result is updated to 5 as soon as you evaluate the cell. This can be very powerful when you want to explore data or test an equation with different parameters without having to reevaluate your whole script. You can, however, reevaluate the whole notebook at once by going to Cell -> Run all.

Now that we've seen how to enter code, why not try to get a more beautiful and explanatory notebook? To do this, we will use other types of cells, the Header and Markdown cells.
First, let's add a title to our notebook at the very top. To do that, select the first cell, then click Insert -> Insert cell above. As you can see, a new cell is added at the very top of your document. However, it looks exactly like the previous one. Let's make it a title cell by clicking on the cell type menu in the shortcut toolbar:

Change it to Heading. A pop-up will be displayed explaining how to create different levels of titles, and you will be left with a different type of cell:

This cell starts with a # sign, meaning that this is a level one title. If you want to make subtitles, you can just use the following notation (explained in the pop-up shown when changing the cell type):

# : First level title
## : Second level title
### : Third level title
...

Write your title after the #, then evaluate the cell. You will see the display change to show a very nice looking title. I added a few other title cells as an example, and as an exercise for you:

After adding our titles, let's add a few explanations about what we do in each code cell. For this, we add a cell where we want it to be and then change its type to Markdown. Then, evaluate your cell. That's it: your text is displayed beautifully!

To finish this first introduction, you can rename your notebook by going to File -> Rename and inputting the new name of your notebook. It will then be displayed on the top left of your window, next to the Jupyter logo. In the next part of this introduction, we will go deeper into the capabilities of the notebook and its integration with other Python libraries.

About the author

Marin Gilles is a PhD student in Physics, in Dijon, France. A large part of his work is dedicated to physical simulations, for which he developed his own simulation framework using Python, and he has contributed to open source libraries such as Matplotlib and IPython.


Protocol Extensions

Packt
02 Feb 2016
7 min read
In this article by John Hoffman, the author of Protocol Oriented Programming with Swift, you will study how protocols can be extended. Protocol extensions can be used to provide common functionality to all the types that conform to a particular protocol. This gives us the ability to add functionality to any type that conforms to a protocol, rather than adding the functionality to each individual type or through a global function. Protocol extensions, like regular extensions, also give us the ability to add functionality to types that we do not have the source code for.

Protocol-oriented programming would not be possible without protocol extensions. Without protocol extensions, if we wanted to add specific functionality to a group of types that conformed to a protocol, we would have to add the functionality to each of the types. If we were using reference types (classes), we could create a class hierarchy, but this is not possible for value types. Apple has stated that we should prefer value types to reference types, and with protocol extensions, we have the ability to add common functionality to a group of value and/or reference types that conform to a specific protocol without having to implement that functionality in all the types.

Let's take a look at what protocol extensions can do for us. The Swift standard library provides a protocol named CollectionType. This protocol inherits from the Indexable and SequenceType protocols and is adopted by all of Swift's standard collection types, such as Dictionary and Array. Let's say that we want to add functionality to types that conform to CollectionType: shuffling the items in a collection, or returning only the items whose index numbers are even. We could very easily add this functionality by extending the CollectionType protocol, as shown in the following code:

extension CollectionType {
    func evenElements() -> [Generator.Element] {
        var index = self.startIndex
        var result: [Generator.Element] = []
        var i = 0
        repeat {
            if i % 2 == 0 {
                result.append(self[index])
            }
            index = index.successor()
            i++
        } while (index != self.endIndex)
        return result
    }
    func shuffle() -> [Self.Generator.Element] {
        return sort(){ left, right in
            return arc4random() < arc4random()
        }
    }
}

Notice that when we extend a protocol, we use the same syntax and format that we use when we extend other types. We use the extension keyword, followed by the name of the protocol that we extend. We then put the functionality that we add to the protocol between curly brackets. Now, every type that conforms to the CollectionType protocol will receive both the evenElements() and shuffle() functions. The following code shows how we can use these functions with an array:

var origArray = [1,2,3,4,5,6,7,8,9,10]
var newArray = origArray.evenElements()
var ranArray = origArray.shuffle()

In the previous code, the newArray array will contain the elements 1, 3, 5, 7, and 9, because these elements have even index numbers (we are looking at the index numbers, not the values of the elements). The ranArray array will contain the same elements as origArray, but in shuffled order. Protocol extensions are great for adding functionality to a group of types without the need to add the code to each individual type; however, it is important to know what types conform to the protocol we extend.
In the previous example, we extended the CollectionType protocol by adding the evenElements() and shuffle() methods to all the types that conform to the protocol. One of the types that conform to this protocol is the Dictionary type; however, the Dictionary type is an unordered collection, so the evenElements() method will not work as expected. The following example illustrates this:

var origDict = [1:"One",2:"Two",3:"Three",4:"Four"]
var returnElements = origDict.evenElements()
for item in returnElements {
    print(item)
}

Since the Dictionary type does not promise to store its items in any particular order, any two of the items could be printed to the screen in this example. The following shows one possible output of this code:

(2, "Two")
(1, "One")

Another problem is that anyone who is not familiar with how the evenElements() method is implemented may expect the returnElements instance to be a dictionary, because the original collection is a Dictionary type; however, it is actually an instance of the Array type. This can cause some confusion; therefore, we need to be careful when we extend a protocol, to make sure that the functionality we add works as expected for the types that conform to the protocol. In the case of the shuffle() and evenElements() methods, we may have been better served by adding the functionality as an extension directly to the Array type rather than to the CollectionType protocol; however, there is another way. We can add constraints to our extension that limit the types that receive the functionality defined in the extension.

In order for a type to receive the functionality defined in a protocol extension, it must satisfy all the constraints defined within the protocol extension. A constraint is added after the name of the protocol that we extend, using the where keyword. The following code shows how we could add a constraint to our CollectionType extension:

extension CollectionType where Self: ArrayLiteralConvertible {
    //Extension code here
}

In the CollectionType protocol extension shown in the previous example, only types that also conform to the ArrayLiteralConvertible protocol will receive the functionality defined in the extension. Since the Dictionary type does not conform to the ArrayLiteralConvertible protocol, it will not receive the functionality defined within the extension. We could also use constraints to state that our CollectionType protocol extension only applies to collections whose elements conform to a specific protocol. In the next example, we use constraints to make sure that the elements in the collection conform to the Comparable protocol. This may be necessary if the functionality that we add relies on the ability to compare two or more elements in the collection. We could add the constraint like this:

extension CollectionType where Generator.Element: Comparable {
    // Add functionality here
}

Constraints give us the ability to limit which types receive the functionality defined in the extension. One thing that we need to be careful of is using protocol extensions when we should actually be extending an individual type. Protocol extensions should be used when we want to add functionality to a group of types. If we want to add the functionality to a single type, we should look at extending that individual type. Earlier, we created a series of protocols that defined the Tae Kwon Do testing areas.
Let's take a look at how we can extend the TKDRank protocol from this example to add the ability to store which testing areas the student passed and which they failed. The following code is for the original TKDRank protocol:

protocol TKDRank {
    var color: TKDBeltColors {get}
    var rank: TKDColorRank {get}
}

We will begin by adding an instance of the Dictionary type to our protocol. This dictionary will store the results of our tests. The following example shows what the new TKDRank protocol will look like:

protocol TKDRank {
    var color: TKDBeltColors {get}
    var rank: TKDColorRank {get}
    var passFailTests: [String:Bool] {get set}
}

We can now extend the TKDRank protocol to add a method that we can use to record whether the student passes or fails individual tests. The following code shows how we can do this:

extension TKDRank {
    mutating func setPassFail(testName: String, pass: Bool) {
        passFailTests[testName] = pass
    }
}

Now, any type that conforms to the TKDRank protocol will have the setPassFail() method automatically. Since we have seen how to use extensions and protocol extensions, let's take a look at a real-world example. In this example, we will explore ways in which we can create a text validation framework.

Summary

In this article, we looked at extensions. In the original version of Swift, we were able to use extensions to extend structures, classes, and enumerations, but starting with Swift 2, we are able to use extensions to extend protocols as well. Without protocol extensions, protocol-oriented programming would not be possible, but we need to make sure that we use protocol extensions where appropriate and do not try to use them in place of regular extensions.

Getting started with the Jupyter notebook (part 2)

Marin Gilles
02 Feb 2016
5 min read
As seen in the first part of this introduction, you can do a lot with the basic capabilities of the Jupyter notebook. But it offers even more possibilities and options, allowing users to create beautiful, interactive documents.

Cells manipulation

When writing your notebook, you will want more advanced cell manipulation. Thankfully, the notebook allows you to perform a variety of operations on your cells. You can delete a cell by selecting it and then going to Edit -> Delete cell; you can move cells by going to Edit -> Move cell [up | down]; or you can cut a cell and paste it by going to Edit -> Cut Cell and then Edit -> Paste Cell ..., selecting the pasting style you need. You can also merge cells by going to Edit -> Merge cell [above | below], if you find that you have many cells that you execute only once, or if you want a big chunk of code to be executed in a single sweep. Keep these commands in mind when writing your notebook; they will save you a lot of time.

Markdown cells advanced usage

Let's start by exploring the markdown cell type a little more. Even though it says markdown, this type of cell also accepts HTML code. Using this, you can create more advanced styling within your cell, add images, and so on. For example, if you want to add the Jupyter logo to your notebook, with a size of 100 px by 100 px, floated to the left of the cell:

<img src="http://blog.jupyter.org/content/images/2015/02/jupyter-sq-text.png" style="width:100px;height:100px;float:left">

This gives the following after the cell evaluation:

To round off the capabilities of markdown cells: they also support LaTeX syntax. Write your equations in a markdown cell, evaluate the cell, and look at the result. For example, by evaluating this equation:

$$\int_0^{+\infty} x^2 dx$$

you get the rendered LaTeX equation:

Export capabilities

Another powerful feature of the notebook is the export capability. Indeed, you can write your notebook (an illustrated coding course, for example) and export it in multiple formats, such as:

- HTML
- Markdown
- ReST
- PDF (through LaTeX)
- Raw Python

By exporting to PDF, you can create a beautiful document using LaTeX without even using LaTeX! Or you can publish your notebook as a page on your personal website. You can even write documentation for libraries by exporting to ReST.

Matplotlib integration

If you have ever done plotting with Python, then you probably know about matplotlib. Matplotlib is a Python library used to create beautiful plots, and it really shines when used with the Jupyter notebook. To get started using matplotlib in the Jupyter notebook, you need to tell Jupyter to take all the images generated by matplotlib and include them in the notebook. To do that, you just evaluate:

%matplotlib inline

It might take a few seconds to run, but you only need to do this once when you start your notebook. Let's make a plot and see how this integration works:

import matplotlib.pyplot as plt
import numpy as np

x = np.arange(20)
y = x**2

plt.plot(x, y)

This simple code just plots the equation y = x^2. When you evaluate the cell, you get:

As you can see here, the plot is added directly into the notebook, just after the code. We can then change our code, reevaluate it, and the image is updated on the fly. This is a nice feature for every data scientist wanting to have their code and images in the same file, making it clear which code does what exactly. Being able to add some more text to the document is also a great help.
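To see this on-the-fly updating in action, we can dress up the same figure a little. The following sketch reuses the x array from the cell above and simply re-plots with labels, a legend, a grid, and a second curve; the styling choices here are purely illustrative:

# An illustrative follow-up: the same inline plot with labels,
# a legend, a grid, and a second curve. Re-evaluating the cell
# replaces the figure in place.
plt.plot(x, x**2, label='$y = x^2$')
plt.plot(x, x**3, label='$y = x^3$')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Inline matplotlib integration')
plt.legend(loc='upper left')
plt.grid(True)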
Non-local kernels

The Jupyter notebook is built in such a way that it is very easy to start Jupyter on one computer and allow multiple people to connect to the same Jupyter instance over the network. Did you notice the following sentence during Jupyter startup in the previous part of this introduction?

The IPython Notebook is running at: http://localhost:8888/

This means that your notebook is running locally, on your computer, and that you can access it through a browser at http://localhost:8888/. It is possible to make the notebook publicly available by changing the configuration. This allows anyone with the address to connect to this notebook and make modifications to the notebooks remotely, through their Internet browser.
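As a rough sketch of what that configuration change looks like -- assuming the notebook server of this era, whose options live in a Python file generated with jupyter notebook --generate-config -- you would edit something like the following. Exposing a notebook this way should always be combined with a password and, ideally, TLS:

# ~/.jupyter/jupyter_notebook_config.py
c.NotebookApp.ip = '0.0.0.0'        # listen on all network interfaces
c.NotebookApp.port = 8888           # port the server is reachable on
c.NotebookApp.open_browser = False  # do not open a browser on the server machine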
The end word

As we have seen in these two parts, the Jupyter notebook is a very powerful tool, allowing users to create beautiful documents for data exploration, education, documentation, and, really, everything you can think of. Don't hesitate to explore more of its possibilities, and give feedback to the developers if you ever run into trouble, or even if you just want to thank them.

About the author

Marin Gilles is a PhD student in Physics in Dijon, France. A large part of his work is dedicated to physical simulations, for which he developed his own simulation framework using Python, and he has contributed to open source libraries such as Matplotlib and IPython.


Android and iOS Apps Testing at a Glance

Packt
02 Feb 2016
21 min read
In this article by Vijay Velu, the author of Mobile Application Penetration Testing, we will discuss the current state of mobile application security and the approach to testing for vulnerabilities in mobile devices. We will see the major players in the smartphone OS market and how attackers target users through apps. We will deep-dive into the architecture of Android and iOS to understand the platforms and their current security state, focusing specifically on the various vulnerabilities that affect apps. We will look at the Open Web Application Security Project (OWASP) standard used to classify these vulnerabilities. Readers will also get an opportunity to practice the security testing of these vulnerabilities by means of readily available vulnerable mobile applications. The article also covers the step-by-step setup of the environment required to carry out security testing of mobile applications for Android and iOS. Finally, we will explore the threats that may arise from potential vulnerabilities and learn how to classify them according to their risks. (For more resources related to this topic, see here.)

Smartphones' market share

Understanding smartphones' market share gives us a clear picture of what cyber criminals are after and what could potentially be targeted. Mobile application developers can propose and publish their applications on the stores and be rewarded with a revenue share of the selling price. The screenshot taken from www.idc.com shows the overall smartphone OS market in 2015. Since mobile applications are platform-specific, the majority of software vendors are forced to develop applications for all the available operating systems.

Android operating system

Android is an open source, Linux-based operating system for mobile devices (smartphones and tablet computers). It was developed by the Open Handset Alliance, led by Google along with other companies. Android can be programmed in C/C++, but most application development is done in Java (Java accesses C libraries via JNI, which is short for Java Native Interface).

iPhone operating system (iOS)

iOS was developed by Apple Inc. and was originally released in 2007 for the iPhone, iPod Touch, and Apple TV. It is Apple's mobile version of the OS X operating system used in Apple computers. It is UNIX-based, derived from Berkeley Software Distribution (BSD), and applications for it are typically programmed in Objective-C.

Public Android and iOS vulnerabilities

Before we proceed with the different types of vulnerabilities on Android and iOS, this section introduces Android and iOS as operating systems and covers various fundamental concepts that need to be understood to gain experience in mobile application security.
The following table lists the year-wise operating system releases:

Year      | Android                                                                          | iOS
2007/2008 | 1.0                                                                              | iPhone OS 1, iPhone OS 2
2009      | 1.1, 1.5 (Cupcake), 2.0 (Eclair), 2.0.1 (Eclair)                                 | iPhone OS 3
2010      | 2.1 (Eclair), 2.2 (Froyo), 2.3-2.3.2 (Gingerbread)                               | iOS 4
2011      | 2.3.4-2.3.7 (Gingerbread), 3.0-3.2 (Honeycomb), 4.0-4.0.4 (Ice Cream Sandwich)   | iOS 5
2012      | 4.1, 4.2 (Jelly Bean)                                                            | iOS 6
2013      | 4.3 (Jelly Bean), 4.4 (KitKat)                                                   | iOS 7
2014      | 5.0, 5.1 (Lollipop)                                                              | iOS 8
2015      | --                                                                               | iOS 9 (beta)

An interesting piece of research conducted by Hewlett Packard (HP), a software giant that tested more than 2,000 mobile applications from more than 600 companies, reported the following statistics (for more information, visit http://www8.hp.com/h20195/V2/GetPDF.aspx/4AA5-1057ENW.pdf):

97% of the applications tested accessed at least one private information source
86% of the applications failed to use simple binary-hardening protections against modern-day attacks
75% of the applications did not use proper encryption techniques when storing data on a mobile device
71% of the vulnerabilities resided on the web server
18% of the applications sent usernames and passwords over HTTP (and of the remaining 82%, 18% implemented SSL/HTTPS incorrectly)

So, the key vulnerabilities in mobile applications arise from a lack of security awareness, the "usability versus security trade-off" made by developers, excessive application permissions, and a lack of privacy concerns. Coupling this with a lack of sufficient application documentation leads to vulnerabilities that developers are not aware of.

Usability versus security trade-off

It is not feasible for every developer to provide users with an application that offers both high security and high usability; making any application secure and usable takes a lot of effort and analytical thinking.

Mobile application vulnerabilities are broadly categorized as follows (a short illustration of the second category follows this list):

Insecure transmission of data: Either the application does not enforce any kind of encryption for data in transit on the transport layer, or the implemented encryption is insecure.
Insecure data storage: Apps may store data in a cleartext or merely obfuscated format, or keep hard-coded keys on the mobile device. For example, an e-mail client on an Android device may store the username and password in cleartext, which is easy for any attacker to recover if the device is rooted.
Lack of binary protection: Apps do not enforce any anti-reversing or anti-debugging techniques.
Client-side vulnerabilities: Apps do not sanitize data provided from the client side, leading to multiple client-side injection attacks such as cross-site scripting, JavaScript injection, and so on.
Hard-coded passwords/keys: Apps may be designed in such a way that hard-coded passwords or private keys are stored on the device storage.
Leakage of private information: Apps may unintentionally leak private information. This could be due to the use of a particular framework and the obscurity assumptions of developers.
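To see why insecure data storage matters, consider what an attacker with a copy of an app's datastore can do. The following is a minimal, hypothetical sketch (the database file name and table schema are invented for illustration): with cleartext storage, a few lines of code recover every credential.

import sqlite3

# 'mail_app.db' stands in for an application database copied from a rooted device
conn = sqlite3.connect('mail_app.db')
for username, password in conn.execute('SELECT username, password FROM accounts'):
    print(username, password)   # cleartext storage means this just works
conn.close()

Hashing or encrypting values at rest, with keys kept out of the app binary, is what turns this from a data breach into a dead end.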
Android vulnerabilities

In July 2015, a security company called Zimperium announced that it had discovered a high-risk vulnerability named Stagefright in the Android operating system. The company deemed it a unicorn in the world of Android risk, and it was practically demonstrated at a hacking conference in the US on August 5, 2015. More information can be found at https://blog.zimperium.com/stagefright-vulnerability-details-stagefright-detector-tool-released/; a public exploit is available at https://www.exploit-db.com/exploits/38124/. This prompted Google to release security patches for all Android operating systems; the flaw was believed to affect 95% of Android devices, an estimated 950 million users. The vulnerability is exploited through a particular library and can let attackers take control of an Android device by sending a specially crafted multimedia message (MMS).

If we look at the superuser application downloads from the Play Store, there are around 1 million to 5 million downloads, so it can be assumed that a major portion of Android smartphones are rooted.

From 2009 until September 2015, 54 vulnerabilities were reported for Google's Android operating system (for more information, visit http://www.cvedetails.com/product/19997/Google-Android.html?vendor_id=1224). Every feature introduced to the operating system in the form of an application acts as an additional entry point that allows cyber attackers or security researchers to circumvent and bypass the controls that were put in place.

iOS vulnerabilities

On June 18, 2015, a password-stealing vulnerability, also known as Cross Application Reference Attack (XARA), was outlined for iOS and OS X. It cracked the keychain services on jailbroken and non-jailbroken devices. The vulnerability is similar to a cross-site request forgery attack in web applications. In spite of Apple's isolation protection and its App Store's security vetting, it was possible to circumvent the security control mechanisms, which clearly demonstrated the need to protect the cross-app mechanisms between the operating system and the app developer. Apple rolled out a security update a week after the XARA research. More information can be found at http://www.theregister.co.uk/2015/06/17/apple_hosed_boffins_drop_0day_mac_ios_research_blitzkrieg/

From 2007 until September 2015, around 605 vulnerabilities were reported for Apple iPhone OS (for more information, visit http://www.cvedetails.com/product/15556/Apple-Iphone-Os.html?vendor_id=49), and the count has kept increasing year after year. A majority of the reported vulnerabilities are denial-of-service flaws, which make the application unresponsive. Primarily, the vulnerabilities arise from insecure libraries or buffer overflows on the stack.

Rooting/jailbreaking

Rooting/jailbreaking refers to the process of removing the limitations imposed by the operating system on a device through the use of exploit tools. Rooting/jailbreaking enables users to gain complete control over the device's operating system.

OWASP's top ten mobile risks

In 2013, OWASP polled the industry for new vulnerability statistics in the field of mobile applications. The following risks were finalized in 2014 as the top ten dangerous risks, based on the poll data and the mobile application threat landscape:

M1: Weak server-side controls: Internet usage via mobile devices has surpassed fixed Internet access. This is largely due to the emergence of hybrid and HTML5 mobile applications. The application servers that form the backbone of these applications must be secured in their own right.
The OWASP top 10 web application project defines the most prevalent vulnerabilities in this realm. Vulnerabilities such as injection, insecure direct object references, insecure communication, and so on may lead to the complete compromise of an application server. Adversaries who have gained control over compromised servers can push malicious content to all application users and compromise user devices as well.

M2: Insecure data storage: Mobile applications are used for all kinds of tasks, such as playing games, fitness monitoring, online banking, stock trading, and so on, and most of the data used by these applications is either stored on the device itself, inside SQLite files, XML data stores, log files, and so on, or pushed to Cloud storage. The types of sensitive data stored by these applications range from location information to bank account details. The application programming interfaces (APIs) that handle the storage of this data must securely implement encryption/hashing techniques so that an adversary with direct access to these data stores, via theft or malware, cannot decipher the sensitive information stored in them.

M3: Insufficient transport layer protection: "Insecure data storage," as the name says, is about the protection of data at rest. But as all hybrid and HTML5 apps work on a client-server architecture, emphasis on data in motion is a must, as the data has to traverse various channels and is susceptible to eavesdropping and tampering by adversaries. Controls such as SSL/TLS, which enforce the confidentiality and integrity of data, must be verified for correct implementation on the communication channel between the mobile application and its server.

M4: Unintended data leakage: Certain functionalities of mobile applications may place users' sensitive data in locations where it can be accessed by other applications or even by malware. These functionalities may be there to enhance usability or the user experience, but they can have adverse effects in the long run. Actions such as OS data caching, key-press logging, copy/paste buffer caching, and implementations of web beacons or analytics cookies for advertisement delivery can be misused by adversaries to gain information about users.

M5: Poor authorization and authentication: As mobile devices are the most "personal" of devices, developers rely on this to store important data, such as credentials, locally on the device itself, and come up with specific mechanisms to authenticate and authorize users locally for the services that users request via the application. If these mechanisms are poorly developed, adversaries may circumvent these controls and perform unauthorized actions. As the code is available to adversaries, they can perform binary attacks and recompile the code to directly access authorized content.

M6: Broken cryptography: This relates to weak controls used to protect data. Using weak cryptographic algorithms such as RC2, MD5, and so on, which can be cracked by adversaries, leads to encryption failure. Improper encryption key management, where a key is stored in locations accessible to other applications, or the use of a predictable key generation technique, will also break the implemented cryptography.

M7: Client-side injection: Injection vulnerabilities are the most common web vulnerabilities according to the OWASP web top 10 dangerous risks.
They are due to malformed inputs, which cause unintended actions such as the alteration of database queries, command execution, and so on. In the case of mobile applications, malformed inputs can be a serious threat at the local application level as well as on the server side (refer to M1: Weak server-side controls). Injections at the local application level, which mainly target data stores, may result in conditions such as access to paid content that is locked for trial users, or file inclusions that may lead to the abuse of functionalities such as SMS.

M8: Security decisions via untrusted inputs: Implementations of certain functionalities, such as the use of hidden variables to check authorization status, can be bypassed by tampering with them in transit via web service calls or inter-process communication calls. This may lead to privilege escalation and unintended behavior in the mobile application.

M9: Improper session handling: The application server sends back a session token on successful authentication with the mobile application, and these session tokens are then used by the mobile application to request services. If these session tokens remain active for a long duration and adversaries obtain them via malware or theft, the user account can be hijacked (a short sketch follows below).

M10: Lack of binary protection: A mobile application's compiled code is available to everyone. An attacker can reverse engineer the application, insert malicious code components, and recompile it. If such a tampered application is installed by a user, the user is susceptible to data theft and may become the victim of unintended actions. Most applications do not ship with mechanisms such as checksum controls, which help detect whether the application has been tampered with.

In 2015, another poll was run under the OWASP mobile security group's "umbrella project". In its draft results, M10 moved up to M2, the trend being that a lack of binary protection is overtaking weak server-side controls; however, we will have to wait for the final 2015 list. More details can be found at https://www.owasp.org/images/9/96/OWASP_Mobile_Top_Ten_2015_-_Final_Synthesis.pdf.
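As a counterpoint to M9, the following is a minimal sketch of safer session-token handling. The token source (the OS cryptographic random number generator) is the essential part; the 15-minute lifetime is an arbitrary illustrative choice, not a prescribed value:

import os, binascii, time

def new_session_token(ttl_seconds=900):
    # 32 bytes from the OS CSPRNG -> an unpredictable, non-guessable token
    token = binascii.hexlify(os.urandom(32)).decode()
    expires_at = time.time() + ttl_seconds   # short lifetime limits the hijack window
    return token, expires_at

def token_is_valid(expires_at):
    return time.time() < expires_at

token, expires_at = new_session_token()
print(token, token_is_valid(expires_at))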
Vulnerable applications to practice on

The open source community has proactively designed plenty of mobile applications that can be used for practical tests. These are specifically designed for understanding the OWASP top ten risks. Some of these applications are as follows:

iMAS: iMAS is a collaborative research project initiated by the MITRE Corporation (http://www.mitre.org/) for application developers and security researchers who would like to learn more about attack and defense techniques in iOS. More information about iMAS can be found at https://github.com/project-imas/about.
GoatDroid: A simple, functional mobile banking application for training, with location tracking, developed by Jack and Ken for Android application security; it is a great starting point for beginners. More information about GoatDroid can be found at https://github.com/jackMannino/OWASP-GoatDroid-Project.
iGoat: The OWASP iGoat project is similar to the WebGoat web application framework and is designed to improve iOS assessment techniques for developers. More information on iGoat can be found at https://code.google.com/p/owasp-igoat/.
Damn Vulnerable iOS Application (DVIA): DVIA is an iOS application that gives developers, testers, and security researchers a platform to test their penetration testing skills. It covers all of the OWASP top 10 mobile risks and also contains several challenges that one can solve and come up with custom solutions for. More information on the Damn Vulnerable iOS Application can be found at http://damnvulnerableiosapp.com/.
MobiSec: MobiSec is a live environment for the penetration testing of mobile environments. This framework provides devices, applications, and supporting infrastructure, and it is a great exercise for testers to view vulnerabilities from different points of view. More information on MobiSec can be found at http://sourceforge.net/p/mobisec/wiki/Home/.

Android application sandboxing

Android utilizes the well-established Linux protection ring model to isolate applications from each other. In Linux, every user is segregated by being assigned a unique ID, which ensures that there is no cross-account data access. Similarly, in Android, every app is assigned its own unique ID and is run as a separate process. As a result, an application sandbox is formed at the kernel level, and each application is only able to access the resources it is permitted to access. This subsequently ensures that an app does not breach its work boundaries and initiate any malicious activity. The unique Linux user ID created per application is validated every time a resource mapped to the app is accessed, thus ensuring a form of access control.

Android Studio and SDK

On May 16, 2013, at the Google I/O conference, an Integrated Development Environment (IDE) called Android Studio was released by Katherine Chou under the Apache 2.0 license; it is used to develop apps on the Android platform. It entered the beta stage in 2014, its first stable release (version 1.0) arrived in December 2014, and it was announced as the official IDE on September 15, 2015. Information on Android Studio and the SDK is available at http://developer.android.com/tools/studio/index.html#build-system.

Android Studio and the SDK depend heavily on the Java SE Development Kit, which can be downloaded at http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html. Some developers prefer different IDEs, such as Eclipse; for them, Google offers SDK-only downloads (http://dl.google.com/android/installer_r24.4.1-windows.exe).

There are minimum system requirements that need to be fulfilled in order to install and use Android Studio effectively. The following procedure installs Android Studio on a Windows 7 Professional 64-bit operating system with 4 GB of RAM, 500 GB of hard disk space, and Java Development Kit 7 installed:

Install the IDE, which is available for Linux, Windows, and Mac OS X. Android Studio can be downloaded by visiting http://developer.android.com/sdk/index.html.
Once Android Studio is downloaded, run the installer file. By default, an installation window will be shown. Click on Next.
The setup will automatically check whether the system meets the requirements. Choose all the components that are required and click on Next.
It is recommended to read and accept the license, then click on Next.
It is always recommended to create a new folder in which to install the tools, which will help us track all the evidence in a single place.
In this case, we have created a folder called Hackbox in C:. Now, we can allocate the space required for the Android accelerated execution environment, which provides better performance; it is recommended to allocate a minimum of 2 GB for it. All the necessary files will be extracted to C:\Hackbox. Once the installation is complete, you will be able to launch Android Studio.

Android SDK

The Android SDK gives developers the ability to completely build, test, and debug apps that run on the Android platform. It includes all the relevant software libraries, APIs, system images for the emulators, documentation, and other tools that help create an Android app. We installed Android Studio together with the Android SDK, and it is crucial to understand how to utilize the built-in SDK tools as much as possible. This section provides an overview of some of the critical tools that we will use when attacking an Android app during penetration testing.

Emulators, simulators, and real devices

We sometimes tend to believe that virtual emulation works exactly the same way as a real device, which is not really the case, especially for Android: multiple OEMs manufacture multiple devices with different chipsets running different versions of Android. It would be a challenge for developers to ensure that all of an app's functionality behaves identically on all devices. It is therefore very important to understand the differences between an emulator, a simulator, and a real device.

Simulators

The objective of a simulator is to reproduce the state and behavior of the real object as closely as possible; it is preferable when testing how a mobile app interacts with some of the natural behavior of the available resources. Simulators are reimplementations of the original software, mostly written in high-level languages, and they are difficult to debug.

Emulators

Emulators predominantly aim at replicating the closest possible behavior of mobile devices. They are typically used to test a mobile's internal behavior, such as hardware, software, and firmware updates. They are typically written in machine-level languages, are easy to debug, and are, again, reimplementations of the real software.

Pros:

Fast, simple, and little or no cost involved
Emulators/simulators are readily available to test the majority of the functionality of the app being developed
It is very easy to find defects using emulators and fix issues

Cons:

The risk of false positives is increased; some functions or protections may not actually work on a real device.
Differences in software and hardware will arise. Some emulators might be able to mimic the hardware, but behavior may differ when the app is actually installed on that particular hardware.
There is a lack of network interoperability. Since emulators are not really connected to a Wi-Fi or cellular network, it may not be possible to test network-based risks and functions.

Real devices

Real devices are the physical devices that users will actually interact with. There are pros and cons to real devices too.
Pros:

Fewer false positives: Results are accurate
Interoperability: All the test cases run in a live environment
User experience: You get the real user experience with respect to CPU utilization, memory, and so on for a given device
Performance: Performance issues can be found quickly with real handsets

Cons:

Cost: There are plenty of OEMs, and buying all the devices is not viable.
A slowdown in development: It may not be possible to connect an IDE to a real device as easily as to an emulator, which significantly slows down the development process.
Other issues: Devices connected locally to the workstation require open USB ports, creating an additional entry point.

Threats

A threat is something that can harm an asset that we are trying to protect. In mobile device security, a threat is a possible danger that might exploit a vulnerability to compromise a device and cause potential harm. A threat can be defined by its motive, which can be any of the following:

Intentional: An individual or a group aiming to break an application and steal information
Accidental: The malfunctioning of a device or an application, which may lead to the potential disclosure of sensitive information
Others: Capabilities, circumstances, and so on

Threat agents

A threat agent is an individual or a group that can manifest a threat. Threat agents are able to perform the following actions:

Access
Misuse
Disclose
Modify
Deny access

Vulnerability

A security weakness within a system that might allow attackers to exploit it and break the security of the device is called a vulnerability. For example, if a mobile device is stolen and does not have a PIN or passcode enabled, the phone is vulnerable to data theft.

Risk

A risk is the intersection of asset (A), threat (T), and vulnerability (V); including the probability (P) of the threat occurring provides more value to the business:

Risk = A x T x V x P

These terms help us understand the real risk to a given asset, and a business benefits only if these risks are accurately assessed. Understanding threat, vulnerability, and risk is the first step in threat modeling. For a given application, no vulnerabilities, or a vulnerability with no threats, is considered a low risk. The short sketch below makes this weighting concrete.
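The following is a minimal, purely illustrative sketch; the 0-to-1 weights are invented for the example rather than a standard scale, and in practice the ratings come from a proper threat-modeling exercise:

def risk_score(asset_value, threat_level, vulnerability_severity, probability):
    # Each factor is normalized to the range 0..1. Any factor at 0 drives the
    # product to 0, matching the rule that a vulnerability with no threat
    # (or no threat to a worthless asset) is a low risk.
    return asset_value * threat_level * vulnerability_severity * probability

# Valuable asset, moderate threat, severe vulnerability, likely occurrence:
print(risk_score(0.9, 0.5, 0.8, 0.7))   # -> 0.252 on a 0..1 scale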
Summary

In this article, we saw that mobile devices are susceptible to attack through various threats, which exist due to the lack of sufficient security measures at various stages of a mobile application's development. It is necessary to understand how these threats manifest and to learn how to test for and mitigate them effectively. Proper knowledge of the underlying architecture and of the tools available for testing mobile applications will help developers and security testers alike protect end users from attackers who may attempt to leverage these vulnerabilities.


Cluster Basics and Installation On CentOS 7

Packt
01 Feb 2016
8 min read
In this article by Gabriel A. Canepa, author of the book CentOS High Performance, we will review the basic principles of clustering and show you, step by step, how to set up two CentOS 7 servers as nodes to later use as members of a cluster. (For more resources related to this topic, see here.) As part of this process, we will install CentOS 7 from scratch on a brand new server as our first cluster member, along with the necessary packages, and finally configure key-based authentication for SSH access from one node to the other.

Clustering fundamentals

In computing, a cluster consists of a group of computers (referred to as nodes or members) that work together so that the set is seen as a single system from the outside. One typical cluster setup involves assigning a different task to each node, thus achieving higher performance than if several tasks were performed by a single member on its own. Another classic use of clustering is helping to ensure high availability by providing failover capabilities, where one node may automatically replace a failed member to minimize the downtime of one or more critical services. In either case, the concept of clustering implies not only taking advantage of each member's computing functionality on its own, but also maximizing it by complementing it with the others'.

As we just mentioned, HA (high-availability) clusters aim to eliminate system downtime by failing services over from one node to another when one of them experiences an issue that renders it inoperative. As opposed to a switchover, which requires human intervention, a failover is performed automatically by the cluster without any downtime; in other words, the operation is transparent to end users and clients outside the cluster. On the other hand, HP (high-performance) clusters use their nodes to perform operations in parallel in order to enhance the performance of one or more applications. High-performance clusters are typically seen in scenarios involving applications that use large collections of data.

Why CentOS?

Just as the saying goes, every journey begins with a small step, so we will begin our own journey toward clustering by setting up the separate nodes that will make up our system. Our choice of operating system is Linux, with CentOS version 7 as the distribution, that being the latest available release of CentOS as of this writing. The binary compatibility with Red Hat Enterprise Linux (one of the most widely used distributions in enterprise and scientific environments), along with its well-proven stability, are the reasons behind this decision. CentOS 7, along with previous versions of the distribution, is available for download, free of charge, from the project's website at http://www.centos.org/. In addition, specific details about the release can always be consulted in the CentOS wiki at http://wiki.centos.org/Manuals/ReleaseNotes/CentOS7.

Among the distinguishing features of CentOS 7 are the following:

It includes systemd as the central system management and configuration utility
It uses XFS as the default filesystem
It only supports the x86_64 architecture

Downloading CentOS

To download CentOS, go to http://www.centos.org/download/ and click on one of the three download options for CentOS 7. These options are detailed as follows:

DVD ISO (~4 GB) is an .iso file that can be burned to regular DVD optical media and includes the common tools.
Download this file if you have immediate access to a reliable Internet connection that you can use to download other packages and utilities.

Everything ISO (~7 GB) is an .iso file with the complete set of packages made available in the base repository of CentOS 7. Download this file if you do not have access to a reliable Internet connection, or if your plan contemplates the possibility of installing or populating a local or network mirror.

The alternative downloads link takes you to a public directory on an official nearby CentOS mirror, where the previous options are available along with others, including different desktop versions (GNOME or KDE) and the minimal .iso file (~570 MB), which contains the bare-bones packages of the distribution. As the minimal install is sufficient for our purpose at hand (we can install other needed packages using yum later), that is the recommended .iso file to download:

CentOS-7.X-YYMM-x86_64-Minimal.iso

Here, X indicates the current update number of CentOS 7, and YYMM represents the year and month, both in two-digit notation, when the source code this version is based on was released. For example, the following file name tells us that the source code this release is based on dates from June, 2014:

CentOS-7.0-1406-x86_64-Minimal.iso

Regardless of your preferred download method, we will need this .iso file in order to begin the installation; feel free to burn it to optical media or a USB drive.

Setting up CentOS 7 nodes

If you do not have dedicated hardware with which to set up the nodes of your cluster, you can still create them as virtual machines using virtualization software such as Oracle VirtualBox or VMware. The following setup is performed on a VirtualBox VM with 1 GB of RAM and 30 GB of disk space. We will use the default partitioning schema over LVM, as suggested by the installation process.

Installing CentOS 7

The splash screen is the first step in the installation process. Highlight Install CentOS 7 using the up and down arrows and press Enter.

Select English (or your preferred installation language) and click on Continue.

On the next screen, you can choose a keyboard layout, set the current date and time, choose a partitioning method, connect the main network interface, and assign a unique hostname for the node. We will name the current node node01 and leave the rest of the settings at their defaults (we will configure the extra network card later). Then, click on Begin installation.

While the installation continues in the background, we will be prompted to set the password for the root account and create an administrative user for the node. Once these steps have been completed, the corresponding warnings no longer appear.

When the process is complete, click on Finish configuration, and the installation will finish configuring the system and devices. When the system is ready to boot on its own, you will be prompted to do so. Remove the installation media and click on Reboot. Now, we can proceed with setting up our network interfaces.
Setting up the network infrastructure

Our rather basic network infrastructure consists of two CentOS 7 boxes, with the hostnames node01 [192.168.0.2] and node02 [192.168.0.3], and a gateway router simply called gateway [192.168.0.1]. In CentOS, network cards are configured using scripts in the /etc/sysconfig/network-scripts directory. This is the minimum content needed in /etc/sysconfig/network-scripts/ifcfg-enp0s3 for our purposes:

HWADDR="08:00:27:C8:C2:BE"
TYPE="Ethernet"
BOOTPROTO="static"
NAME="enp0s3"
ONBOOT="yes"
IPADDR="192.168.0.2"
NETMASK="255.255.255.0"
GATEWAY="192.168.0.1"
PEERDNS="yes"
DNS1="8.8.8.8"
DNS2="8.8.4.4"

Note that the HWADDR value (and the UUID, if your file contains one) will be different in your case. In addition, be aware that cluster machines need to be assigned a static IP address -- never leave that up to DHCP! In the preceding configuration file, we used Google's DNS, but feel free to use another DNS if you wish.

When you're done making changes, save the file and restart the network service in order to apply them:

systemctl restart network.service # Restart the network service

You can verify that the previous changes have taken effect with the following two commands:

systemctl status network.service # Display the status of the network service
ip addr | grep 'inet ' # Display the IP addresses

You can disregard any error messages related to the loopback interface. However, you will need to carefully examine any error messages related to the enp0s3 interface, if there are any, and resolve them in order to proceed further.

The second interface will be called enp0sX, where X is typically 8. You can verify this with the following command:

ip link show

As for the configuration file for enp0s8, you can safely create it by copying the contents of ifcfg-enp0s3. Do not forget, however, to change the hardware (MAC) address to the one reported for that NIC, and leave the IP address field blank for now:

ip link show enp0s8
cp /etc/sysconfig/network-scripts/ifcfg-enp0s3 /etc/sysconfig/network-scripts/ifcfg-enp0s8

Then, restart the network service. Note that you will also need to set up at least a basic DNS resolution method. Considering that we will set up a cluster with only two nodes, we will use /etc/hosts for this purpose. Edit /etc/hosts and add the following content:

192.168.0.2 node01
192.168.0.3 node02
192.168.0.1 gateway

A quick sanity check of this name resolution from either node is sketched below.
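The following is a minimal sketch (using Python, which ships with CentOS 7, purely for illustration) that confirms the /etc/hosts entries resolve and that SSH is reachable on the peer node:

import socket

# Resolution goes through the system resolver, which consults /etc/hosts
for host in ('node01', 'node02', 'gateway'):
    print(host, '->', socket.gethostbyname(host))

# Confirm that the SSH port on the peer node is reachable
sock = socket.create_connection(('node02', 22), timeout=5)
print('SSH port on node02 is reachable')
sock.close()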
Summary

In this article, we reviewed how to install the operating system and listed the necessary software components for implementing basic cluster functionality.

Resources for Article:

Further resources on this subject:

CentOS 7's new firewalld service [article]
Mastering CentOS 7 Linux Server [article]
Resource Manager on CentOS 6 [article]

Scenes and Menus

Packt
01 Feb 2016
19 min read
In this article by Siddharth Shekar, author of the book Cocos2d Cross-Platform Game Development Cookbook, Second Edition, we will cover the following recipes:

Adding level selection scenes
Scrolling level selection scenes

(For more resources related to this topic, see here.)

Scenes are the building blocks of any game. Generally, a game has a main menu scene from which you can navigate to different scenes, such as GameScene, OptionsScene, and CreditsScene, and each of these scenes has menus. For example, MainScene has a play button as part of a menu that, when pressed, takes the player to GameScene, where the gameplay code runs.

Adding level selection scenes

In this section, we will look at how to add a level selection scene containing a button for each level you want to play; selecting a button loads that particular level.

Getting ready

To create a level selection screen, you need a custom sprite that shows a background image for the button and a text label showing the level number. We will create these buttons first. Once the button sprites are created, we will create a new scene that we will populate with the background image, the name of the scene, the array of buttons, and the logic to change the scene to the particular level.

How to do it...

We will create a new Cocoa Touch class with CCSprite as the parent class and call it LevelSelectionBtn. Then, we will open up the LevelSelectionBtn.h file and add the following lines of code to it:

#import "CCSprite.h"

@interface LevelSelectionBtn : CCSprite

-(id)initWithFilename:(NSString *)filename StartlevelNumber:(int)lvlNum;

@end

We create a custom init function; into it, we pass the name of the file for the image that will be the base of the button, and an integer that will be used to display the text on top of the base button image. This is all that is required for the header.

In the LevelSelectionBtn.m file, we will add the following lines of code:

#import "LevelSelectionBtn.h"

@implementation LevelSelectionBtn

-(id)initWithFilename:(NSString *)filename StartlevelNumber:(int)lvlNum{
  if (self = [super initWithImageNamed:filename]) {
    CCLOG(@"Filename: %@ and levelNumber: %d", filename, lvlNum);
    CCLabelTTF *textLabel = [CCLabelTTF labelWithString:[NSString stringWithFormat:@"%d", lvlNum]
                                               fontName:@"AmericanTypewriter-Bold"
                                               fontSize:12.0f];
    textLabel.position = ccp(self.contentSize.width / 2, self.contentSize.height / 2);
    textLabel.color = [CCColor colorWithRed:0.1f green:0.45f blue:0.73f];
    [self addChild:textLabel];
  }
  return self;
}

@end

In our custom init function, we first log the arguments to confirm that we are sending the correct data in. Then, we create a text label, converting the integer to a string to pass in. The label is placed at the center of the current sprite's base image by dividing the content size of the image by half. As the background of the base image and the text are both white, the color of the text is changed to blue so that the text is actually visible. Finally, we add the text label to the current class.

This is all for the LevelSelectionBtn class. Next, we will create LevelSelectionScene, in which we will add the sprite buttons and the logic for when a button is pressed.
So, we will now create a new class, LevelSelectionScene, and in the header file, we will add the following lines:

#import "CCScene.h"

@interface LevelSelectionScene : CCScene{
  NSMutableArray *buttonSpritesArray;
}

+(CCScene*)scene;

@end

Note that, apart from the usual code, we also create an NSMutableArray called buttonSpritesArray, which will be used later in the code. Next, in the LevelSelectionScene.m file, we will add the following:

#import "LevelSelectionScene.h"
#import "LevelSelectionBtn.h"
#import "GameplayScene.h"

@implementation LevelSelectionScene

+(CCScene*)scene{
  return [[self alloc]init];
}

-(id)init{
  if(self = [super init]){
    CGSize winSize = [[CCDirector sharedDirector]viewSize];

    //Add the background image
    CCSprite* backgroundImage = [CCSprite spriteWithImageNamed:@"Bg.png"];
    backgroundImage.position = CGPointMake(winSize.width/2, winSize.height/2);
    [self addChild:backgroundImage];

    //Add the text heading for the scene
    CCLabelTTF *mainmenuLabel = [CCLabelTTF labelWithString:@"LevelSelectionScene"
                                                   fontName:@"AmericanTypewriter-Bold"
                                                   fontSize:36.0f];
    mainmenuLabel.position = CGPointMake(winSize.width/2, winSize.height * 0.8);
    [self addChild:mainmenuLabel];

    //Initialize the array
    buttonSpritesArray = [NSMutableArray array];

    int widthCount = 5;
    int heightCount = 5;
    float spacing = 35.0f;
    float halfWidth = winSize.width/2 - (widthCount-1) * spacing * 0.5f;
    float halfHeight = winSize.height/2 + (heightCount-1) * spacing * 0.5f;
    int levelNum = 1;

    for(int i = 0; i < heightCount; ++i){
      float y = halfHeight - i * spacing;
      for(int j = 0; j < widthCount; ++j){
        float x = halfWidth + j * spacing;
        LevelSelectionBtn* lvlBtn = [[LevelSelectionBtn alloc] initWithFilename:@"btnBG.png"
                                                               StartlevelNumber:levelNum];
        lvlBtn.position = CGPointMake(x,y);
        lvlBtn.name = [NSString stringWithFormat:@"%d",levelNum];
        [self addChild:lvlBtn];
        [buttonSpritesArray addObject: lvlBtn];
        levelNum++;
      }
    }
  }
  return self;
}

Here, we add the background image and heading text for the scene and initialize NSMutableArray. We then create six new variables, as follows:

widthCount: The number of columns we want
heightCount: The number of rows we want
spacing: The distance between the sprite buttons, so that they don't overlap
halfWidth: The distance along the x axis from the center of the screen to the position of the upper-left sprite button
halfHeight: The distance along the y axis from the center of the screen to the position of the upper-left sprite button
levelNum: A counter with an initial value of 1, incremented each time a button is created, used for the text shown on the button sprite

In the double loop, we calculate the x and y coordinates of each button sprite. For the y position, we subtract the spacing multiplied by the i counter from halfHeight; as the value of i is initially 0, the y value for the topmost row is halfHeight itself. For the x position, we add the spacing multiplied by the j counter to halfWidth, so x is incremented by the spacing for each column. After getting the x and y positions, we create a new LevelSelectionBtn sprite, passing in the btnBG.png image along with the value of levelNum to create the button sprite.
We set the position to the x and y values we calculated earlier. To be able to refer to each button by number, we assign the sprite a name matching its level number by converting levelNum to a string. The button is then added to the scene, and also to the array we created globally, as we will need to cycle through the buttons later. Finally, we increment the value of levelNum.

However, we have still not added any interactivity to the sprite buttons so that, when one is pressed, it loads the required level. To add touch interactivity, we will use the touchBegan function built right into Cocos2d. We will create more complex interfaces later, but for now, the basic touchBegan function will do. In the same file, add the following code between the init function and @end:

-(void)touchBegan:(CCTouch *)touch withEvent:(CCTouchEvent *)event{
  CGPoint location = [touch locationInNode:self];
  for (CCSprite *sprite in buttonSpritesArray)
  {
    if (CGRectContainsPoint(sprite.boundingBox, location)){
      CCLOG(@" you have pressed: %@", sprite.name);
      CCTransition *transition = [CCTransition transitionCrossFadeWithDuration:0.20];
      [[CCDirector sharedDirector]replaceScene:[[GameplayScene alloc]initWithLevel:sprite.name]
                                withTransition:transition];
      self.userInteractionEnabled = false;
    }
  }
}

The touchBegan function is called each time we touch the screen. Once the screen is touched, the function gets the location of the touch and stores it in a variable called location. Then, using a for-in loop, we loop through all the button sprites we added to the array. Using the CGRectContainsPoint function, we check whether the touched location is inside the rect of any of the sprites in the loop. We log the button number that was clicked on so that we can confirm in the console that the right level will load. A crossfade transition is created, and the current scene is swapped with GameplayScene, initialized with the name of the sprite that was clicked on. Finally, we set the userInteractionEnabled Boolean to false so that the current class stops listening for touches.

For the class to receive touches in the first place, this Boolean has to be enabled at the top of the init function, as highlighted here:

    if(self = [super init]){
      self.userInteractionEnabled = TRUE;
      CGSize winSize = [[CCDirector sharedDirector]viewSize];

How it works...

We are done with the LevelSelectionScene class, but we still need to add a button in MainScene to open LevelSelectionScene.
In MainScene, we will add the following lines to the init function, in which we add menuBtn and the function to be called once the button is clicked on:

CCButton *playBtn = [CCButton buttonWithTitle:nil
  spriteFrame:[CCSpriteFrame frameWithImageNamed:@"playBtn_normal.png"]
  highlightedSpriteFrame:[CCSpriteFrame frameWithImageNamed:@"playBtn_pressed.png"]
  disabledSpriteFrame:nil];
[playBtn setTarget:self selector:@selector(playBtnPressed:)];

CCButton *menuBtn = [CCButton buttonWithTitle:nil
  spriteFrame:[CCSpriteFrame frameWithImageNamed:@"menuBtn.png"]
  highlightedSpriteFrame:[CCSpriteFrame frameWithImageNamed:@"menuBtn.png"]
  disabledSpriteFrame:nil];
[menuBtn setTarget:self selector:@selector(menuBtnPressed:)];

CCLayoutBox * btnMenu;
btnMenu = [[CCLayoutBox alloc] init];
btnMenu.anchorPoint = ccp(0.5f, 0.5f);
btnMenu.position = CGPointMake(winSize.width/2, winSize.height * 0.5);
btnMenu.direction = CCLayoutBoxDirectionVertical;
btnMenu.spacing = 10.0f;

[btnMenu addChild:menuBtn];
[btnMenu addChild:playBtn];

[self addChild:btnMenu];

Don't forget to include the menuBtn.png file from the resources folder of the project; otherwise, you will get a build error. Next, also add the menuBtnPressed function, which is called once menuBtn is pressed and released, as follows:

-(void)menuBtnPressed:(id)sender{
  CCLOG(@"menu button pressed");
  CCTransition *transition = [CCTransition transitionCrossFadeWithDuration:0.20];
  [[CCDirector sharedDirector]replaceScene:[[LevelSelectionScene alloc]init]
                            withTransition:transition];
}

Click on the menu button below the play button, and you will see LevelSelectionScene in all its glory. Now, click on any of the buttons to open the gameplay scene displaying the number you clicked on. In this case, clicking button number 18 shows 18 in the gameplay scene when it loads.

Scrolling level selection scenes

If your game has, say, 20 levels, it is okay to have a single level selection scene to display all the level buttons -- but what if you have more? In this section, we will modify the previous section's code, create a node, and customize the class to create a scrollable level selection scene.

Getting ready

We will create a new class called LevelSelectionLayer, inherit from CCNode, and move all the content we added in LevelSelectionScene into it. This is done so that we have a separate class we can instantiate as many times as we want in the game.

How to do it...

In the LevelSelectionLayer.h file, we will change the code to the following:

#import "CCNode.h"

@interface LevelSelectionLayer : CCNode {
  NSMutableArray *buttonSpritesArray;
}

-(id)initLayerWith:(NSString *)filename
  StartlevelNumber:(int)lvlNum
        widthCount:(int)widthCount
       heightCount:(int)heightCount
           spacing:(float)spacing;

@end

We change the init function so that, instead of hardcoding the values, we can create a more flexible level selection layer.
In the LevelSelectionLayer.m file, we will add the following:

#import "LevelSelectionLayer.h"
#import "LevelSelectionBtn.h"
#import "GameplayScene.h"

@implementation LevelSelectionLayer

- (void)onEnter{
  [super onEnter];
  self.userInteractionEnabled = YES;
}

- (void)onExit{
  [super onExit];
  self.userInteractionEnabled = NO;
}

-(id)initLayerWith:(NSString *)filename StartlevelNumber:(int)lvlNum widthCount:(int)widthCount heightCount:(int)heightCount spacing:(float)spacing{
  if(self = [super init]){
    CGSize winSize = [[CCDirector sharedDirector]viewSize];
    self.contentSize = winSize;
    buttonSpritesArray = [NSMutableArray array];

    float halfWidth = self.contentSize.width/2 - (widthCount-1) * spacing * 0.5f;
    float halfHeight = self.contentSize.height/2 + (heightCount-1) * spacing * 0.5f;
    int levelNum = lvlNum;

    for(int i = 0; i < heightCount; ++i){
      float y = halfHeight - i * spacing;
      for(int j = 0; j < widthCount; ++j){
        float x = halfWidth + j * spacing;
        LevelSelectionBtn* lvlBtn = [[LevelSelectionBtn alloc] initWithFilename:filename
                                                               StartlevelNumber:levelNum];
        lvlBtn.position = CGPointMake(x,y);
        lvlBtn.name = [NSString stringWithFormat:@"%d",levelNum];
        [self addChild:lvlBtn];
        [buttonSpritesArray addObject: lvlBtn];
        levelNum++;
      }
    }
  }
  return self;
}

-(void)touchBegan:(CCTouch *)touch withEvent:(CCTouchEvent *)event{
  CGPoint location = [touch locationInNode:self];
  CCLOG(@"location: %f, %f", location.x, location.y);
  CCLOG(@"touched");
  for (CCSprite *sprite in buttonSpritesArray)
  {
    if (CGRectContainsPoint(sprite.boundingBox, location)){
      CCLOG(@" you have pressed: %@", sprite.name);
      CCTransition *transition = [CCTransition transitionCrossFadeWithDuration:0.20];
      [[CCDirector sharedDirector]replaceScene:[[GameplayScene alloc]initWithLevel:sprite.name]
                                withTransition:transition];
    }
  }
}

@end

The major changes are as follows. First, we now add and remove the touch functionality using the onEnter and onExit functions. Second, we set the contentSize of the node to winSize, and when calculating the upper-left coordinate of the first button, we no longer use winSize for the center but the contentSize of the node.
Let's move on to LevelSelectionScene now; in its header file, we will use the following code:

#import "CCScene.h"

@interface LevelSelectionScene : CCScene{
  int layerCount;
  CCNode *layerNode;
}

+(CCScene*)scene;

@end

In the header file, we add two global variables:

The layerCount variable keeps count of the total layer nodes we add
The layerNode variable is an empty node to which all the layer nodes are added for convenience, so that we can move them back and forth as a whole instead of moving each layer node individually

Next, in the LevelSelectionScene.m file, we will add the following:

#import "LevelSelectionScene.h"
#import "LevelSelectionBtn.h"
#import "GameplayScene.h"
#import "LevelSelectionLayer.h"

@implementation LevelSelectionScene

+(CCScene*)scene{
  return [[self alloc]init];
}

-(id)init{
  if(self = [super init]){
    CGSize winSize = [[CCDirector sharedDirector]viewSize];
    layerCount = 1;

    //Basic CCSprite - background image
    CCSprite* backgroundImage = [CCSprite spriteWithImageNamed:@"Bg.png"];
    backgroundImage.position = CGPointMake(winSize.width/2, winSize.height/2);
    [self addChild:backgroundImage];

    CCLabelTTF *mainmenuLabel = [CCLabelTTF labelWithString:@"LevelSelectionScene"
                                                   fontName:@"AmericanTypewriter-Bold"
                                                   fontSize:36.0f];
    mainmenuLabel.position = CGPointMake(winSize.width/2, winSize.height * 0.8);
    [self addChild:mainmenuLabel];

    //Empty node
    layerNode = [[CCNode alloc]init];
    [self addChild:layerNode];

    int widthCount = 5;
    int heightCount = 5;
    float spacing = 35;

    for(int i=0; i<3; i++){
      LevelSelectionLayer* lsLayer = [[LevelSelectionLayer alloc] initLayerWith:@"btnBG.png"
                                                               StartlevelNumber:widthCount * heightCount * i + 1
                                                                     widthCount:widthCount
                                                                    heightCount:heightCount
                                                                        spacing:spacing];
      lsLayer.position = ccp(winSize.width * i, 0);
      [layerNode addChild:lsLayer];
    }

    CCButton *leftBtn = [CCButton buttonWithTitle:nil
      spriteFrame:[CCSpriteFrame frameWithImageNamed:@"left.png"]
      highlightedSpriteFrame:[CCSpriteFrame frameWithImageNamed:@"left.png"]
      disabledSpriteFrame:nil];
    [leftBtn setTarget:self selector:@selector(leftBtnPressed:)];

    CCButton *rightBtn = [CCButton buttonWithTitle:nil
      spriteFrame:[CCSpriteFrame frameWithImageNamed:@"right.png"]
      highlightedSpriteFrame:[CCSpriteFrame frameWithImageNamed:@"right.png"]
      disabledSpriteFrame:nil];
    [rightBtn setTarget:self selector:@selector(rightBtnPressed:)];

    CCLayoutBox * btnMenu;
    btnMenu = [[CCLayoutBox alloc] init];
    btnMenu.anchorPoint = ccp(0.5f, 0.5f);
    btnMenu.position = CGPointMake(winSize.width * 0.5, winSize.height * 0.2);
    btnMenu.direction = CCLayoutBoxDirectionHorizontal;
    btnMenu.spacing = 300.0f;
    [btnMenu addChild:leftBtn];
    [btnMenu addChild:rightBtn];
    [self addChild:btnMenu z:4];
  }
  return self;
}

-(void)rightBtnPressed:(id)sender{
  CCLOG(@"right button pressed");
  CGSize winSize = [[CCDirector sharedDirector]viewSize];
  if(layerCount >= 0){
    CCAction* moveBy = [CCActionMoveBy actionWithDuration:0.20
                                                 position:ccp(-winSize.width, 0)];
    [layerNode runAction:moveBy];
    layerCount--;
  }
}

-(void)leftBtnPressed:(id)sender{
  CCLOG(@"left button pressed");
  CGSize winSize = [[CCDirector sharedDirector]viewSize];
  if(layerCount <= 0){
    CCAction* moveBy = [CCActionMoveBy actionWithDuration:0.20
                                                 position:ccp(winSize.width, 0)];
    [layerNode runAction:moveBy];
    layerCount++;
  }
}

@end
How it works...

Apart from adding the usual background and text, we initialize layerCount to 1 and initialize the empty layerNode variable. Next, in a for loop, we add the three level selection layers, passing in the btnBG.png image, the starting level number of each selection layer, the width count, the height count, and the spacing between the buttons. Note how the layers are positioned a full screen width apart from each other: the first one is visible to the player, while the subsequent layers are added offscreen, similar to how we placed the second image offscreen while creating the parallax effect. Each level selection layer is then added to layerNode as a child.

We also create the left and right buttons so that we can move layerNode left and right when they are clicked on, along with two functions, leftBtnPressed and rightBtnPressed, containing the functionality for when the left or right button is pressed.

Let's look at the rightBtnPressed function first. Once the button is pressed, we log the press and get the size of the window. We then check whether the value of layerCount is greater than or equal to zero, which is true initially, as we set the value to 1. We create a moveBy action, passing in the negative window width for the movement in the x direction and 0 for the movement in the y direction, as we want movement only along x, with a duration of 0.20 seconds. The action is then run on layerNode, and the layerCount value is decremented. In the leftBtnPressed function, the opposite is done to move the layer in the opposite direction.

Run the game to see the change in LevelSelectionScene. As you can't go left initially, pressing the left button won't do anything; however, if you press the right button, you will see the layer scroll to show the next set of buttons.

Summary

In this article, we learned about adding level selection scenes and scrolling level selection scenes in Cocos2d.

Resources for Article:

Further resources on this subject:

Getting started with Cocos2d-x [article]
Dragging a CCNode in Cocos2D-Swift [article]
Run Xcode Run [article]
The Vertex Functions

Packt
01 Feb 2016
18 min read
In this article by Alan Zucconi, author of the book Unity 5.x Shaders and Effects Cookbook, we will see that the term shader originates from the fact that Cg has been mainly used to simulate realistic lighting conditions (shadows) on three-dimensional models. Despite this, shaders are now much more than that. They not only define the way objects are going to look, but can also redefine their shapes entirely. If you want to learn how to manipulate the geometry of a three-dimensional object via shaders alone, this article is for you. In this article, you will learn the following:

Extruding your models
Implementing a snow shader
Implementing a volumetric explosion

In this article, we will explain that 3D models are not just a collection of triangles. Each vertex can contain data which is essential for correctly rendering the model itself. This article will explore how to access this information in order to use it in a shader. We will also explore how the geometry of an object can be deformed simply using Cg code.

Extruding your models

One of the biggest problems in games is repetition. Creating new content is a time-consuming task, and when you have to face a thousand enemies, the chances are that they will all look the same. A relatively cheap technique to add variation to your models is using a shader that alters their basic geometry. This recipe will show a technique called normal extrusion, which can be used to create a chubbier or skinnier version of a model, as shown in the following image with the soldier from the Unity camp (Demo Gameplay):

Getting ready

For this recipe, we need to have access to the shader used by the model that you want to alter. Once you have it, we will duplicate it so that we can edit it safely. It can be done as follows:

1. Find the shader that your model is using and, once selected, duplicate it by pressing Ctrl+D.
2. Duplicate the original material of the model and assign the cloned shader to it.
3. Assign the new material to your model and start editing it.

For this effect to work, your model should have normals.

How to do it…

To create this effect, start by modifying the duplicated shader as shown in the following steps:

1. Let's start by adding a property to our shader, which will be used to modulate its extrusion. The range presented here goes from -1 to +1; however, you might have to adjust it according to your own needs:

_Amount ("Extrusion Amount", Range(-1,+1)) = 0

2. Couple the property with its respective variable:

float _Amount;

3. Change the pragma directive so that it now uses a vertex modifier. You can do this by adding vertex:function_name at the end of it. In our case, we have called the function vert:

#pragma surface surf Lambert vertex:vert

4. Add the following vertex modifier:

void vert (inout appdata_full v) {
  v.vertex.xyz += v.normal * _Amount;
}

The shader is now ready; you can use the Extrusion Amount slider in the material's Inspector to make your model skinnier or chubbier.

How it works…

Surface shaders work in two steps: the surface function and the vertex modifier. The vertex modifier takes the data structure of a vertex (which is usually called appdata_full) and applies a transformation to it. This gives us the freedom to do virtually anything with the geometry of our model. We signal the graphics processing unit (GPU) that such a function exists by adding vertex:vert to the pragma directive of the surface shader.
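Assembled, the pieces fit together roughly as follows. This is a minimal sketch rather than the book's exact shader: the shader path, base texture, and surface function are illustrative stand-ins for whatever your duplicated shader already contains.

Shader "Custom/NormalExtrusion" {
  Properties {
    _MainTex ("Base (RGB)", 2D) = "white" {}
    _Amount ("Extrusion Amount", Range(-1,+1)) = 0
  }
  SubShader {
    Tags { "RenderType"="Opaque" }
    CGPROGRAM
    //The vertex:vert part tells the GPU that a vertex modifier exists
    #pragma surface surf Lambert vertex:vert

    sampler2D _MainTex;
    float _Amount;

    struct Input {
      float2 uv_MainTex;
    };

    //Runs once per vertex, before the surface function
    void vert (inout appdata_full v) {
      v.vertex.xyz += v.normal * _Amount;
    }

    //Standard textured surface function
    void surf (Input IN, inout SurfaceOutput o) {
      o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
    }
    ENDCG
  }
  FallBack "Diffuse"
}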
One of the simplest yet most effective techniques that can be used to alter the geometry of a model is called normal extrusion. It works by projecting a vertex along its normal direction. This is done by the following line of code:

v.vertex.xyz += v.normal * _Amount;

The position of a vertex is displaced by _Amount units toward the vertex normal. If _Amount gets too high, the results can be quite unpleasant. However, you can add a lot of variation to your models with smaller values.

There's more…

If you have multiple enemies and you want each one to have its own weight, you have to create a different material for each of them. This is necessary as materials are normally shared between models and changing one will change all of them. There are several ways in which you can do this; the quickest one is to create a script that automatically does it for you. The following script, once attached to an object with a Renderer, will duplicate its first material and set the _Amount property automatically:

using UnityEngine;

public class NormalExtruder : MonoBehaviour {

  [Range(-0.0001f, 0.0001f)]
  public float amount = 0;

  // Use this for initialization
  void Start () {
    Material material = GetComponent<Renderer>().sharedMaterial;
    Material newMaterial = new Material(material);
    newMaterial.SetFloat("_Amount", amount);
    GetComponent<Renderer>().material = newMaterial;
  }
}

Adding extrusion maps

This technique can actually be improved even further. We can add an extra texture (or use the alpha channel of the main one) to indicate the amount of extrusion. This allows better control over which parts are raised or lowered. The following code shows how it is possible to achieve such an effect:

sampler2D _ExtrusionTex;

void vert(inout appdata_full v) {
  float4 tex = tex2Dlod (_ExtrusionTex, float4(v.texcoord.xy,0,0));
  float extrusion = tex.r * 2 - 1;
  v.vertex.xyz += v.normal * _Amount * extrusion;
}

The red channel of _ExtrusionTex is used as a multiplying coefficient for the normal extrusion. A value of 0.5 leaves the model unaffected; darker or lighter shades extrude vertices inward or outward, respectively. Note that to sample a texture in a vertex modifier, tex2Dlod should be used instead of tex2D.

In shaders, color channels go from 0 to 1, although sometimes you need to represent negative values as well (such as inward extrusion). When this is the case, treat 0.5 as zero, with smaller values counting as negative and higher values as positive. This is exactly what happens with normals, which are usually encoded in RGB textures. The UnpackNormal() function is used to map a value in the (0,1) range onto the (-1,+1) range. Mathematically speaking, this is equivalent to tex.r * 2 - 1.

Extrusion maps are perfect for zombifying characters by shrinking the skin in order to highlight the shape of the bones underneath. The following image shows how a "healthy" soldier can be transformed into a corpse using a shader and an extrusion map. Compared to the previous example, you can notice how the clothing is unaffected. The shader used in the following image also darkens the extruded regions in order to give an even more emaciated look to the soldier:
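For completeness, _ExtrusionTex also needs to be exposed in the Properties block so that an extrusion map can be assigned from the Inspector. A sketch of the declaration (the display name is illustrative):

_ExtrusionTex ("Extrusion Map", 2D) = "gray" {}

The "gray" default is deliberate: by the 0.5-as-zero convention described above, an unassigned map leaves the model unextruded.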
Implementing a snow shader

The simulation of snow has always been a challenge in games. The vast majority of games simply bake snow directly into the models' textures so that their tops look white. However, what if one of these objects starts rotating? Snow is not just a lick of paint on a surface; it is a proper accumulation of material and it should be treated as such. This recipe will show how to give a snowy look to your models using just a shader. The effect is achieved in two steps. First, a white color is used for all the triangles facing the sky. Second, their vertices are extruded to simulate the effect of snow accumulation. You can see the result in the following image:

Keep in mind that this recipe does not aim to create a photorealistic snow effect. It provides a good starting point; however, it is up to an artist to create the right textures and find the right parameters to make it fit your game.

Getting ready

This effect is purely based on shaders. We will need to do the following:

1. Create a new shader for the snow effect.
2. Create a new material for the shader.
3. Assign the newly created material to the object that you want to be snowy.

How to do it…

To create a snowy effect, open your shader and make the following changes:

1. Replace the properties of the shader with the following ones:

_MainColor("Main Color", Color) = (1.0,1.0,1.0,1.0)
_MainTex("Base (RGB)", 2D) = "white" {}
_Bump("Bump", 2D) = "bump" {}
_Snow("Level of snow", Range(1, -1)) = 1
_SnowColor("Color of snow", Color) = (1.0,1.0,1.0,1.0)
_SnowDirection("Direction of snow", Vector) = (0,1,0)
_SnowDepth("Depth of snow", Range(0,1)) = 0

2. Complete them with their relative variables:

sampler2D _MainTex;
sampler2D _Bump;
float _Snow;
float4 _SnowColor;
float4 _MainColor;
float4 _SnowDirection;
float _SnowDepth;

3. Replace the Input structure with the following:

struct Input {
  float2 uv_MainTex;
  float2 uv_Bump;
  float3 worldNormal;
  INTERNAL_DATA
};

4. Replace the surface function with the following one. It will color the snowy parts of the model white:

void surf(Input IN, inout SurfaceOutputStandard o) {
  half4 c = tex2D(_MainTex, IN.uv_MainTex);
  o.Normal = UnpackNormal(tex2D(_Bump, IN.uv_Bump));
  if (dot(WorldNormalVector(IN, o.Normal), _SnowDirection.xyz) >= _Snow)
    o.Albedo = _SnowColor.rgb;
  else
    o.Albedo = c.rgb * _MainColor;
  o.Alpha = 1;
}

5. Configure the pragma directive so that it uses a vertex modifier:

#pragma surface surf Standard vertex:vert

6. Add the following vertex modifier, which extrudes the vertices covered in snow:

void vert(inout appdata_full v) {
  float4 sn = mul(UNITY_MATRIX_IT_MV, _SnowDirection);
  if (dot(v.normal, sn.xyz) >= _Snow)
    v.vertex.xyz += (sn.xyz + v.normal) * _SnowDepth * _Snow;
}

You can now use the material's Inspector to select how much of your model is going to be covered and how thick the snow should be.
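If you want the snow to build up gradually at runtime instead of being set by hand, a small driver script can animate the _Snow property. The following is only a hedged sketch; the class name and accumulation speed are illustrative, not part of the original recipe:

using UnityEngine;

public class SnowAccumulator : MonoBehaviour {

  public float accumulationSpeed = 0.05f; //change in _Snow per second

  private Material material;

  void Start () {
    //Accessing .material instantiates a per-object copy, so other
    //objects sharing the same material are unaffected
    material = GetComponent<Renderer>().material;
  }

  void Update () {
    //_Snow = 1 covers only triangles perfectly aligned with the snow
    //direction; lowering it toward -1 covers more and more of the model
    float snow = material.GetFloat("_Snow");
    snow = Mathf.Max(snow - accumulationSpeed * Time.deltaTime, -1f);
    material.SetFloat("_Snow", snow);
  }
}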
How it works…

This shader works in two steps.

Coloring the surface

The first step alters the color of the triangles that are facing the sky. It affects all the triangles whose normal direction is similar to _SnowDirection. Comparing unit vectors can be done using the dot product. When two vectors are orthogonal, their dot product is zero; it is one (or minus one) when they are parallel to each other. The _Snow property is used to decide how aligned they should be in order to be considered facing the sky.

If you look closely at the surface function, you can see that we are not directly dotting the normal and the snow direction. This is because they are usually defined in different spaces. The snow direction is expressed in world coordinates, while the object normals are usually relative to the model itself. If we rotate the model, its normals will not change, which is not what we want. To fix this, we need to convert the normals from their object coordinates to world coordinates. This is done with the WorldNormalVector() function, as follows:

if (dot(WorldNormalVector(IN, o.Normal), _SnowDirection.xyz) >= _Snow)
  o.Albedo = _SnowColor.rgb;
else
  o.Albedo = c.rgb * _MainColor;

This shader simply colors the model white; a more advanced one should initialize the SurfaceOutputStandard structure with textures and parameters from a realistic snow material.

Altering the geometry

The second effect of this shader alters the geometry to simulate the accumulation of snow. Firstly, we identify the triangles that have been colored white by testing the same condition used in the surface function. This time, unfortunately, we cannot rely on WorldNormalVector(), as the SurfaceOutputStandard structure is not yet initialized in the vertex modifier. We use this other method instead, which converts _SnowDirection into object coordinates:

float4 sn = mul(UNITY_MATRIX_IT_MV, _SnowDirection);

Then, we can extrude the geometry to simulate the accumulation of snow:

if (dot(v.normal, sn.xyz) >= _Snow)
  v.vertex.xyz += (sn.xyz + v.normal) * _SnowDepth * _Snow;

Once again, this is a very basic effect. One could use a texture map to control the accumulation of snow more precisely or to give it a peculiar, uneven look.

See also

If you need high quality snow effects and props for your game, you can also check the following resources in the Asset Store of Unity:

Winter Suite ($30): A much more sophisticated version of the snow shader presented in this recipe can be found at https://www.assetstore.unity3d.com/en/#!/content/13927
Winter Pack ($60): A very realistic set of props and materials for snowy environments can be found at https://www.assetstore.unity3d.com/en/#!/content/13316

Implementing a volumetric explosion

The art of game development is a clever trade-off between realism and efficiency. This is particularly true for explosions; they are at the heart of many games, yet the physics behind them is often beyond the computational power of modern machines. Explosions are essentially nothing more than hot balls of gas; hence, the only way to correctly simulate them is by integrating a fluid simulation into your game. As you can imagine, this is infeasible for runtime applications, and many games simply fake them with particles. When an object explodes, it is common to simply instantiate many fire, smoke, and debris particles that, together, can give a believable result. This approach, unfortunately, is not very realistic and is easy to spot. There is an intermediate technique that can be used to achieve a much more realistic effect: volumetric explosions. The idea behind this concept is that explosions are not treated like a bunch of particles anymore; they are evolving three-dimensional objects, not just flat two-dimensional textures.

Getting ready

Start this recipe with the following steps:

1. Create a new shader for this effect.
2. Create a new material to host the shader.
3. Attach the material to a sphere. You can create one directly from the editor by navigating to GameObject | 3D Object | Sphere.

This recipe works well with the standard Unity Sphere; however, if you need big explosions, you might need to use a more high-poly sphere. In fact, a vertex function can only modify the vertices of a mesh. All the other points will be interpolated using the positions of the nearby vertices. Fewer vertices mean lower resolution for your explosions.
For this recipe, you will also need a ramp texture that contains, in a gradient, all the colors that your explosion will have. You can create such a texture using GIMP or Photoshop. The following is the one used for this recipe:

Once you have the picture, import it to Unity. Then, from its Inspector, make sure the Filter Mode is set to Bilinear and the Wrap Mode to Clamp. These two settings make sure that the ramp texture is sampled smoothly.

Lastly, you will need a noise texture. You can find many freely available noise textures on the Internet. The most commonly used ones are generated using Perlin noise.

How to do it…

This effect works in two steps: a vertex function to change the geometry and a surface function to give it the right color. The steps are as follows:

1. Add the following properties to the shader:

_RampTex("Color Ramp", 2D) = "white" {}
_RampOffset("Ramp offset", Range(-0.5,0.5))= 0
_NoiseTex("Noise tex", 2D) = "gray" {}
_Period("Period", Range(0,1)) = 0.5
_Amount("_Amount", Range(0, 1.0)) = 0.1
_ClipRange("ClipRange", Range(0,1)) = 1

2. Add their relative variables so that the Cg code of the shader can actually access them:

sampler2D _RampTex;
float _RampOffset;
sampler2D _NoiseTex;
float _Period;
float _Amount;
float _ClipRange;

3. Change the Input structure so that it receives the UV data of the noise texture:

struct Input {
  float2 uv_NoiseTex;
};

4. Add the following vertex function:

void vert(inout appdata_full v) {
  float3 disp = tex2Dlod(_NoiseTex, float4(v.texcoord.xy,0,0));
  float time = sin(_Time[3] *_Period + disp.r*10);
  v.vertex.xyz += v.normal * disp.r * _Amount * time;
}

5. Add the following surface function:

void surf(Input IN, inout SurfaceOutput o) {
  float3 noise = tex2D(_NoiseTex, IN.uv_NoiseTex);
  float n = saturate(noise.r + _RampOffset);
  clip(_ClipRange - n);
  half4 c = tex2D(_RampTex, float2(n,0.5));
  o.Albedo = c.rgb;
  o.Emission = c.rgb*c.a;
}

6. Specify the vertex function in the pragma directive, adding the nolightmap parameter to prevent Unity from adding realistic lighting to our explosion:

#pragma surface surf Lambert vertex:vert nolightmap

7. The last step is to select the material and attach the two textures to their relative slots in its Inspector.

This is an animated material, meaning that it evolves over time. You can watch the material changing in the editor by clicking on Animated Materials from the Scene window:

How it works

If you are reading this recipe, you are already familiar with how surface shaders and vertex modifiers work. The main idea behind this effect is to alter the geometry of the sphere in a seemingly chaotic way, exactly as happens in a real explosion. The following image shows how such an explosion looks in the editor. You can see that the original mesh has been heavily deformed:

The vertex function is a variant of the technique called normal extrusion. The difference here is that the amount of extrusion is determined by both the time and the noise texture. When you need a random number in Unity, you can rely on the Random.Range() function. There is no standard way to get random numbers within a shader, so the easiest way is to sample a noise texture.
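If you don't have a suitable noise texture at hand, you can also bake one with Unity's built-in Mathf.PerlinNoise. The following is a rough sketch under stated assumptions: the class name, resolution, scale, and assigning the texture at Start are all illustrative choices, not part of the original recipe.

using UnityEngine;

public class NoiseTextureGenerator : MonoBehaviour {

  public int size = 256;   //texture resolution
  public float scale = 8f; //how many noise "cells" across the texture

  void Start () {
    Texture2D noiseTex = new Texture2D(size, size, TextureFormat.RGBA32, false);
    for (int y = 0; y < size; y++) {
      for (int x = 0; x < size; x++) {
        //Mathf.PerlinNoise returns a value in [0,1]
        float n = Mathf.PerlinNoise(x * scale / size, y * scale / size);
        noiseTex.SetPixel(x, y, new Color(n, n, n, 1f));
      }
    }
    noiseTex.Apply();
    //Feed the generated texture to the explosion material
    GetComponent<Renderer>().material.SetTexture("_NoiseTex", noiseTex);
  }
}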
Back in the vertex function, the noise is combined with the shader's time; take the following line only as an example:

float time = sin(_Time[3] *_Period + disp.r*10);

The built-in _Time[3] variable is used to get the current time from the shader, and the red channel of the disp.r noise texture is used to make sure that each vertex moves independently. The sin() function makes the vertices go up and down, simulating the chaotic behavior of an explosion. Then, the normal extrusion takes place:

v.vertex.xyz += v.normal * disp.r * _Amount * time;

You should play with these numbers and variables until you find a pattern of movement that you are happy with.

The last part of the effect is achieved by the surface function. Here, the noise texture is used to sample a random color from the ramp texture. However, there are two more aspects worth noticing. The first one is the introduction of _RampOffset. Its usage forces the explosion to sample colors from the left or right side of the texture. With positive values, the surface of the explosion tends to show more grey tones, which is exactly what happens when it is dissolving. You can use _RampOffset to determine how much fire or smoke there should be in your explosion.

The second aspect introduced in the surface function is the use of clip(). The clip() function clips (removes) pixels from the rendering pipeline. When invoked with a negative value, the current pixel is not drawn. This effect is controlled by _ClipRange, which determines which pixels of the volumetric explosion are going to be transparent. By controlling both _RampOffset and _ClipRange, you have full control over how the explosion behaves and dissolves.

There's more…

The shader presented in this recipe makes a sphere look like an explosion. If you really want to use it, you should couple it with some scripts in order to get the most out of it. The best thing to do is to create an explosion object and turn it into a prefab so that you can reuse it every time you need it. You can do this by dragging the sphere back into the Project window. Once this is done, you can create as many explosions as you want using the Instantiate() function.

However, it is worth noticing that all the objects with the same material share the same look. If you have multiple explosions at the same time, they should not use the same material. When you are instantiating a new explosion, you should also duplicate its material. You can do this easily with the following piece of code:

GameObject explosion = Instantiate(explosionPrefab) as GameObject;
Renderer renderer = explosion.GetComponent<Renderer>();
Material material = new Material(renderer.sharedMaterial);
renderer.material = material;

Lastly, if you are going to use this shader in a realistic way, you should attach a script to it that changes its size, _RampOffset, or _ClipRange according to the type of explosion you want to recreate.
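Such a driver script is not given in the recipe, so the following is only a hedged sketch of what it might look like; the class name, duration, and interpolation values are all assumptions to be tuned for your game:

using UnityEngine;

public class ExplosionController : MonoBehaviour {

  public float duration = 2f; //lifetime of the explosion, in seconds

  private Material material;
  private float elapsed;

  void Start () {
    //Per-explosion material copy, as discussed above
    Renderer rend = GetComponent<Renderer>();
    material = new Material(rend.sharedMaterial);
    rend.material = material;
  }

  void Update () {
    elapsed += Time.deltaTime;
    float t = Mathf.Clamp01(elapsed / duration);
    //Grow the fireball, shift the ramp from fire toward smoke,
    //and dissolve the surface as the explosion burns out
    transform.localScale = Vector3.one * Mathf.Lerp(1f, 3f, t);
    material.SetFloat("_RampOffset", Mathf.Lerp(-0.5f, 0.5f, t));
    material.SetFloat("_ClipRange", Mathf.Lerp(1f, 0f, t));
    if (t >= 1f) {
      Destroy(gameObject);
    }
  }
}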
See also

A lot more can be done to make explosions realistic. The approach presented in this recipe only creates an empty shell; the explosion in it is actually hollow. An easy trick to improve it is to create particles inside it. However, you can only go so far with this. The short movie The Butterfly Effect (http://unity3d.com/pages/butterfly), created by Unity Technologies in collaboration with Passion Pictures and Nvidia, is the perfect example. It is based on the same concept of altering the geometry of a sphere; however, it renders it with a technique called volume ray casting. In a nutshell, this renders the geometry as if it were a complete volume. You can see the following image as an example:

If you are looking for high quality explosions, refer to Pyro Technix (https://www.assetstore.unity3d.com/en/#!/content/16925) on the Asset Store. It includes volumetric explosions and couples them with realistic shockwaves.

Summary

In this article, we saw recipes to extrude models and to implement a snow shader and a volumetric explosion.

Resources for Article:

Further resources on this subject:
Lights and Effects [article]
Looking Back, Looking Forward [article]
Animation features in Unity 5 [article]