Advanced Data Access Patterns

Packt
11 Aug 2015
25 min read
In this article by Suhas Chatekar, author of the book Learning NHibernate 4, we will dig deeper into that statement and try to understand what those downsides are and what can be done about them. In our attempt to address the downsides of the repository pattern, we will present two data access patterns, namely the specification pattern and the query object pattern. The specification pattern is a pattern adopted into the data access layer from a general-purpose pattern used for effectively filtering in-memory data. Before we begin, let me reiterate: the repository pattern is not a bad or wrong choice in every situation. If you are building a small and simple application involving a handful of entities, then the repository pattern can serve you well. But if you are building complex domain logic with intricate database interaction, then a repository may not do justice to your code. The patterns presented here can be used in both simple and complex applications, and if you feel that the repository is doing the job perfectly, then there is no need to move away from it.

(For more resources related to this topic, see here.)

Problems with the repository pattern

A lot has been written all over the Internet about what is wrong with the repository pattern. A simple Google search will give you a lot of interesting articles to read and ponder. We will spend some time trying to understand the problems introduced by the repository pattern.

Generalization

FindAll takes the name of the employee as input, along with some other parameters required for performing the search. When we started putting together a repository, we said that Repository<T> is a common repository class that can be used for any entity. But now FindAll takes a parameter that is only available on Employee, thus locking the implementation of FindAll to the Employee entity only. In order to keep the repository reusable by other entities, we would need to part ways with the common Repository<T> class and implement a more specific EmployeeRepository class with Employee-specific querying methods. This fixes the immediate problem but introduces another one. The new EmployeeRepository breaks the contract offered by IRepository<T>, as the FindAll method cannot be pushed onto the IRepository<T> interface. We would need to add a new interface, IEmployeeRepository. Do you notice where this is going? You would end up implementing a lot of repository classes with complex inheritance relationships between them. While this may seem to work, I have found that there are better ways of solving this problem.

Unclear and confusing contract

What happens if there is a need to query employees by a different criterion for a different business requirement? Say, we now need to fetch a single Employee instance by its employee number. Even if we ignore the above issue and are ready to add a repository class per entity, we would need to add a method that is specific to fetching the Employee instance matching the employee number. This adds another dimension to the code maintenance problem. Imagine how many such methods we would end up adding for a complex domain, every time someone needs to query an entity using a new criterion. Having several methods on the repository contract that query the same entity using different criteria makes the contract less clear and confusing for new developers. Such a pattern also makes it difficult to reuse code, even when two methods differ only slightly from each other.
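To make the preceding two problems concrete, the following is a minimal sketch of the kind of code being described. The IRepository<T>, Repository<T>, FindAll, and Employee names come from the text; the extra search parameters, the IEmployeeRepository interface, and the use of NHibernate's ISession inside the repository are illustrative assumptions of mine, not code from the book.

using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq;

// A generic contract that works for any entity...
public interface IRepository<T>
{
    void Save(T entity);
    T GetById(int id);
}

// ...until a query needs Employee-only data. FindAll cannot be pushed onto
// IRepository<T>, so a per-entity interface appears (illustrative signature).
public interface IEmployeeRepository : IRepository<Employee>
{
    IEnumerable<Employee> FindAll(string name, int pageSize, int pageNumber);
}

public class EmployeeRepository : IEmployeeRepository
{
    private readonly ISession session;

    public EmployeeRepository(ISession session)
    {
        this.session = session;
    }

    public void Save(Employee entity) { session.Save(entity); }

    public Employee GetById(int id) { return session.Get<Employee>(id); }

    public IEnumerable<Employee> FindAll(string name, int pageSize, int pageNumber)
    {
        // The query itself is unremarkable; the problem is that every new
        // criterion forces another method here or yet another interface.
        return session.Query<Employee>()
                      .Where(e => e.Firstname == name)
                      .Skip(pageSize * pageNumber)
                      .Take(pageSize)
                      .ToList();
    }
}

Fetching an employee by employee number would mean yet another method on IEmployeeRepository, which is exactly the sprawl described above.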
Leaky abstraction In order to make methods on repositories reusable in different situations, lot of developers tend to add a single method on repository that does not take any input and return an IQueryable<T> by calling ISession.Query<T> inside it, as shown next: public IQueryable<T> FindAll() {    return session.Query<T>(); } IQueryable<T> returned by this method can then be used to construct any query that you want outside of repository. This is a classic case of leaky abstraction. Repository is supposed to abstract away any concerns around querying the database, but now what we are doing here is returning an IQueryable<T> to the consuming code and asking it to build the queries, thus leaking the abstraction that is supposed to be hidden into repository. IQueryable<T> returned by the preceding method holds an instance of ISession that would be used to ultimately interact with database. Since repository has no control over how and when this IQueryable would invoke database interaction, you might get in trouble. If you are using "session per request" kind of pattern then you are safeguarded against it but if you are not using that pattern for any reason then you need to watch out for errors due to closed or disposed session objects. God object anti-pattern A god object is an object that does too many things. Sometimes, there is a single class in an application that does everything. Such an implementation is almost always bad as it majorly breaks the famous single responsibility principle (SRP) and reduces testability and maintainability of code. A lot can be written about SRP and god object anti-pattern but since it is not the primary topic, I would leave the topic with underscoring the importance of staying away from god object anti-pattern. Avid readers can Google on the topic if they are interested. Repositories by nature tend to become single point of database interaction. Any new database interaction goes through repository. Over time, repositories grow organically with large number of methods doing too many things. You may spot the anti-pattern and decide to break the repository into multiple small repositories but the original single repository would be tightly integrated with your code in so many places that splitting it would be a difficult job. For a contained and trivial domain model, repository pattern can be a good choice. So do not abandon repositories entirely. It is around complex and changing domain that repositories start exhibiting the problems just discussed. You might still argue that repository is an unneeded abstraction and we can very well use NHibernate directly for a trivial domain model. But I would caution against any design that uses NHibernate directly from domain or domain services layer. No matter what design I use for data access, I would always adhere to "explicitly declare capabilities required" principle. The abstraction that offers required capability can be a repository interface or some other abstractions that we would learn. Specification pattern Specification pattern is a reusable and object-oriented way of applying business rules on domain entities. The primary use of specification pattern is to select subset of entities from a larger collection of entities based on some rules. An important characteristic of specification pattern is combining multiple rules by chaining them together. Specification pattern was in existence before ORMs and other data access patterns had set their feet in the development community. 
The original form of specification pattern dealt with in-memory collections of entities. The pattern was then adopted to work with ORMs such as NHibernate as people started seeing the benefits that specification pattern could bring about. We would first discuss specification pattern in its original form. That would give us a good understanding of the pattern. We would then modify the implementation to make it fit with NHibernate. Specification pattern in its original form Let's look into an example of specification pattern in its original form. A specification defines a rule that must be satisfied by domain objects. This can be generalized using an interface definition, as follows: public interface ISpecification<T> { bool IsSatisfiedBy(T entity); } ISpecification<T> defines a single method IsSatisifedBy. This method takes the entity instance of type T as input and returns a Boolean value depending on whether the entity passed satisfies the rule or not. If we were to write a rule for employees living in London then we can implement a specification as follows: public class EmployeesLivingIn : ISpecification<Employee> { public bool IsSatisfiedBy(Employee entity) {    return entity.ResidentialAddress.City == "London"; } } The EmployeesLivingIn class implements ISpecification<Employee> telling us that this is a specification for the Employee entity. This specification compares the city from the employee's ResidentialAddress property with literal string "London" and returns true if it matches. You may be wondering why I have named this class as EmployeesLivingIn. Well, I had some refactoring in mind and I wanted to make my final code read nicely. Let's see what I mean. We have hardcoded literal string "London" in the preceding specification. This effectively stops this class from being reusable. What if we need a specification for all employees living in Paris? Ideal thing to do would be to accept "London" as a parameter during instantiation of this class and then use that parameter value in the implementation of the IsSatisfiedBy method. Following code listing shows the modified code: public class EmployeesLivingIn : ISpecification<Employee> { private readonly string city;   public EmployeesLivingIn(string city) {    this.city = city; }   public bool IsSatisfiedBy(Employee entity) {    return entity.ResidentialAddress.City == city; } } This looks good without any hardcoded string literals. Now if I wanted my original specification for employees living in London then following is how I could build it: var specification = new EmployeesLivingIn("London"); Did you notice how the preceding code reads in plain English because of the way class is named? Now, let's see how to use this specification class. Usual scenario where specifications are used is when you have got a list of entities that you are working with and you want to run a rule and find out which of the entities in the list satisfy that rule. Following code listing shows a very simple use of the specification we just implemented: List<Employee> employees = //Loaded from somewhere List<Employee> employeesLivingInLondon = new List<Employee>(); var specification = new EmployeesLivingIn("London");   foreach(var employee in employees) { if(specification.IsSatisfiedBy(employee)) {    employeesLivingInLondon.Add(employee); } } We have a list of employees loaded from somewhere and we want to filter this list and get another list comprising of employees living in London. 
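Because the rule now lives in its own object, the filtering loop above can be written once for every entity and every specification. The following small extension method is my own addition rather than code from the book, but it follows directly from the ISpecification<T> contract just shown:

using System.Collections.Generic;
using System.Linq;

public static class SpecificationExtensions
{
    // Filters any in-memory collection with any specification, so the
    // foreach loop does not need to be repeated for each entity and rule.
    public static IEnumerable<T> Satisfying<T>(this IEnumerable<T> source,
                                               ISpecification<T> specification)
    {
        return source.Where(specification.IsSatisfiedBy);
    }
}

With this in place, the earlier example reduces to employees.Satisfying(new EmployeesLivingIn("London")).ToList().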
Till this point, the only benefit we have had from specification pattern is that we have managed to encapsulate the rule into a specification class which can be reused anywhere now. For complex rules, this can be very useful. But for simple rules, specification pattern may look like lot of plumbing code unless we overlook the composability of specifications. Most power of specification pattern comes from ability to chain multiple rules together to form a complex rule. Let's write another specification for employees who have opted for any benefit: public class EmployeesHavingOptedForBenefits : ISpecification<Employee> { public bool IsSatisfiedBy(Employee entity) {    return entity.Benefits.Count > 0; } } In this rule, there is no need to supply any literal value from outside so the implementation is quite simple. We just check if the Benefits collection on the passed employee instance has count greater than zero. You can use this specification in exactly the same way as earlier specification was used. Now if there is a need to apply both of these specifications to an employee collection, then very little modification to our code is needed. Let's start with adding an And method to the ISpecification<T> interface, as shown next: public interface ISpecification<T> { bool IsSatisfiedBy(T entity); ISpecification<T> And(ISpecification<T> specification); } The And method accepts an instance of ISpecification<T> and returns another instance of the same type. As you would have guessed, the specification that is returned from the And method would effectively perform a logical AND operation between the specification on which the And method is invoked and specification that is passed into the And method. The actual implementation of the And method comes down to calling the IsSatisfiedBy method on both the specification objects and logically ANDing their results. Since this logic does not change from specification to specification, we can introduce a base class that implements this logic. All specification implementations can then derive from this new base class. Following is the code for the base class: public abstract class Specification<T> : ISpecification<T> { public abstract bool IsSatisfiedBy(T entity);   public ISpecification<T> And(ISpecification<T> specification) {    return new AndSpecification<T>(this, specification); } } We have marked Specification<T> as abstract as this class does not represent any meaningful business specification and hence we do not want anyone to inadvertently use this class directly. Accordingly, the IsSatisfiedBy method is marked abstract as well. In the implementation of the And method, we are instantiating a new class AndSepcification. This class takes two specification objects as inputs. We pass the current instance and one that is passed to the And method. The definition of AndSpecification is very simple. public class AndSpecification<T> : Specification<T> { private readonly Specification<T> specification1; private readonly ISpecification<T> specification2;   public AndSpecification(Specification<T> specification1, ISpecification<T> specification2) {    this.specification1 = specification1;    this.specification2 = specification2; }   public override bool IsSatisfiedBy(T entity) {    return specification1.IsSatisfiedBy(entity) &&    specification2.IsSatisfiedBy(entity); } } AndSpecification<T> inherits from abstract class Specification<T> which is obvious. 
IsSatisfiedBy is simply performing a logical AND operation on the outputs of the ISatisfiedBy method on each of the specification objects passed into AndSpecification<T>. After we change our previous two business specification implementations to inherit from abstract class Specification<T> instead of interface ISpecification<T>, following is how we can chain two specifications using the And method that we just introduced: List<Employee> employees = null; //= Load from somewhere List<Employee> employeesLivingInLondon = new List<Employee>(); var specification = new EmployeesLivingIn("London")                                     .And(new EmployeesHavingOptedForBenefits());   foreach (var employee in employees) { if (specification.IsSatisfiedBy(employee)) {    employeesLivingInLondon.Add(employee); } } There is literally nothing changed in how the specification is used in business logic. The only thing that is changed is construction and chaining together of two specifications as depicted in bold previously. We can go on and implement other chaining methods but point to take home here is composability that the specification pattern offers. Now let's look into how specification pattern sits beside NHibernate and helps in fixing some of pain points of repository pattern. Specification pattern for NHibernate Fundamental difference between original specification pattern and the pattern applied to NHibernate is that we had an in-memory list of objects to work with in the former case. In case of NHibernate we do not have the list of objects in the memory. We have got the list in the database and we want to be able to specify rules that can be used to generate appropriate SQL to fetch the records from database that satisfy the rule. Owing to this difference, we cannot use the original specification pattern as is when we are working with NHibernate. Let me show you what this means when it comes to writing code that makes use of specification pattern. A query, in its most basic form, to retrieve all employees living in London would look something as follows: var employees = session.Query<Employee>()                .Where(e => e.ResidentialAddress.City == "London"); The lambda expression passed to the Where method is our rule. We want all the Employee instances from database that satisfy this rule. We want to be able to push this rule behind some kind of abstraction such as ISpecification<T> so that this rule can be reused. We would need a method on ISpecification<T> that does not take any input (there are no entities in-memory to pass) and returns a lambda expression that can be passed into the Where method. Following is how that method could look: public interface ISpecification<T> where T : EntityBase<T> { Expression<Func<T, bool>> IsSatisfied(); } Note the differences from the previous version. We have changed the method name from IsSatisfiedBy to IsSatisfied as there is no entity being passed into this method that would warrant use of word By in the end. This method returns an Expression<Fund<T, bool>>. If you have dealt with situations where you pass lambda expressions around then you know what this type means. If you are new to expression trees, let me give you a brief explanation. Func<T, bool> is a usual function pointer. This pointer specifically points to a function that takes an instance of type T as input and returns a Boolean output. Expression<Func<T, bool>> takes this function pointer and converts it into a lambda expression. 
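If expression trees are new to you, the following tiny standalone program (my own illustration, not from the book) shows the practical difference between the two types: a Func<T, bool> is compiled code that can only be invoked, whereas an Expression<Func<T, bool>> is a data structure describing the lambda, which is what lets a LINQ provider such as NHibernate translate it into SQL.

using System;
using System.Linq.Expressions;

class ExpressionDemo
{
    static void Main()
    {
        // A plain delegate: compiled code, opaque to a LINQ provider.
        Func<int, bool> isEven = n => n % 2 == 0;

        // The same lambda captured as an expression tree: inspectable data.
        Expression<Func<int, bool>> isEvenExpression = n => n % 2 == 0;

        Console.WriteLine(isEven(4));                        // True
        Console.WriteLine(isEvenExpression);                 // n => ((n % 2) == 0)
        Console.WriteLine(isEvenExpression.Compile()(4));    // True
    }
}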
An implementation of this new interface would make things more clear. Next code listing shows the specification for employees living in London written against the new contract: public class EmployeesLivingIn : ISpecification<Employee> { private readonly string city;   public EmployeesLivingIn(string city) {    this.city = city; }   public override Expression<Func<Employee, bool>> IsSatisfied() {    return e => e.ResidentialAddress.City == city; } } There is not much changed here compared to the previous implementation. Definition of IsSatisfied now returns a lambda expression instead of a bool. This lambda is exactly same as the one we used in the ISession example. If I had to rewrite that example using the preceding specification then following is how that would look: var specification = new EmployeeLivingIn("London"); var employees = session.Query<Employee>()                .Where(specification.IsSatisfied()); We now have a specification wrapped in a reusable object that we can send straight to NHibernate's ISession interface. Now let's think about how we can use this from within domain services where we used repositories before. We do not want to reference ISession or any other NHibernate type from domain services as that would break onion architecture. We have two options. We can declare a new capability that can take a specification and execute it against the ISession interface. We can then make domain service classes take a dependency on this new capability. Or we can use the existing IRepository capability and add a method on it which takes the specification and executes it. We started this article with a statement that repositories have a downside, specifically when it comes to querying entities using different criteria. But now we are considering an option to enrich the repositories with specifications. Is that contradictory? Remember that one of the problems with repository was that every time there is a new criterion to query an entity, we needed a new method on repository. Specification pattern fixes that problem. Specification pattern has taken the criterion out of the repository and moved it into its own class so we only ever need a single method on repository that takes in ISpecification<T> and execute it. So using repository is not as bad as it sounds. Following is how the new method on repository interface would look: public interface IRepository<T> where T : EntityBase<T> { void Save(T entity); void Update(int id, Employee employee); T GetById(int id); IEnumerable<T> Apply(ISpecification<T> specification); } The Apply method in bold is the new method that works with specification now. Note that we have removed all other methods that ran various different queries and replaced them with this new method. Methods to save and update the entities are still there. Even the method GetById is there as the mechanism used to get entity by ID is not same as the one used by specifications. So we retain that method. One thing I have experimented with in some projects is to split read operations from write operations. The IRepository interface represents something that is capable of both reading from the database and writing to database. Sometimes, we only need a capability to read from database, in which case, IRepository looks like an unnecessarily heavy object with capabilities we do not need. In such a situation, declaring a new capability to execute specification makes more sense. I would leave the actual code for this as a self-exercise for our readers. 
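The Apply method itself (as opposed to the read-only capability left as an exercise above) could be implemented against ISession roughly as follows. This is a sketch based on the contracts shown in this article, not the book's code: only the members relevant here are shown, and the eager ToList call is my own choice, made so the query runs while the session is still available.

using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq;

public class Repository<T> where T : EntityBase<T>
{
    private readonly ISession session;

    public Repository(ISession session)
    {
        this.session = session;
    }

    public void Save(T entity) { session.Save(entity); }

    public T GetById(int id) { return session.Get<T>(id); }

    // The single querying method: the criterion lives in the specification,
    // not in the repository, so no new method is needed per criterion.
    public IEnumerable<T> Apply(ISpecification<T> specification)
    {
        return session.Query<T>()
                      .Where(specification.IsSatisfied())
                      .ToList();
    }
}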
Specification chaining In the original implementation of specification pattern, chaining was simply a matter of carrying out logical AND between the outputs of the IsSatisfiedBy method on the specification objects involved in chaining. In case of NHibernate adopted version of specification pattern, the end result boils down to the same but actual implementation is slightly more complex than just ANDing the results. Similar to original specification pattern, we would need an abstract base class Specification<T> and a specialized AndSepcificatin<T> class. I would just skip these details. Let's go straight into the implementation of the IsSatisifed method on AndSpecification where actual logical ANDing happens. public override Expression<Func<T, bool>> IsSatisfied() { var p = Expression.Parameter(typeof(T), "arg1"); return Expression.Lambda<Func<T, bool>>(Expression.AndAlso(          Expression.Invoke(specification1.IsSatisfied(), p),          Expression.Invoke(specification2.IsSatisfied(), p)), p); } Logical ANDing of two lambda expression is not a straightforward operation. We need to make use of static methods available on helper class System.Linq.Expressions.Expression. Let's try to go from inside out. That way it is easier to understand what is happening here. Following is the reproduction of innermost call to the Expression class: Expression.Invoke(specification1.IsSatisfied(), parameterName) In the preceding code, we are calling the Invoke method on the Expression class by passing the output of the IsSatisfied method on the first specification. Second parameter passed to this method is a temporary parameter of type T that we created to satisfy the method signature of Invoke. The Invoke method returns an InvocationExpression which represents the invocation of the lambda expression that was used to construct it. Note that actual lambda expression is not invoked yet. We do the same with second specification in question. Outputs of both these operations are then passed into another method on the Expression class as follows: Expression.AndAlso( Expression.Invoke(specification1.IsSatisfied(), parameterName), Expression.Invoke(specification2.IsSatisfied(), parameterName) ) Expression.AndAlso takes the output from both specification objects in the form of InvocationExpression type and builds a special type called BinaryExpression which represents a logical AND between the two expressions that were passed to it. Next we convert this BinaryExpression into an Expression<Func<T, bool>> by passing it to the Expression.Lambda<Func<T, bool>> method. This explanation is not very easy to follow and if you have never used, built, or modified lambda expressions programmatically like this before, then you would find it very hard to follow. In that case, I would recommend not bothering yourself too much with this. Following code snippet shows how logical ORing of two specifications can be implemented. Note that the code snippet only shows the implementation of the IsSatisfied method. public override Expression<Func<T, bool>> IsSatisfied() { var parameterName = Expression.Parameter(typeof(T), "arg1"); return Expression.Lambda<Func<T, bool>>(Expression.OrElse( Expression.Invoke(specification1.IsSatisfied(), parameterName), Expression.Invoke(specification2.IsSatisfied(), parameterName)), parameterName); } Rest of the infrastructure around chaining is exactly same as the one presented during discussion of original specification pattern. 
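Putting the pieces together, chaining and executing two of the NHibernate-flavoured specifications might look like the following sketch. It assumes that EmployeesHavingOptedForBenefits has been rewritten against the expression-returning contract in the same way as EmployeesLivingIn, and that And on the base class returns the AndSpecification<T> just described:

using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq;

public class EmployeeQueries
{
    public IList<Employee> LondonersWithBenefits(ISession session)
    {
        var specification = new EmployeesLivingIn("London")
                                .And(new EmployeesHavingOptedForBenefits());

        // The combined expression is handed to NHibernate in one go, so the
        // filtering happens in SQL rather than in memory.
        return session.Query<Employee>()
                      .Where(specification.IsSatisfied())
                      .ToList();
    }
}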
I have avoided giving full class definitions here to save space but you can download the code to look at complete implementation. That brings us to end of specification pattern. Though specification pattern is a great leap forward from where repository left us, it does have some limitations of its own. Next, we would look into what these limitations are. Limitations Specification pattern is great and unlike repository pattern, I am not going to tell you that it has some downsides and you should try to avoid it. You should not. You should absolutely use it wherever it fits. I would only like to highlight two limitations of specification pattern. Specification pattern only works with lambda expressions. You cannot use LINQ syntax. There may be times when you would prefer LINQ syntax over lambda expressions. One such situation is when you want to go for theta joins which are not possible with lambda expressions. Another situation is when lambda expressions do not generate optimal SQL. I will show you a quick example to understand this better. Suppose we want to write a specification for employees who have opted for season ticket loan benefit. Following code listing shows how that specification could be written: public class EmployeeHavingTakenSeasonTicketLoanSepcification :Specification<Employee> { public override Expression<Func<Employee, bool>> IsSatisfied() {    return e => e.Benefits.Any(b => b is SeasonTicketLoan); } } It is a very simple specification. Note the use of Any to iterate over the Benefits collection to check if any of the Benefit in that collection is of type SeasonTicketLoan. Following SQL is generated when the preceding specification is run: SELECT employee0_.Id           AS Id0_,        employee0_.Firstname     AS Firstname0_,        employee0_.Lastname     AS Lastname0_,        employee0_.EmailAddress AS EmailAdd5_0_,        employee0_.DateOfBirth   AS DateOfBi6_0_,        employee0_.DateOfJoining AS DateOfJo7_0_,        employee0_.IsAdmin       AS IsAdmin0_,        employee0_.Password     AS Password0_ FROM   Employee employee0_ WHERE EXISTS (SELECT benefits1_.Id FROM   Benefit benefits1_ LEFT OUTER JOIN Leave benefits1_1_ ON benefits1_.Id = benefits1_1_.Id LEFT OUTER JOIN SkillsEnhancementAllowance benefits1_2_ ON benefits1_.Id = benefits1_2_.Id LEFT OUTER JOIN SeasonTicketLoan benefits1_3_ ON benefits1_.Id = benefits1_3_.Id WHERE employee0_.Id = benefits1_.Employee_Id AND CASE WHEN benefits1_1_.Id IS NOT NULL THEN 1      WHEN benefits1_2_.Id IS NOT NULL THEN 2      WHEN benefits1_3_.Id IS NOT NULL THEN 3       WHEN benefits1_.Id IS NOT NULL THEN 0      END = 3) Isn't that SQL too complex? It is not only complex on your eyes but this is not how I would have written the needed SQL in absence of NHibernate. I would have just inner-joined the Employee, Benefit, and SeasonTicketLoan tables to get the records I need. On large databases, the preceding query may be too slow. There are some other such situations where queries written using lambda expressions tend to generate complex or not so optimal SQL. If we use LINQ syntax instead of lambda expressions, then we can get NHibernate to generate just the SQL. Unfortunately, there is no way of fixing this with specification pattern. Summary Repository pattern has been around for long time but suffers through some issues. General nature of its implementation comes in the way of extending repository pattern to use it with complex domain models involving large number of entities. 
The repository contract can be limiting and confusing when there is a need to write complex and very specific queries. Trying to fix these issues with repositories may result in a leaky abstraction, which can bite us later. Moreover, repositories maintained with less care have a tendency to grow into god objects, and maintaining them beyond that point becomes a challenge. The specification pattern and the query object pattern solve these issues on the read side of things. Different applications have different data access requirements. Some applications are write-heavy while others are read-heavy, but only a small number of applications fit into the former category. A large number of applications developed these days are read-heavy. I have worked on applications in which more than 90 percent of the database operations queried data and less than 10 percent actually inserted or updated data. Having this knowledge about the application you are developing can be very useful in determining how you are going to design your data access layer. That brings us to the end of our NHibernate journey. Not quite, but yes, in a way. Resources for Article: Further resources on this subject: NHibernate 3: Creating a Sample Application [article] NHibernate 3.0: Using LINQ Specifications in the data access layer [article] NHibernate 2: Mapping relationships and Fluent Mapping [article]

Matrix and Pixel Manipulation along with Handling Files

Packt
11 Aug 2015
14 min read
In this article, by Daniel Lélis Baggio, author of the book OpenCV 3.0 Computer Vision with Java, you will learn to perform basic operations required in computer vision, such as dealing with matrices, pixels, and opening files for prototype applications. In this article, the following topics will be covered: Basic matrix manipulation Pixel manipulation How to load and display images from files (For more resources related to this topic, see here.) Basic matrix manipulation From a computer vision background, we can see an image as a matrix of numerical values, which represents its pixels. For a gray-level image, we usually assign values ranging from 0 (black) to 255 (white) and the numbers in between show a mixture of both. These are generally 8-bit images. So, each element of the matrix refers to each pixel on the gray-level image, the number of columns refers to the image width, as well as the number of rows refers to the image's height. In order to represent a color image, we usually adopt each pixel as a combination of three basic colors: red, green, and blue. So, each pixel in the matrix is represented by a triplet of colors. It is important to observe that with 8 bits, we get 2 to the power of eight (28), which is 256. So, we can represent the range from 0 to 255, which includes, respectively the values used for black and white levels in 8-bit grayscale images. Besides this, we can also represent these levels as floating points and use 0.0 for black and 1.0 for white. OpenCV has a variety of ways to represent images, so you are able to customize the intensity level through the number of bits considering whether one wants signed, unsigned, or floating point data types, as well as the number of channels. OpenCV's convention is seen through the following expression: CV_<bit_depth>{U|S|F}C(<number_of_channels>) Here, U stands for unsigned, S for signed, and F stands for floating point. For instance, if an 8-bit unsigned single-channel image is required, the data type representation would be CV_8UC1, while a colored image represented by 32-bit floating point numbers would have the data type defined as CV_32FC3. If the number of channels is omitted, it evaluates to 1. We can see the ranges according to each bit depth and data type in the following list: CV_8U: These are the 8-bit unsigned integers that range from 0 to 255 CV_8S: These are the 8-bit signed integers that range from -128 to 127 CV_16U: These are the 16-bit unsigned integers that range from 0 to 65,535 CV_16S: These are the 16-bit signed integers that range from -32,768 to 32,767 CV_32S: These are the 32-bit signed integers that range from -2,147,483,648 to 2,147,483,647 CV_32F: These are the 32-bit floating-point numbers that range from -FLT_MAX to FLT_MAX and include INF and NAN values CV_64F: These are the 64-bit floating-point numbers that range from -DBL_MAX to DBL_MAX and include INF and NAN values You will generally start the project from loading an image, but it is important to know how to deal with these values. Make sure you import org.opencv.core.CvType and org.opencv.core.Mat. Several constructors are available for matrices as well, for instance: Mat image2 = new Mat(480,640,CvType.CV_8UC3); Mat image3 = new Mat(new Size(640,480), CvType.CV_8UC3); Both of the preceding constructors will construct a matrix suitable to fit an image with 640 pixels of width and 480 pixels of height. Note that width is to columns as height is to rows. 
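The following short standalone sketch (mine, not from the article) exercises the convention just described by constructing two matrices of different types and printing their properties. It assumes the OpenCV Java bindings are set up as for the rest of the article, and that CvType.typeToString is available to render the type name:

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.core.Size;

public class MatTypesDemo {
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    public static void main(String[] args) {
        // 8-bit unsigned, single channel: a 480 x 640 grayscale image.
        Mat gray = new Mat(480, 640, CvType.CV_8UC1, new Scalar(0));

        // 32-bit floating point, three channels, filled with mid-gray.
        Mat colorFloat = new Mat(new Size(640, 480), CvType.CV_32FC3,
                                 new Scalar(0.5, 0.5, 0.5));

        // elemSize() is in bytes: 1 for CV_8UC1, 12 for CV_32FC3 (3 x 4 bytes).
        System.out.println(CvType.typeToString(gray.type())
                + " channels=" + gray.channels()
                + " elemSize=" + gray.elemSize());
        System.out.println(CvType.typeToString(colorFloat.type())
                + " channels=" + colorFloat.channels()
                + " elemSize=" + colorFloat.elemSize());
    }
}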
Also pay attention to the constructor with the Size parameter, which expects the width and height order. In case you want to check some of the matrix properties, the methods rows(), cols(), and elemSize() are available: System.out.println(image2 + "rows " + image2.rows() + " cols " + image2.cols() + " elementsize " + image2.elemSize()); The output of the preceding line is: Mat [ 480*640*CV_8UC3, isCont=true, isSubmat=false, nativeObj=0xceeec70, dataAddr=0xeb50090 ]rows 480 cols 640 elementsize 3 The isCont property tells us whether this matrix uses extra padding when representing the image, so that it can be hardware-accelerated in some platforms; however, we won't cover it in detail right now. The isSubmat property refers to fact whether this matrix was created from another matrix and also whether it refers to the data from another matrix. The nativeObj object refers to the native object address, which is a Java Native Interface (JNI) detail, while dataAddr points to an internal data address. The element size is measured in the number of bytes. Another matrix constructor is the one that passes a scalar to be filled as one of its elements. The syntax for this looks like the following: Mat image = new Mat(new Size(3,3), CvType.CV_8UC3, new Scalar(new double[]{128,3,4})); This constructor will initialize each element of the matrix with the triple {128, 3, 4}. A very useful way to print a matrix's contents is using the auxiliary method dump() from Mat. Its output will look similar to the following: [128, 3, 4, 128, 3, 4, 128, 3, 4; 128, 3, 4, 128, 3, 4, 128, 3, 4; 128, 3, 4, 128, 3, 4, 128, 3, 4] It is important to note that while creating the matrix with a specified size and type, it will also immediately allocate memory for its contents. Pixel manipulation Pixel manipulation is often required for one to access pixels in an image. There are several ways to do this and each one has its advantages and disadvantages. A straightforward method to do this is the put(row, col, value) method. For instance, in order to fill our preceding matrix with values {1, 2, 3}, we will use the following code: for(int i=0;i<image.rows();i++){ for(int j=0;j<image.cols();j++){    image.put(i, j, new byte[]{1,2,3}); } } Note that in the array of bytes {1, 2, 3}, for our matrix, 1 stands for the blue channel, 2 for the green, and 3 for the red channel, as OpenCV stores its matrix internally in the BGR (blue, green, and red) format. It is okay to access pixels this way for small matrices. The only problem is the overhead of JNI calls for big images. Remember that even a small 640 x 480 pixel image has 307,200 pixels and if we think about a colored image, it has 921,600 values in a matrix. Imagine that it might take around 50 ms to make an overloaded call for each of the 307,200 pixels. On the other hand, if we manipulate the whole matrix on the Java side and then copy it to the native side in a single call, it will take around 13 ms. If you want to manipulate the pixels on the Java side, perform the following steps: Allocate memory with the same size as the matrix in a byte array. Put the image contents into that array (optional). Manipulate the byte array contents. Make a single put call, copying the whole byte array to the matrix. 
A simple example that will iterate all image pixels and set the blue channel to zero, which means that we will set to zero every element whose modulo is 3 equals zero, that is {0, 3, 6, 9, …}, as shown in the following piece of code: public void filter(Mat image){ int totalBytes = (int)(image.total() * image.elemSize()); byte buffer[] = new byte[totalBytes]; image.get(0, 0,buffer); for(int i=0;i<totalBytes;i++){    if(i%3==0) buffer[i]=0; } image.put(0, 0, buffer); } First, we find out the number of bytes in the image by multiplying the total number of pixels (image.total) with the element size in bytes (image.elemenSize). Then, we build a byte array with that size. We use the get(row, col, byte[]) method to copy the matrix contents in our recently created byte array. Then, we iterate all bytes and check the condition that refers to the blue channel (i%3==0). Remember that OpenCV stores colors internally as {Blue, Green, Red}. We finally make another JNI call to image.put, which copies the whole byte array to OpenCV's native storage. An example of this filter can be seen in the following screenshot, which was uploaded by Mromanchenko, licensed under CC BY-SA 3.0: Be aware that Java does not have any unsigned byte data type, so be careful when working with it. The safe procedure is to cast it to an integer and use the And operator (&) with 0xff. A simple example of this would be int unsignedValue = myUnsignedByte & 0xff;. Now, unsignedValue can be checked in the range of 0 to 255. Loading and displaying images from files Most computer vision applications need to retrieve images from some where. In case you need to get them from files, OpenCV comes with several image file loaders. Unfortunately, some loaders depend on codecs that sometimes aren't shipped with the operating system, which might cause them not to load. From the documentation, we see that the following files are supported with some caveats: Windows bitmaps: *.bmp, *.dib JPEG files: *.jpeg, *.jpg, *.jpe JPEG 2000 files: *.jp2 Portable Network Graphics: *.png Portable image format: *.pbm, *.pgm, *.ppm Sun rasters: *.sr, *.ras TIFF files: *.tiff, *.tif Note that Windows bitmaps, the portable image format, and sun raster formats are supported by all platforms, but the other formats depend on a few details. In Microsoft Windows and Mac OS X, OpenCV can always read the jpeg, png, and tiff formats. In Linux, OpenCV will look for codecs supplied with the OS, as stated by the documentation, so remember to install the relevant packages (do not forget the development files, for example, "libjpeg-dev" in Debian* and Ubuntu*) to get the codec support or turn on the OPENCV_BUILD_3RDPARTY_LIBS flag in CMake, as pointed out in Imread's official documentation. The imread method is supplied to get access to images through files. Use Imgcodecs.imread (name of the file) and check whether dataAddr() from the read image is different from zero to make sure the image has been loaded correctly, that is, the filename has been typed correctly and its format is supported. A simple method to open a file could look like the one shown in the following code. 
Make sure you import org.opencv.imgcodecs.Imgcodecs and org.opencv.core.Mat: public Mat openFile(String fileName) throws Exception{ Mat newImage = Imgcodecs.imread(fileName);    if(newImage.dataAddr()==0){      throw new Exception ("Couldn't open file "+fileName);    } return newImage; } Displaying an image with Swing OpenCV developers are used to a simple cross-platform GUI by OpenCV, which was called as HighGUI, and a handy method called imshow. It constructs a window easily and displays an image within it, which is nice to create quick prototypes. As Java comes with a popular GUI API called Swing, we had better use it. Besides, no imshow method was available for Java until its 2.4.7.0 version was released. On the other hand, it is pretty simple to create such functionality. Let's break down the work in to two classes: App and ImageViewer. The App class will be responsible for loading the file, while ImageViewer will display it. The application's work is simple and will only need to use Imgcodecs's imread method, which is shown as follows: package org.javaopencvbook;   import java.io.File; … import org.opencv.imgcodecs.Imgcodecs;   public class App { static{ System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }   public static void main(String[] args) throws Exception { String filePath = "src/main/resources/images/cathedral.jpg"; Mat newImage = Imgcodecs.imread(filePath); if(newImage.dataAddr()==0){    System.out.println("Couldn't open file " + filePath); } else{    ImageViewer imageViewer = new ImageViewer();    imageViewer.show(newImage, "Loaded image"); } } } Note that the App class will only read an example image file in the Mat object and it will call the ImageViewer method to display it. Now, let's see how the ImageViewer class's show method works: package org.javaopencvbook.util;   import java.awt.BorderLayout; import java.awt.Dimension; import java.awt.Image; import java.awt.image.BufferedImage;   import javax.swing.ImageIcon; import javax.swing.JFrame; import javax.swing.JLabel; import javax.swing.JScrollPane; import javax.swing.UIManager; import javax.swing.UnsupportedLookAndFeelException; import javax.swing.WindowConstants;   import org.opencv.core.Mat; import org.opencv.imgproc.Imgproc;   public class ImageViewer { private JLabel imageView; public void show(Mat image){    show(image, ""); }   public void show(Mat image,String windowName){    setSystemLookAndFeel();       JFrame frame = createJFrame(windowName);               Image loadedImage = toBufferedImage(image);        imageView.setIcon(new ImageIcon(loadedImage));               frame.pack();        frame.setLocationRelativeTo(null);        frame.setVisible(true);    }   private JFrame createJFrame(String windowName) {    JFrame frame = new JFrame(windowName);    imageView = new JLabel();    final JScrollPane imageScrollPane = new JScrollPane(imageView);        imageScrollPane.setPreferredSize(new Dimension(640, 480));        frame.add(imageScrollPane, BorderLayout.CENTER);        frame.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);    return frame; }   private void setSystemLookAndFeel() {    try {      UIManager.setLookAndFeel (UIManager.getSystemLookAndFeelClassName());    } catch (ClassNotFoundException e) {      e.printStackTrace();    } catch (InstantiationException e) {      e.printStackTrace();    } catch (IllegalAccessException e) {      e.printStackTrace();    } catch (UnsupportedLookAndFeelException e) {      e.printStackTrace();    } }   public Image toBufferedImage(Mat matrix){    int type = 
BufferedImage.TYPE_BYTE_GRAY;    if ( matrix.channels() > 1 ) {      type = BufferedImage.TYPE_3BYTE_BGR;    }    int bufferSize = matrix.channels()*matrix.cols()*matrix.rows();    byte [] buffer = new byte[bufferSize];    matrix.get(0,0,buffer); // get all the pixels    BufferedImage image = new BufferedImage(matrix.cols(),matrix.rows(), type);    final byte[] targetPixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();    System.arraycopy(buffer, 0, targetPixels, 0, buffer.length);    return image; }   } Pay attention to the show and toBufferedImage methods. Show will try to set Swing's look and feel to the default native look, which is cosmetic. Then, it will create JFrame with JScrollPane and JLabel inside it. It will then call toBufferedImage, which will convert an OpenCV Mat object to a BufferedImage AWT. This conversion is made through the creation of a byte array that will store matrix contents. The appropriate size is allocated through the multiplication of the number of channels by the number of columns and rows. The matrix.get method puts all the elements into the byte array. Finally, the image's raster data buffer is accessed through the getDataBuffer() and getData() methods. It is then filled with a fast system call to the System.arraycopy method. The resulting image is then assigned to JLabel and then it is easily displayed. Note that this method expects a matrix that is either stored as one channel's unsigned 8-bit or three channel's unsigned 8-bit. In case your image is stored as a floating point, you should convert it using the following code before calling this method, supposing that the image you need to convert is a Mat object called originalImage: Mat byteImage = new Mat(); originalImage.convertTo(byteImage, CvType.CV_8UC3); This way, you can call toBufferedImage from your converted byteImage property. The image viewer can be easily installed in any Java OpenCV project and it will help you to show your images for debugging purposes. The output of this program can be seen in the next screenshot: Summary In this article, we learned dealing with matrices, pixels, and opening files for GUI prototype applications. Resources for Article: Further resources on this subject: Wrapping OpenCV [article] Making subtle color shifts with curves [article] Linking OpenCV to an iOS project [article]

Task Execution with Asio

Packt
11 Aug 2015
20 min read
In this article by Arindam Mukherjee, the author of Learning Boost C++ Libraries, we learn how to execute a task using Boost Asio (pronounced ay-see-oh), a portable library for performing efficient network I/O using a consistent programming model. At its core, Boost Asio provides a task execution framework that you can use to perform operations of any kind. You create your tasks as function objects and post them to a task queue maintained by Boost Asio. You enlist one or more threads to pick up these tasks (function objects) and invoke them. The threads keep picking up tasks, one after the other, till the task queues are empty, at which point the threads do not block but exit.

(For more resources related to this topic, see here.)

IO Service, queues, and handlers

At the heart of Asio is the type boost::asio::io_service. A program uses the io_service interface to perform network I/O and manage tasks. Any program that wants to use the Asio library creates at least one instance of io_service and sometimes more than one. In this section, we will explore the task management capabilities of io_service. Here is the IO Service in action using the obligatory "hello world" example:

Listing 11.1: Asio Hello World

 1 #include <boost/asio.hpp>
 2 #include <iostream>
 3 namespace asio = boost::asio;
 4
 5 int main() {
 6   asio::io_service service;
 7
 8   service.post(
 9     [] {
10       std::cout << "Hello, world!" << '\n';
11     });
12
13   std::cout << "Greetings: \n";
14   service.run();
15 }

We include the convenience header boost/asio.hpp, which includes most of the Asio library that we need for the examples in this article (line 1). All parts of the Asio library are under the namespace boost::asio, so we use a shorter alias for this (line 3). The program itself just prints Hello, world! on the console, but it does so through a task. The program first creates an instance of io_service (line 6) and posts a function object to it, using the post member function of io_service. The function object, in this case defined using a lambda expression, is referred to as a handler. The call to post adds the handler to a queue inside io_service; some thread (including the one that posted the handler) must dispatch the queued handlers, that is, remove them from the queue and call them. The call to the run member function of io_service (line 14) does precisely this. It loops through the handlers in the queue inside io_service, removing and calling each handler. In fact, we can post more handlers to the io_service before calling run, and it would call all the posted handlers. If we did not call run, none of the handlers would be dispatched. The run function blocks until all the handlers in the queue have been dispatched and returns only when the queue is empty. By itself, a handler may be thought of as an independent, packaged task, and Boost Asio provides a great mechanism for dispatching arbitrary tasks as handlers. Note that handlers must be nullary function objects, that is, they should take no arguments.

Asio is a header-only library by default, but programs using Asio need to link at least with boost_system. On Linux, we can use the following command line to build this example:

$ g++ -g listing11_1.cpp -o listing11_1 -lboost_system -std=c++11

Running this program prints the following:

Greetings:
Hello, world!

Note that Greetings: is printed from the main function (line 13) before the call to run (line 14). The call to run ends up dispatching the sole handler in the queue, which prints Hello, world!.
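Before moving on, here is a slightly extended sketch of the same idea (my own, not from the book): several handlers are posted before run is called, nothing executes until run, and run then drains the queue in one go. It builds with the same command line as listing 11.1.

#include <boost/asio.hpp>
#include <iostream>

namespace asio = boost::asio;

int main() {
  asio::io_service service;

  // Post several nullary handlers; none of them runs yet - they are
  // simply queued inside the io_service.
  for (int i = 1; i <= 3; ++i) {
    service.post([i] { std::cout << "task " << i << '\n'; });
  }

  std::cout << "nothing has run yet\n";

  // run() dispatches every queued handler and returns once the queue
  // is empty.
  service.run();

  std::cout << "run() returned\n";
}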
It is also possible for multiple threads to call run on the same I/O object and dispatch handlers concurrently. We will see how this can be useful in the next section. Handler states – run_one, poll, and poll_one While the run function blocks until there are no more handlers in the queue, there are other member functions of io_service that let you process handlers with greater flexibility. But before we look at this function, we need to distinguish between pending and ready handlers. The handlers we posted to the io_service were all ready to run immediately and were invoked as soon as their turn came on the queue. In general, handlers are associated with background tasks that run in the underlying OS, for example, network I/O tasks. Such handlers are meant to be invoked only once the associated task is completed, which is why in such contexts, they are called completion handlers. These handlers are said to be pending until the associated task is awaiting completion, and once the associated task completes, they are said to be ready. The poll member function, unlike run, dispatches all the ready handlers but does not wait for any pending handler to become ready. Thus, it returns immediately if there are no ready handlers, even if there are pending handlers. The poll_one member function dispatches exactly one ready handler if there be one, but does not block waiting for pending handlers to get ready. The run_one member function blocks on a nonempty queue waiting for a handler to become ready. It returns when called on an empty queue, and otherwise, as soon as it finds and dispatches a ready handler. post versus dispatch A call to the post member function adds a handler to the task queue and returns immediately. A later call to run is responsible for dispatching the handler. There is another member function called dispatch that can be used to request the io_service to invoke a handler immediately if possible. If dispatch is invoked in a thread that has already called one of run, poll, run_one, or poll_one, then the handler will be invoked immediately. If no such thread is available, dispatch adds the handler to the queue and returns just like post would. In the following example, we invoke dispatch from the main function and from within another handler: Listing 11.2: post versus dispatch 1 #include <boost/asio.hpp> 2 #include <iostream> 3 namespace asio = boost::asio; 4 5 int main() { 6   asio::io_service service; 7   // Hello Handler – dispatch behaves like post 8   service.dispatch([]() { std::cout << "Hellon"; }); 9 10   service.post( 11     [&service] { // English Handler 12       std::cout << "Hello, world!n"; 13       service.dispatch([] { // Spanish Handler, immediate 14                         std::cout << "Hola, mundo!n"; 15                       }); 16     }); 17   // German Handler 18   service.post([&service] {std::cout << "Hallo, Welt!n"; }); 19   service.run(); 20 } Running this code produces the following output: Hello Hello, world! Hola, mundo! Hallo, Welt! The first call to dispatch (line 8) adds a handler to the queue without invoking it because run was yet to be called on io_service. We call this the Hello Handler, as it prints Hello. This is followed by the two calls to post (lines 10, 18), which add two more handlers. The first of these two handlers prints Hello, world! (line 12), and in turn, calls dispatch (line 13) to add another handler that prints the Spanish greeting, Hola, mundo! (line 14). 
The second of these handlers prints the German greeting, Hallo, Welt (line 18). For our convenience, let's just call them the English, Spanish, and German handlers. This creates the following entries in the queue: Hello Handler English Handler German Handler Now, when we call run on the io_service (line 19), the Hello Handler is dispatched first and prints Hello. This is followed by the English Handler, which prints Hello, World! and calls dispatch on the io_service, passing the Spanish Handler. Since this executes in the context of a thread that has already called run, the call to dispatch invokes the Spanish Handler, which prints Hola, mundo!. Following this, the German Handler is dispatched printing Hallo, Welt! before run returns. What if the English Handler called post instead of dispatch (line 13)? In that case, the Spanish Handler would not be invoked immediately but would queue up after the German Handler. The German greeting Hallo, Welt! would precede the Spanish greeting Hola, mundo!. The output would look like this: Hello Hello, world! Hallo, Welt! Hola, mundo! Concurrent execution via thread pools The io_service object is thread-safe and multiple threads can call run on it concurrently. If there are multiple handlers in the queue, they can be processed concurrently by such threads. In effect, the set of threads that call run on a given io_service form a thread pool. Successive handlers can be processed by different threads in the pool. Which thread dispatches a given handler is indeterminate, so the handler code should not make any such assumptions. In the following example, we post a bunch of handlers to the io_service and then start four threads, which all call run on it: Listing 11.3: Simple thread pools 1 #include <boost/asio.hpp> 2 #include <boost/thread.hpp> 3 #include <boost/date_time.hpp> 4 #include <iostream> 5 namespace asio = boost::asio; 6 7 #define PRINT_ARGS(msg) do { 8   boost::lock_guard<boost::mutex> lg(mtx); 9   std::cout << '[' << boost::this_thread::get_id() 10             << "] " << msg << std::endl; 11 } while (0) 12 13 int main() { 14   asio::io_service service; 15   boost::mutex mtx; 16 17   for (int i = 0; i < 20; ++i) { 18     service.post([i, &mtx]() { 19                         PRINT_ARGS("Handler[" << i << "]"); 20                         boost::this_thread::sleep( 21                               boost::posix_time::seconds(1)); 22                       }); 23   } 24 25   boost::thread_group pool; 26   for (int i = 0; i < 4; ++i) { 27     pool.create_thread([&service]() { service.run(); }); 28   } 29 30   pool.join_all(); 31 } We post twenty handlers in a loop (line 18). Each handler prints its identifier (line 19), and then sleeps for a second (lines 19-20). To run the handlers, we create a group of four threads, each of which calls run on the io_service (line 21) and wait for all the threads to finish (line 24). We define the macro PRINT_ARGS which writes output to the console in a thread-safe way, tagged with the current thread ID (line 7-10). We will use this macro in other examples too. 
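On that last point, a common idiom (my own sketch, not from the book) is to catch exceptions around run inside each worker thread and then call run again, so that one faulty handler does not silently remove a thread from the pool. It needs the same link flags as listing 11.3, which are shown next.

#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <iostream>
#include <stdexcept>

namespace asio = boost::asio;

int main() {
  asio::io_service service;

  service.post([] { throw std::runtime_error("bad handler"); });
  service.post([] { std::cout << "still alive\n"; });

  boost::thread_group pool;
  for (int i = 0; i < 2; ++i) {
    pool.create_thread([&service] {
      // run() re-throws whatever a handler throws; catching it here and
      // looping keeps this worker thread alive for the remaining handlers.
      for (;;) {
        try {
          service.run();
          break;            // run() returned normally: no more work
        } catch (const std::exception& e) {
          std::cerr << "handler failed: " << e.what() << '\n';
        }
      }
    });
  }

  pool.join_all();
}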
To build this example, you must also link against libboost_thread, libboost_date_time, and in Posix environments, with libpthread too: $ g++ -g listing9_3.cpp -o listing9_3 -lboost_system -lboost_thread -lboost_date_time -pthread -std=c++11 One particular run of this program on my laptop produced the following output (with some lines snipped): [b5c15b40] Handler[0] [b6416b40] Handler[1] [b6c17b40] Handler[2] [b7418b40] Handler[3] [b5c15b40] Handler[4] [b6416b40] Handler[5] … [b6c17b40] Handler[13] [b7418b40] Handler[14] [b6416b40] Handler[15] [b5c15b40] Handler[16] [b6c17b40] Handler[17] [b7418b40] Handler[18] [b6416b40] Handler[19] You can see that the different handlers are executed by different threads (each thread ID marked differently). If any of the handlers threw an exception, it would be propagated across the call to the run function on the thread that was executing the handler. io_service::work Sometimes, it is useful to keep the thread pool started, even when there are no handlers to dispatch. Neither run nor run_one blocks on an empty queue. So in order for them to block waiting for a task, we have to indicate, in some way, that there is outstanding work to be performed. We do this by creating an instance of io_service::work, as shown in the following example: Listing 11.4: Using io_service::work to keep threads engaged 1 #include <boost/asio.hpp> 2 #include <memory> 3 #include <boost/thread.hpp> 4 #include <iostream> 5 namespace asio = boost::asio; 6 7 typedef std::unique_ptr<asio::io_service::work> work_ptr; 8 9 #define PRINT_ARGS(msg) do { … ... 14 15 int main() { 16   asio::io_service service; 17   // keep the workers occupied 18   work_ptr work(new asio::io_service::work(service)); 19   boost::mutex mtx; 20 21   // set up the worker threads in a thread group 22   boost::thread_group workers; 23   for (int i = 0; i < 3; ++i) { 24     workers.create_thread([&service, &mtx]() { 25                         PRINT_ARGS("Starting worker thread "); 26                         service.run(); 27                         PRINT_ARGS("Worker thread done"); 28                       }); 29   } 30 31   // Post work 32   for (int i = 0; i < 20; ++i) { 33     service.post( 34       [&service, &mtx]() { 35         PRINT_ARGS("Hello, world!"); 36         service.post([&mtx]() { 37                           PRINT_ARGS("Hola, mundo!"); 38                         }); 39       }); 40   } 41 42 work.reset(); // destroy work object: signals end of work 43   workers.join_all(); // wait for all worker threads to finish 44 } In this example, we create an object of io_service::work wrapped in a unique_ptr (line 18). We associate it with an io_service object by passing to the work constructor a reference to the io_service object. Note that, unlike listing 11.3, we create the worker threads first (lines 24-27) and then post the handlers (lines 33-39). However, the worker threads stay put waiting for the handlers because of the calls to run block (line 26). This happens because of the io_service::work object we created, which indicates that there is outstanding work in the io_service queue. As a result, even after all handlers are dispatched, the threads do not exit. By calling reset on the unique_ptr, wrapping the work object, its destructor is called, which notifies the io_service that all outstanding work is complete (line 42). The calls to run in the threads return and the program exits once all the threads are joined (line 43). 
In listing 11.4, we wrapped the work object in a unique_ptr to destroy it in an exception-safe way at a suitable point in the program. We omitted the definition of PRINT_ARGS here; refer to listing 11.3.

Serialized and ordered execution via strands

Thread pools allow handlers to be run concurrently. This means that handlers that access shared resources need to synchronize access to these resources. We already saw examples of this in listings 11.3 and 11.4, when we synchronized access to std::cout, which is a global object. As an alternative to writing synchronization code in handlers, which can make the handler code more complex, we can use strands. Think of a strand as a subsequence of the task queue with the constraint that no two handlers from the same strand ever run concurrently. The scheduling of other handlers in the queue, which are not in the strand, is not affected by the strand in any way. Let us look at an example of using strands:

Listing 11.5: Using strands

 1 #include <boost/asio.hpp>
 2 #include <boost/thread.hpp>
 3 #include <boost/date_time.hpp>
 4 #include <cstdlib>
 5 #include <iostream>
 6 #include <ctime>
 7 namespace asio = boost::asio;
 8 #define PRINT_ARGS(msg) do {
   ...
13
14 int main() {
15   std::srand(std::time(0));
16   asio::io_service service;
17   asio::io_service::strand strand(service);
18   boost::mutex mtx;
19   size_t regular = 0, on_strand = 0;
20
21   auto workFuncStrand = [&mtx, &on_strand] {
22           ++on_strand;
23           PRINT_ARGS(on_strand << ". Hello, from strand!");
24           boost::this_thread::sleep(
25                       boost::posix_time::seconds(2));
26         };
27
28   auto workFunc = [&mtx, &regular] {
29                   PRINT_ARGS(++regular << ". Hello, world!");
30                   boost::this_thread::sleep(
31                         boost::posix_time::seconds(2));
32                 };
33   // Post work
34   for (int i = 0; i < 15; ++i) {
35     if (rand() % 2 == 0) {
36       service.post(strand.wrap(workFuncStrand));
37     } else {
38       service.post(workFunc);
39     }
40   }
41
42   // set up the worker threads in a thread group
43   boost::thread_group workers;
44   for (int i = 0; i < 3; ++i) {
45     workers.create_thread([&service, &mtx]() {
46                      PRINT_ARGS("Starting worker thread ");
47                       service.run();
48                       PRINT_ARGS("Worker thread done");
49                     });
50   }
51
52   workers.join_all(); // wait for all worker threads to finish
53 }

In this example, we create two handler functions: workFuncStrand (line 21) and workFunc (line 28). The lambda workFuncStrand captures a counter, on_strand, increments it, and prints a message, Hello, from strand!, prefixed with the value of the counter. The function workFunc captures another counter, regular, increments it, and prints Hello, world!, prefixed with the counter. Both pause for 2 seconds before returning.

To define and use a strand, we first create an object of io_service::strand associated with the io_service instance (line 17). Thereafter, we post all handlers that we want to be part of that strand by wrapping them using the wrap member function of the strand (line 36). Alternatively, we can post the handlers to the strand directly by using either the post or the dispatch member function of the strand, as shown in the following snippet:

34   for (int i = 0; i < 15; ++i) {
35     if (rand() % 2 == 0) {
36       strand.post(workFuncStrand);
37     } else {
...
The wrap member function of strand returns a function object, which in turn calls dispatch on the strand to invoke the original handler. Initially, it is this function object, rather than our original handler, that is added to the queue. When duly dispatched, this invokes the original handler. There are no constraints on the order in which these wrapper handlers are dispatched, and therefore, the actual order in which the original handlers are invoked can be different from the order in which they were wrapped and posted.

On the other hand, calling post or dispatch directly on the strand avoids an intermediate handler. Directly posting to a strand also guarantees that the handlers will be dispatched in the same order that they were posted, achieving a deterministic ordering of the handlers in the strand. The dispatch member of the strand may run the handler immediately in the calling thread if it can do so without violating the strand's guarantees; otherwise, the handler is queued. The post member never runs the handler in the calling thread; it simply adds it to the strand and returns.

Note that workFuncStrand increments on_strand without synchronization (line 22), while workFunc increments the counter regular within the PRINT_ARGS macro (line 29), which ensures that the increment happens in a critical section. The workFuncStrand handlers are posted to a strand and are therefore guaranteed to be serialized; hence, there is no need for explicit synchronization. On the flip side, entire functions are serialized via strands, and synchronizing smaller blocks of code is not possible. There is no serialization between the handlers running on the strand and other handlers; therefore, access to global objects, like std::cout, must still be synchronized.

The following is a sample output of running the preceding code:

[b73b6b40] Starting worker thread
[b73b6b40] 0. Hello, world from strand!
[b6bb5b40] Starting worker thread
[b6bb5b40] 1. Hello, world!
[b63b4b40] Starting worker thread
[b63b4b40] 2. Hello, world!
[b73b6b40] 3. Hello, world from strand!
[b6bb5b40] 5. Hello, world!
[b63b4b40] 6. Hello, world!
…
[b6bb5b40] 14. Hello, world!
[b63b4b40] 4. Hello, world from strand!
[b63b4b40] 8. Hello, world from strand!
[b63b4b40] 10. Hello, world from strand!
[b63b4b40] 13. Hello, world from strand!
[b6bb5b40] Worker thread done
[b73b6b40] Worker thread done
[b63b4b40] Worker thread done

There were three distinct threads in the pool, and the handlers from the strand were picked up by two of these three threads: initially by thread ID b73b6b40, and later on by thread ID b63b4b40. This also dispels a frequent misunderstanding that all handlers in a strand are dispatched by the same thread, which is clearly not the case. Different handlers in the same strand may be dispatched by different threads but will never run concurrently.

Summary

Asio is a well-designed library that can be used to write fast, nimble network servers that utilize the most optimal mechanisms for asynchronous I/O available on a system. It is an evolving library and is the basis for a Technical Specification that proposes to add a networking library to a future revision of the C++ Standard. In this article, we learned how to use the Boost Asio library as a task queue manager and leverage Asio's TCP and UDP interfaces to write programs that communicate over the network.

Resources for Article: Further resources on this subject: Animation features in Unity 5 [article] Exploring and Interacting with Materials using Blueprints [article] A Simple Pathfinding Algorithm for a Maze [article]

Creating Functions and Operations

Packt
10 Aug 2015
18 min read
In this article by Alex Libby, author of the book Sass Essentials, we will learn how to use operators or functions to construct a whole site theme from just a handful of colors, or defining font sizes for the entire site from a single value. You will learn how to do all these things in this article. Okay, so let's get started! (For more resources related to this topic, see here.) Creating values using functions and operators Imagine a scenario where you're creating a masterpiece that has taken days to put together, with a stunning choice of colors that has taken almost as long as the building of the project and yet, the client isn't happy with the color choice. What to do? At this point, I'm sure that while you're all smiles to the customer, you'd be quietly cursing the amount of work they've just landed you with, this late on a Friday. Sound familiar? I'll bet you scrap the colors and go back to poring over lots of color combinations, right? It'll work, but it will surely take a lot more time and effort. There's a better way to achieve this; instead of creating or choosing lots of different colors, we only need to choose one and create all of the others automatically. How? Easy! When working with Sass, we can use a little bit of simple math to build our color palette. One of the key tenets of Sass is its ability to work out values dynamically, using nothing more than a little simple math; we could define font sizes from H1 to H6 automatically, create new shades of colors, or even work out the right percentages to use when creating responsive sites! We will take a look at each of these examples throughout the article, but for now, let's focus on the principles of creating our colors using Sass. Creating colors using functions We can use simple math and functions to create just about any type of value, but colors are where these two really come into their own. The great thing about Sass is that we can work out the hex value for just about any color we want to, from a limited range of colors. This can easily be done using techniques such as adding two values together, or subtracting one value from another. To get a feel of how the color operators work, head over to the official documentation at http://sass-lang.com/documentation/file.SASS_REFERENCE.html#color_operations—it is worth reading! Nothing wrong with adding or subtracting values—it's a perfectly valid option, and will result in a valid hex code when compiled. But would you know that both values are actually deep shades of blue? Therein lies the benefit of using functions; instead of using math operators, we can simply say this: p { color: darken(#010203, 10%); } This, I am sure you will agree, is easier to understand as well as being infinitely more readable! The use of functions opens up a world of opportunities for us. We can use any one of the array of functions such as lighten(), darken(), mix(), or adjust-hue() to get a feel of how easy it is to get the values. If we head over to http://jackiebalzer.com/color, we can see that the author has exploded a number of Sass (and Compass—we will use this later) functions, so we can see what colors are displayed, along with their numerical values, as soon as we change the initial two values. Okay, we could play with the site ad infinitum, but I feel a demo coming on—to explore the effects of using the color functions to generate new colors. Let's construct a simple demo. 
For this exercise, we will dig up a copy of the colorvariables demo and modify it so that we're only assigning one color variable, not six. For this exercise, I will assume you are using Koala to compile the code. Okay, let's make a start: We'll start with opening up a copy of colorvariables.scss in your favorite text editor and removing lines 1 to 15 from the start of the file. Next, add the following lines, so that we should be left with this at the start of the file: $darkRed: #a43; $white: #fff; $black: #000;   $colorBox1: $darkRed; $colorBox2: lighten($darkRed, 30%); $colorBox3: adjust-hue($darkRed, 35%); $colorBox4: complement($darkRed); $colorBox5: saturate($darkRed, 30%); $colorBox6: adjust-color($darkRed, $green: 25); Save the file as colorfunctions.scss. We need a copy of the markup file to go with this code, so go ahead and extract a copy of colorvariables.html from the code download, saving it as colorfunctions.html in the root of our project area. Don't forget to change the link for the CSS file within to colorfunctions.css! Fire up Koala, then drag and drop colorfunctions.scss from our project area over the main part of the application window to add it to the list: Right-click on the file name and select Compile, and then wait for it to show Success in a green information box. If we preview the results of our work in a browser, we should see the following boxes appear: At this point, we have a working set of colors—granted, we might have to work a little on making sure that they all work together. But the key point here is that we have only specified one color, and that the others are all calculated automatically through Sass. Now that we are only defining one color by default, how easy is it to change the colors in our code? Well, it is a cinch to do so. Let's try it out using the help of the SassMeister playground. Changing the colors in use We can easily change the values used in the code, and continue to refresh the browser after each change. However, this isn't a quick way to figure out which colors work; to get a quicker response, there is an easier way: use the online Sass playground at http://www.sassmeister.com. This is the perfect way to try out different colors—the site automatically recompiles the code and updates the result as soon as we make a change. Try copying the HTML and SCSS code into the play area to view the result. The following screenshot shows the same code used in our demo, ready for us to try using different calculations: All images work on the principle that we take a base color (in this case, $dark-blue, or #a43), then adjust the color either by a percentage or a numeric value. When compiled, Sass calculates what the new value should be and uses this in the CSS. Take, for example, the color used for #box6, which is a dark orange with a brown tone, as shown in this screenshot: To get a feel of some of the functions that we can use to create new colors (or shades of existing colors), take a look at the main documentation at http://sass-lang.com/documentation/Sass/Script/Functions.html, or https://www.makerscabin.com/web/sass/learn/colors. These sites list a variety of different functions that we can use to create our masterpiece. We can also extend the functions that we have in Sass with the help of custom functions, such as the toolbox available at https://github.com/at-import/color-schemer—this may be worth a look. In our demo, we used a dark red color as our base. 
If we're ever stuck for ideas on colors, or want to get the right HEX, RGB(A), or even HSL(A) codes, then there are dozens of sites online that will give us these values. Here are a couple of them that you can try: HSLa Explorer, by Chris Coyier—this is available at https://css-tricks.com/examples/HSLaExplorer/. HSL Color Picker by Brandon Mathis—this is available at http://hslpicker.com/. If we know the name, but want to get a Sass value, then we can always try the list of 1,500+ colors at https://github.com/FearMediocrity/sass-color-palettes/blob/master/colors.scss. What's more, the list can easily be imported into our CSS, although it would make better sense to simply copy the chosen values into our Sass file, and compile from there instead. Mixing colors The one thing that we've not discussed, but is equally useful is that we are not limited to using functions on their own; we can mix and match any number of functions to produce our colors. A great way to choose colors, and get the appropriate blend of functions to use, is at http://sassme.arc90.com/. Using the available sliders, we can choose our color, and get the appropriate functions to use in our Sass code. The following image shows how: In most cases, we will likely only need to use two functions (a mix of darken and adjust hue, for example); if we are using more than two–three functions, then we should perhaps rethink our approach! In this case, a better alternative is to use Sass's mix() function, as follows: $white: #fff; $berry: hsl(267, 100%, 35%); p { mix($white, $berry, 0.7) } …which will give the following valid CSS: p { color: #5101b3; } This is a useful alternative to use in place of the command we've just touched on; after all, would you understand what adjust_hue(desaturate(darken(#db4e29, 2), 41), 67) would give as a color? Granted, it is something of an extreme calculation, nonetheless, it is technically valid. If we use mix() instead, it matches more closely to what we might do, for example, when mixing paint. After all, how else would we lighten its color, if not by adding a light-colored paint? Okay, let's move on. What's next? I hear you ask. Well, so far we've used core Sass for all our functions, but it's time to go a little further afield. Let's take a look at how you can use external libraries to add extra functionality. In our next demo, we're going to introduce using Compass, which you will often see being used with Sass. Using an external library So far, we've looked at using core Sass functions to produce our colors—nothing wrong with this; the question is, can we take things a step further? Absolutely, once we've gained some experience with using these functions, we can introduce custom functions (or helpers) that expand what we can do. A great library for this purpose is Compass, available at http://www.compass-style.org; we'll make use of this to change the colors which we created from our earlier boxes demo, in the section, Creating colors using functions. Compass is a CSS authoring framework, which provides extra mixins and reusable patterns to add extra functionality to Sass. In our demo, we're using shade(), which is one of the several color helpers provided by the Compass library. Let's make a start: We're using Compass in this demo, so we'll begin with installing the library. To do this, fire up Command Prompt, then navigate to our project area. 
We need to make sure that our installation RubyGems system software is up to date, so at Command Prompt, enter the following, and then press Enter: gem update --system Next, we're installing Compass itself—at the prompt, enter this command, and then press Enter: gem install compass Compass works best when we get it to create a project shell (or template) for us. To do this, first browse to http://www.compass-style.org/install, and then enter the following in the Tell us about your project… area: Leave anything in grey text as blank. This produces the following commands—enter each at Command Prompt, pressing Enter each time: Navigate back to Command Prompt. We need to compile our SCSS code, so go ahead and enter this command at the prompt (or copy and paste it), then press Enter: compass watch –sourcemap Next, extract a copy of the colorlibrary folder from the code download, and save it to the project area. In colorlibrary.scss, comment out the existing line for $backgrd_box6_color, and add the following immediately below it: $backgrd_box6_color: shade($backgrd_box5_color, 25%); Save the changes to colorlibrary.scss. If all is well, Compass's watch facility should kick in and recompile the code automatically. To verify that this has been done, look in the css subfolder of the colorlibrary folder, and you should see both the compiled CSS and the source map files present. If you find Compass compiles files in unexpected folders, then try using the following command to specify the source and destination folders when compiling: compass watch --sass-dir sass --css-dir css If all is well, we will see the boxes, when previewing the results in a browser window, as in the following image. Notice how Box 6 has gone a nice shade of deep red (if not almost brown)? To really confirm that all the changes have taken place as required, we can fire up a DOM inspector such as Firebug; a quick check confirms that the color has indeed changed: If we explore even further, we can see that the compiled code shows that the original line for Box 6 has been commented out, and that we're using the new function from the Compass helper library: This is a great way to push the boundaries of what we can do when creating colors. To learn more about using the Compass helper functions, it's worth exploring the official documentation at http://compass-style.org/reference/compass/helpers/colors/. We used the shade() function in our code, which darkens the color used. There is a key difference to using something such as darken() to perform the same change. To get a feel of the difference, take a look at the article on the CreativeBloq website at http://www.creativebloq.com/css3/colour-theming-sass-and-compass-6135593, which explains the difference very well. The documentation is a little lacking in terms of how to use the color helpers; the key is not to treat them as if they were normal mixins or functions, but to simply reference them in our code. To explore more on how to use these functions, take a look at the article by Antti Hiljá at http://clubmate.fi/how-to-use-the-compass-helper-functions/. We can, of course, create mixins to create palettes—for a more complex example, take a look at http://www.zingdesign.com/how-to-generate-a-colour-palette-with-compass/ to understand how such a mixin can be created using Compass. Okay, let's move on. So far, we've talked about using functions to manipulate colors; the flip side is that we are likely to use operators to manipulate values such as font sizes. 
For now, let's change tack and take a look at creating new values for changing font sizes. Changing font sizes using operators We already talked about using functions to create practically any value. Well, we've seen how to do it with colors; we can apply similar principles to creating font sizes too. In this case, we set a base font size (in the same way that we set a base color), and then simply increase or decrease font sizes as desired. In this instance, we won't use functions, but instead, use standard math operators, such as add, subtract, or divide. When working with these operators, there are a couple of points to remember: Sass math functions preserve units—this means we can't work on numbers with different units, such as adding a px value to a rem value, but can work with numbers that can be converted to the same format, such as inches to centimeters If we multiply two values with the same units, then this will produce square units (that is, 10px * 10px == 100px * px). At the same time, px * px will throw an error as it is an invalid unit in CSS. There are some quirks when working with / as a division operator —in most instances, it is normally used to separate two values, such as defining a pair of font size values. However, if the value is surrounded in parentheses, used as a part of another arithmetic expression, or is stored in a variable, then this will be treated as a division operator. For full details, it is worth reading the relevant section in the official documentation at http://sass-lang.com/documentation/file.Sass_REFERENCE.html#division-and-slash. With these in mind, let's create a simple demo—a perfect use for Sass is to automatically work out sizes from H1 through to H6. We could just do this in a simple text editor, but this time, let's break with tradition and build our demo directly into a session on http://www.sassmeister.com. We can then play around with the values set, and see the effects of the changes immediately. If we're happy with the results of our work, we can copy the final version into a text editor and save them as standard SCSS (or CSS) files. Let's begin by browsing to http://www.sassmeister.com, and adding the following HTML markup window: <html> <head>    <meta charset="utf-8" />    <title>Demo: Assigning colors using variables</title>    <link rel="stylesheet" type="text/css" href="css/     colorvariables.css"> </head> <body>    <h1>The cat sat on the mat</h1>    <h2>The cat sat on the mat</h2>    <h3>The cat sat on the mat</h3>    <h4>The cat sat on the mat</h4>    <h5>The cat sat on the mat</h5>    <h6>The cat sat on the mat</h6> </body> </html> Next, add the following to the SCSS window—we first set a base value of 3.0, followed by a starting color of #b26d61, or a dark, moderate red: $baseSize: 3.0; $baseColor: #b26d61; We need to add our H1 to H6 styles. The rem mixin was created by Chris Coyier, at https://css-tricks.com/snippets/css/less-mixin-for-rem-font-sizing/. 
We first set the font size, followed by setting the font color, using either the base color set earlier, or a function to produce a different shade: h1 { font-size: $baseSize; color: $baseColor; }   h2 { font-size: ($baseSize - 0.2); color: darken($baseColor, 20%); }   h3 { font-size: ($baseSize - 0.4); color: lighten($baseColor, 10%); }   h4 { font-size: ($baseSize - 0.6); color: saturate($baseColor, 20%); }   h5 { font-size: ($baseSize - 0.8); color: $baseColor - 111; }   h6 { font-size: ($baseSize - 1.0); color: rgb(red($baseColor) + 10, 23, 145); } SassMeister will automatically compile the code to produce a valid CSS, as shown in this screenshot: Try changing the base size of 3.0 to a different value—using http://www.sassmeister.com, we can instantly see how this affects the overall size of each H value. Note how we're multiplying the base variable by 10 to set the pixel value, or simply using the value passed to render each heading. In each instance, we can concatenate the appropriate unit using a plus (+) symbol. We then subtract an increasing value from $baseSize, before using this value as the font size for the relevant H value. You can see a similar example of this by Andy Baudoin as a CodePen, at http://codepen.io/baudoin/pen/HdliD/. He makes good use of nesting to display the color and strength of shade. Note that it uses a little JavaScript to add the text of the color that each line represents, and can be ignored; it does not affect the Sass used in the demo. The great thing about using a site such SassMeister is that we can play around with values and immediately see the results. For more details on using number operations in Sass, browse to the official documentation, which is at http://sass-lang.com/documentation/file.Sass_REFERENCE.html#number_operations. Okay, onwards we go. Let's turn our attention to creating something a little more substantial; we're going to create a complete site theme using the power of Sass and a few simple calculations. Summary Phew! What a tour! One of the key concepts of Sass is the use of functions and operators to create values, so let's take a moment to recap what we have covered throughout this article. We kicked off with a look at creating color values using functions, before discovering how we can mix and match different functions to create different shades, or using external libraries to add extra functionality to Sass. We then moved on to take a look at another key use of functions, with a look at defining different font sizes, using standard math operators. Resources for Article: Further resources on this subject: Nesting, Extend, Placeholders, and Mixins [article] Implementation of SASS [article] Constructing Common UI Widgets [article]

Share and Share Alike

Packt
10 Aug 2015
13 min read
In this article by Kevin Harvey, author of the book Test-Driven Development with Django, we'll expose the data in our application via a REST API. As we do, we'll learn: The importance of documentation in the API development process How to write functional tests for API endpoints API patterns and best practices (For more resources related to this topic, see here.) It's an API world, we're just coding in it It's very common nowadays to include a public REST API in your web project. Exposing your services or data to the world is generally done for one of two reasons: You've got interesting data, and other developers might want to integrate that information into a project they're working on You're building a secondary system that you expect your users to interact with, and that system needs to interact with your data (that is, a mobile or desktop app, or an AJAX-driven front end) We've got both reasons in our application. We're housing novel, interesting data in our database that someone might want to access programmatically. Also, it would make sense to build a desktop application that could interact with a user's own digital music collection so they could actually hear the solos we're storing in our system. Deceptive simplicity The good news is that there are some great options for third-party plugins for Django that allow you to build a REST API into an existing application. The bad news is that the simplicity of adding one of these packages can let you go off half-cocked, throwing an API on top of your project without a real plan for it. If you're lucky, you'll just wind up with a bird's nest of an API: inconsistent URLs, wildly varying payloads, and difficult authentication. In the worst-case scenario, your bolt-on API exposes data you didn't intend to make public and wind up with a self-inflicted security issue. Never forget that an API is sort of invisible. Unlike traditional web pages, where bugs are very public and easy to describe, API bugs are only visible to other developers. Take special care to make sure your API behaves exactly as intended by writing thorough documentation and tests to make sure you've implemented it correctly. Writing documentation first "Documentation is king." - Kenneth Reitz If you've spent any time at all working with Python or Django, you know what good documentation looks like. The Django folks in particular seem to understand this well: the key to getting developers to use your code is great documentation. In documenting an API, be explicit. Most of your API methods' docs should take the form of "if you send this, you will get back this", with real-world examples of input and output. A great side effect of prewriting documentation is that it makes the intention of your API crystal clear. You're allowing yourself to conjure up the API from thin air without getting bogged down in any of the details, so you can get a bird's-eye view of what you're trying to accomplish. Your documentation will keep you oriented throughout the development process. Documentation-Driven testing Once you've got your documentation done, testing is simply a matter of writing test cases that match up with what you've promised. The actions of the test methods exercise HTTP methods, and your assertions check the responses. Test-Driven Development really shines when it comes to API development. There are great tools for sending JSON over the wire, but properly formatting JSON can be a pain, and reading it can be worse. 
Enshrining test JSON in test methods and asserting they match the real responses will save you a ton of headache. More developers, more problems Good documentation and test coverage are exponentially more important when two groups are developing in tandem—one on the client application and one on the API. Changes to an API are hard for teams like this to deal with, and should come with a lot of warning (and apologies). If you have to make a change to an endpoint, it should break a lot of tests, and you should methodically go and fix them all. What's more, no one feels the pain of regression bugs like the developer of an API-consuming client. You really, really, really need to know that all the endpoints you've put out there are still going to work when you add features or refactor. Building an API with Django REST framework Now that you're properly terrified of developing an API, let's get started. What sort of capabilities should we add? Here are a couple possibilities: Exposing the Album, Track, and Solo information we have Creating new Solos or updating existing ones Initial documentation In the Python world it's very common for documentation to live in docstrings, as it keeps the description of how to use an object close to the implementation. We'll eventually do the same with our docs, but it's kind of hard to write a docstring for a method that doesn't exist yet. Let's open up a new Markdown file API.md, right in the root of the project, just to get us started. If you've never used Markdown before, you can read an introduction to GitHub's version of Markdown at https://help.github.com/articles/markdown-basics/. Here's a sample of what should go in API.md. Have a look at https://github.com/kevinharvey/jmad/blob/master/API.md for the full, rendered version. ...# Get a Track with Solos* URL: /api/tracks/<pk>/* HTTP Method: GET## Example Response{"name": "All Blues","slug": "all-blues","album": {"name": "Kind of Blue","url": "http://jmad.us/api/albums/2/"},"solos": [{"artist": "Cannonball Adderley","instrument": "saxophone","start_time": "4:05","end_time": "6:04","slug": "cannonball-adderley","url": "http://jmad.us/api/solos/281/"},...]}# Add a Solo to a Track* URL: /api/solos/* HTTP Method: POST## Example Request{"track": "/api/tracks/83/","artist": "Don Cherry","instrument": "cornet","start_time": "2:13","end_time": "3:54"}## Example Response{"url": "http://jmad.us/api/solos/64/","artist": "Don Cherry","slug": "don-cherry","instrument": "cornet","start_time": "2:13","end_time": "3:54","track": "http://jmad.us/api/tracks/83/"} There's not a lot of prose, and there needn't be. All we're trying to do is layout the ins and outs of our API. It's important at this point to step back and have a look at the endpoints in their totality. Is there enough of a pattern that you can sort of guess what the next one is going to look like? Does it look like a fairly straightforward API to interact with? Does anything about it feel clunky? Would you want to work with this API by yourself? Take time to think through any weirdness now before anything gets out in the wild. $ git commit -am 'Initial API Documentation'$ git tag -a ch7-1-init-api-docs Introducing Django REST framework Now that we've got some idea what we're building, let's actually get it going. We'll be using Django REST Framework (http://www.django-rest-framework.org/). 
Start by installing it in your environment:

$ pip install djangorestframework

Add rest_framework to your INSTALLED_APPS in jmad/settings.py:

INSTALLED_APPS = (
    ...
    'rest_framework'
)

Now we're ready to start testing.

Writing tests for API endpoints

While there's no such thing as browser-based testing for an external API, it is important to write tests that cover its end-to-end processing. We need to be able to send in requests like the ones we've documented and confirm that we receive the responses our documentation promises. Django REST Framework (DRF from here on out) provides tools to help write tests for the application functionality it provides. We'll use rest_framework.test.APITestCase to write functional tests.

Let's kick off with the list of albums. Convert albums/tests.py to a package, and add a test_api.py file. Then add the following:

from rest_framework.test import APITestCase

from albums.models import Album


class AlbumAPITestCase(APITestCase):

    def setUp(self):
        self.kind_of_blue = Album.objects.create(name='Kind of Blue')
        self.a_love_supreme = Album.objects.create(name='A Love Supreme')

    def test_list_albums(self):
        """Test that we can get a list of albums"""
        response = self.client.get('/api/albums/')
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response.data[0]['name'], 'A Love Supreme')
        self.assertEqual(response.data[1]['url'],
                         'http://testserver/api/albums/1/')

Since much of this is very similar to other tests that we've seen before, let's talk about the important differences:

We import and subclass APITestCase, which makes self.client an instance of rest_framework.test.APIClient. Both of these subclass their respective django.test counterparts and add a few niceties that help in testing APIs (none of which are showcased yet).

We test response.data, which we expect to be a list of Albums. response.data will be a Python dict or list that corresponds to the JSON payload of the response.

During the course of the test, APIClient (a subclass of Client) will use http://testserver as the protocol and hostname for the server, and our API should return a host-specific URI.

Run this test, and we get the following:

$ python manage.py test albums.tests.test_api
Creating test database for alias 'default'...
F
======================================================================
FAIL: test_list_albums (albums.tests.test_api.AlbumAPITestCase)
Test that we can get a list of albums
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/kevin/dev/jmad-project/jmad/albums/tests/test_api.py", line 17, in test_list_albums
    self.assertEqual(response.status_code, 200)
AssertionError: 404 != 200
----------------------------------------------------------------------
Ran 1 test in 0.019s

FAILED (failures=1)

We're failing because we're getting a 404 Not Found instead of a 200 OK status code. Proper HTTP communication is important in any web application, but it really comes into play when you're using AJAX. Most frontend libraries will properly classify responses as successful or erroneous based on the status code: making sure the status codes are on point will save your frontend developer friends a lot of headache. We're getting a 404 because we don't have a URL defined yet. Before we set up the route, let's add a quick unit test for routing.
Update the test case with one new import and method:

from django.core.urlresolvers import resolve
...
    def test_album_list_route(self):
        """Test that we've got routing set up for Albums"""
        route = resolve('/api/albums/')
        self.assertEqual(route.func.__name__, 'AlbumViewSet')

Here, we're just confirming that the URL routes to the correct view. Run it:

$ python manage.py test albums.tests.test_api.AlbumAPITestCase.test_album_list_route
...
django.core.urlresolvers.Resolver404: {'path': 'api/albums/', 'tried': [[<RegexURLResolver <RegexURLPattern list> (admin:admin) ^admin/>], [<RegexURLPattern solo_detail_view ^recordings/(?P<album>[\w-]+)/(?P<track>[\w-]+)/(?P<artist>[\w-]+)/$>], [<RegexURLPattern None ^$>]]}
----------------------------------------------------------------------
Ran 1 test in 0.003s

FAILED (errors=1)

We get a Resolver404 error, which is expected since Django shouldn't return anything at that path. Now we're ready to set up our URLs.

API routing with DRF's SimpleRouter

Take a look at the documentation for routers at http://www.django-rest-framework.org/api-guide/routers/. They're a very clean way of setting up URLs for DRF-powered views. Update jmad/urls.py like so:

...
from rest_framework import routers

from albums.views import AlbumViewSet

router = routers.SimpleRouter()
router.register(r'albums', AlbumViewSet)

urlpatterns = [
    # Admin
    url(r'^admin/', include(admin.site.urls)),

    # API
    url(r'^api/', include(router.urls)),

    # Apps
    url(r'^recordings/(?P<album>[\w-]+)/(?P<track>[\w-]+)/(?P<artist>[\w-]+)/$',
        'solos.views.solo_detail',
        name='solo_detail_view'),
    url(r'^$', 'solos.views.index'),
]

Here's what we changed:

We created an instance of SimpleRouter and used the register method to set up a route. The register method has two required arguments: a prefix to build the route methods from, and something called a viewset. Here we've supplied a non-existent class AlbumViewSet, which we'll come back to later.

We've added a few comments to break up our urls.py, which was starting to look a little like a rat's nest. The actual API URLs are registered under the '^api/' path using Django's include function.

Run the URL test again, and we'll get ImportError for AlbumViewSet. Let's add a stub to albums/views.py:

class AlbumViewSet():
    pass

Run the test now, and we'll start to see some specific DRF error messages to help us build out our view:

$ python manage.py test albums.tests.test_api.AlbumAPITestCase.test_album_list_route
Creating test database for alias 'default'...
F
...
  File "/Users/kevin/.virtualenvs/jmad/lib/python3.4/site-packages/rest_framework/routers.py", line 60, in register
    base_name = self.get_default_base_name(viewset)
  File "/Users/kevin/.virtualenvs/jmad/lib/python3.4/site-packages/rest_framework/routers.py", line 135, in get_default_base_name
    assert queryset is not None, '`base_name` argument not specified, and could '
AssertionError: `base_name` argument not specified, and could not automatically determine the name from the viewset, as it does not have a `.queryset` attribute.

After a fairly lengthy output, the test runner tells us that it was unable to get base_name for the URL, as we did not specify the base_name in the register method, and it couldn't guess the name because the viewset (AlbumViewSet) did not have a queryset attribute. In the router documentation, we came across the optional base_name argument for register (as well as the exact wording of this error). You can use that argument to control the name your URL gets. However, let's keep letting DRF do its default behavior.
We haven't read the documentation for viewsets yet, but we know that a regular Django class-based view expects a queryset parameter. Let's stick one on AlbumViewSet and see what happens:

from .models import Album


class AlbumViewSet():
    queryset = Album.objects.all()

Run the test again, and we get:

django.core.urlresolvers.Resolver404: {'path': 'api/albums/', 'tried': [[<RegexURLResolver <RegexURLPattern list> (admin:admin) ^admin/>], [<RegexURLPattern solo_detail_view ^recordings/(?P<album>[\w-]+)/(?P<track>[\w-]+)/(?P<artist>[\w-]+)/$>], [<RegexURLPattern None ^$>]]}
----------------------------------------------------------------------
Ran 1 test in 0.011s

FAILED (errors=1)

Huh? Another 404 is a step backwards. What did we do wrong? Maybe it's time to figure out what a viewset really is.

Summary

In this article, we covered basic API design and testing patterns, including the importance of documentation when developing an API. In doing so, we took a deep dive into Django REST Framework and the utilities and testing tools available in it.

Resources for Article: Further resources on this subject: Test-driven API Development with Django REST Framework [Article] Adding a developer with Django forms [Article] Code Style in Django [Article]

EAV model

Packt
10 Aug 2015
11 min read
In this article by Allan MacGregor, author of the book Magento PHP Developer's Guide - Second Edition, we cover details about EAV models, its usefulness in retrieving data, and the advantages it provides to the merchants and developers. EAV stands for entity, attribute, and value and is probably the most difficult concept for new Magento developers to grasp. While the EAV concept is not unique to Magento, it is rarely implemented on modern systems. Additionally, a Magento implementation is not a simple one. (For more resources related to this topic, see here.) What is EAV? In order to understand what EAV is and what its role within Magento is, we need to break down parts of the EAV model: Entity: This represents the data items (objects) inside Magento products, customers, categories, and orders. Each entity is stored in the database with a unique ID. Attribute: These are our object properties. Instead of having one column per attribute on the product table, attributes are stored on separate sets of tables. Value: As the name implies, it is simply the value link to a particular attribute. This data model is the secret behind Magento's flexibility and power, allowing entities to add and remove new properties without having to make any changes to the code, templates, or the database schema. This model can be seen as a vertical way of growing our database (new attributes and more rows), while the traditional model involves a horizontal growth pattern (new attributes and more columns), which would result in a schema redesign every time new attributes are added. The EAV model not only allows for the fast evolution of our database, but is also more effective because it only works with non-empty attributes, avoiding the need to reserve additional space in the database for null values. If you are interested in exploring and learning more about the Magento database structure, I highly recommend visiting www.magereverse.com. Adding a new product attribute is as simple going to the Magento backend and specifying the new attribute type, be it color, size, brand, or anything else. The opposite is true as well and we can get rid of unused attributes on our products or customer models. For more information on managing attributes, visit http://www.magentocommerce.com/knowledge-base/entry/how-do-attributes-work-in-magento. The Magento community edition currently has eight different types of EAV objects: Customer Customer Address Products Product Categories Orders Invoices Credit Memos Shipments The Magento Enterprise Edition has one additional type called RMA item, which is part of the Return Merchandise Authorization (RMA) system. All this flexibility and power is not free; there is a price to pay. Implementing the EAV model results in having our entity data distributed on a large number of tables. For example, just the Product Model is distributed to around 40 different tables. The following diagram only shows a few of the tables involved in saving the information of Magento products: Other major downsides of EAV are the loss of performance while retrieving large collections of EAV objects and an increase in the database query complexity. As the data is more fragmented (stored in more tables), selecting a single record involves several joins. One way Magento works around this downside of EAV is by making use of indexes and flat tables. For example, Magento can save all the product information into the flat_catalog table for easier and faster access. 
Let's continue using Magento products as our example and manually build the query to retrieve a single product. If you have phpmyadmin or MySQL Workbench installed on your development environment, you can experiment with the following queries. Each can be downloaded on the PHPMyAdmin website at http://www.phpmyadmin.net/ and the MySQL Workbench website at http://www.mysql.com/products/workbench/. The first table that we need to use is the catalog_product_entity table. We canconsider this our main product EAV table since it contains the main entity records for our products: Let's query the table by running the following SQL query: SELECT FROM `catalog_product_entity`; The table contains the following fields: entity_id: This is our product unique identifier that is used internally by Magento. entity_type_id: Magento has several different types of EAV models. Products, customers, and orders are just some of them. Identifying each of these by type allows Magento to retrieve the attributes and values from the appropriate tables. attribute_set_id: Product attributes can be grouped locally into attribute sets. Attribute sets allow even further flexibility on the product structure as products are not forced to use all available attributes. type_id: There are several different types of products in Magento: simple, configurable, bundled, downloadable, and grouped products; each with unique settings and functionality. sku: This stands for Stock Keeping Unit and is a number or code used to identify each unique product or item for sale in a store. This is a user-defined value. has_options: This is used to identify if a product has custom options. required_options: This is used to identify if any of the custom options that are required. created_at: This is the row creation date. updated_at: This is the last time the row was modified. Now we have a basic understanding of the product entity table. Each record represents a single product in our Magento store, but we don't have much information about that product beyond the SKU and the product type. So, where are the attributes stored? And how does Magento know the difference between a product attribute and a customer attribute? For this, we need to take a look into the eav_attribute table by running the following SQL query: SELECT FROM `eav_attribute`; As a result, we will not only see the product attributes, but also the attributes corresponding to the customer model, order model, and so on. Fortunately, we already have a key to filter the attributes from this table. Let's run the following query: SELECT FROM `eav_attribute` WHERE entity_type_id = 4; This query tells the database to only retrieve the attributes where the entity_type_id column is equal to the product entity_type_id(4). Before moving, let's analyze the most important fields inside the eav_attribute table: attribute_id: This is the unique identifier for each attribute and primary key of the table. entity_type_id: This relates each attribute to a specific eav model type. attribute_code: This is the name or key of our attribute and is used to generate the getters and setters for our magic methods. backend_model: These manage loading and storing data into the database. backend_type: This specifies the type of value stored in the backend (database). backend_table: This is used to specify if the attribute should be stored on a special table instead of the default EAV table. frontend_model: These handle the rendering of the attribute element into a web browser. 
frontend_input: Similar to the frontend model, the frontend input specifies the type of input field the web browser should render. frontend_label: This is the label/name of the attribute as it should be rendered by the browser. source_model: These are used to populate an attribute with possible values. Magento comes with several predefined source models for countries, yes or no values, regions, and so on. Retrieving the data At this point, we have successfully retrieved a product entity and the specific attributes that apply to that entity. Now it's time to start retrieving the actual values. In order to simplify the example (and the query) a little, we will only try to retrieve the name attribute of our products. How do we know which table our attribute values are stored on? Well, thankfully, Magento follows a naming convention to name the tables. If we inspect our database structure, we will notice that there are several tables using the catalog_product_entity prefix: catalog_product_entity catalog_product_entity_datetime catalog_product_entity_decimal catalog_product_entity_int catalog_product_entity_text catalog_product_entity_varchar catalog_product_entity_gallery catalog_product_entity_media_gallery catalog_product_entity_tier_price Wait! How do we know which is the right table to query for our name attribute values? If you were paying attention, I already gave you the answer. Remember that the eav_attribute table had a column called backend_type? Magento EAV stores each attribute on a different table based on the backend type of that attribute. If we want to confirm the backend type of our name attribute, we can do so by running the following code: SELECT FROM `eav_attribute` WHERE `entity_type_id` =4 AND `attribute_code` = 'name'; As a result, we should see that the backend type is varchar and that the values for this attribute are stored in the catalog_product_entity_varchar table. Let's inspect this table: The catalog_product_entity_varchar table is formed by only 6 columns: value_id: This is the attribute value unique identifier and primary key entity_type_id: This is the entity type ID to which this value belongs attribute_id: This is the foreign key that relates the value to our eav_entity table store_id: This is the foreign key matching an attribute value with a storeview entity_id: This is the foreign key relating to the corresponding entity table, in this case, catalog_product_entity value: This is the actual value that we want to retrieve Depending on the attribute configuration, we can have it as a global value, meaning, it applies across all store views or a value per storeview. Now that we finally have all the tables that we need to retrieve the product information, we can build our query: SELECT p.entity_id AS product_id, var.value AS product_name, p.sku AS product_sku FROM catalog_product_entity p, eav_attribute eav, catalog_product_entity_varchar var WHERE p.entity_type_id = eav.entity_type_id AND var.entity_id = p.entity_id    AND eav.attribute_code = 'name'    AND eav.attribute_id = var.attribute_id From our query, we should see a result set with three columns, product_id, product_name, and product_sku. So let's step back for a second in order to get product names with SKUs with raw SQL. We had to write a five-line SQL query, and we only retrieved two values from our products, from one single EAV value table if we want to retrieve a numeric field such as price or a text-value-like product. 
If we didn't have an ORM in place, maintaining Magento would be almost impossible. Fortunately, we do have an ORM in place, and most likely, you will never need to deal with raw SQL to work with Magento. That said, let's see how we can retrieve the same product information by using the Magento ORM: Our first step is going to be to instantiate a product collection: $collection = Mage::getModel('catalog/product')->getCollection(); Then we will specifically tell Magento to select the name attribute: $collection->addAttributeToSelect('name'); Then, we will ask it to sort the collection by name: $collection->setOrder('name', 'asc'); Finally, we will tell Magento to load the collection: $collection->load(); The end result is a collection of all products in the store sorted by name. We can inspect the actual SQL query by running the following code: echo $collection->getSelect()->__toString(); In just three lines of code, we are telling Magento to grab all the products in the store, to specifically select the name, and finally order the products by name. The last line $collection->getSelect()->__toString(); allows to see the actual query that Magento is executing in our behalf. The actual query being generated by Magento is as follows: SELECT `e`.. IF( at_name.value_id >0, at_name.value, at_name_default.value ) AS `name` FROM `catalog_product_entity` AS `e` LEFT JOIN `catalog_product_entity_varchar` AS `at_name_default` ON (`at_name_default`.`entity_id` = `e`.`entity_id`) AND (`at_name_default`.`attribute_id` = '65') AND `at_name_default`.`store_id` =0 LEFT JOIN `catalog_product_entity_varchar` AS `at_name` ON ( `at_name`.`entity_id` = `e`.`entity_id` ) AND (`at_name`.`attribute_id` = '65') AND (`at_name`.`store_id` =1) ORDER BY `name` ASC As we can see, the ORM and the EAV models are wonderful tools that not only put a lot of power and flexibility in the hands of the developers, but they also do it in a way that is comprehensive and easy to use. Summary In this article, we learned about EAV models and how they are structured to provide Magento with data flexibility and extensibility that both merchants and developers can take advantage of. Resources for Article: Further resources on this subject: Creating a Shipping Module [article] Preparing and Configuring Your Magento Website [article] Optimizing Magento Performance — Using HHVM [article]

Oracle GoldenGate 12c — An Overview

Packt
10 Aug 2015
21 min read
In this article by John P Jeffries, author of the book Oracle GoldenGate 12c Implementer's Guide, he provides an introduction to Oracle GoldenGate by describing the key components, processes, and considerations required to build and implement a GoldenGate solution. John tells you how to address some of the issues that influence the decision-making process when you design a GoldenGate solution. He focuses on the additional configuration options available in Oracle GoldenGate 12c (For more resources related to this topic, see here.) 12c new features Oracle has provided some exciting new features in their 12c version of GoldenGate, some of which we have already touched upon. Following the official desupport of Oracle Streams in Oracle Database 12c, Oracle has essentially migrated some of the key features to its strategic product. You will find that GoldenGate now has a tighter integration with the Oracle database, enabling enhanced functionality. Let's explore some of the new features available in Oracle GoldenGate 12c. Integrated capture Integrated capture has been available since Oracle GoldenGate 11gR2 with Oracle Database 11g (11.2.0.3). Originally decoupled from the database, GoldenGate's new architecture provides the option to integrate its Extract process(es) with the Oracle database. This enables GoldenGate to access the database's data dictionary and undo tablespace, providing replication support for advanced features and data types. Oracle GoldenGate 12c still supports the original Extract configuration, known as Classic Capture. Integrated Replicat Integrated Replicat is a new feature in Oracle GoldenGate 12c for the delivery of data to Oracle Database 11g (11.2.0.4) or 12c. The performance enhancement provides better scalability and load balancing that leverages the database parallel apply servers for automatic, dependency-aware parallel Replicat processes. With Integrated Replicat, there is no need for users to manually split the delivery process into multiple threads and manage multiple parameter files. GoldenGate now uses a lightweight streaming API to prepare, coordinate, and apply the data to the downstream database. Oracle GoldenGate 12c still supports the original Replicat configuration, known as Classic Delivery. Downstream capture Downstream capture was one of my favorite Oracle Stream features. It allows for a combined in-memory capture and apply process that achieves very low latency even in heavy data load situations. Like Streams, GoldenGate builds on this feature by employing a real-time downstream capture process. This method uses Oracle Data Guard's log transportation mechanism, which writes changed data to standby redo logs. It provides a best-of-both-worlds approach, enabling a real-time mine configuration that falls back to archive log mining when the apply process cannot keep up. In addition, the real-time mine process is re-enabled automatically when the data throughput is less. Installation One of the major changes in Oracle GoldenGate 12c is the installation method. Like other Oracle products, Oracle GoldenGate 12c is now installed using the Java-based Oracle Universal Installer (OUI) in either the interactive or silent mode. OUI reads the Oracle Inventory on your system to discover existing installations (Oracle Homes), allowing you to install, deinstall, or clone software products. Upgrading to 12c Whether you wish to upgrade your current GoldenGate installation from Oracle GoldenGate 11g Release 2 or from an earlier version, the steps are the same. 
Simply stop all the GoldenGate running processes on your database server, backup the GoldenGate home, and then use OUI to perform the fresh installation. It is important to note, however, while restarting replication, ensure the capture process begins from the point at which it was gracefully stopped to guarantee against lost synchronization data. Multitenant database replication As the version suggests, Oracle GoldenGate 12c now supports data replication for Oracle Database 12c. Those familiar with the 12c database features will be aware of the multitenant container database (CDB) that provides database consolidation. Each CDB consists of a root container and one or more pluggable databases (PDB). The PDB can contain multiple schemas and objects, just like a conventional database that GoldenGate replicates data to and from. The GoldenGate Extract process pulls data from multiple PDBs or containers in the source, combining the changed data into a single trail file. Replicat, however, splits the data into multiple process groups in order to apply the changes to a target PDB. Coordinated Delivery The Coordinated Delivery option applies to the GoldenGate Replicat process when configured in the classic mode. It provides a performance gain by automatically splitting the delivered data from a remote trail file into multiple threads that are then applied to the target database in parallel. GoldenGate manages the coordination across selected events that require ordering, including DDL, primary key updates, event marker interface (EMI), and SQLEXEC. Coordinated Delivery can be used with both Oracle (from version 11.2.0.4) and non-Oracle databases. Event-based processing In GoldenGate 12c, event-based processing has been enhanced to allow specific events to be captured and acted upon automatically through an EMI. SQLEXEC provides the API to EMI, enabling programmatic execution of tasks following an event. Now it is possible, for example, to detect the start of a batch job or large transaction, trap the SQL statement(s), and ignore the subsequent multiple change records until the end of the source system transaction. The original DML can then be replayed on the target database as one transaction. This is a major step forward in the performance tuning for data replication. Enhanced security Recent versions of GoldenGate have included security features such as the encryption of passwords and data. Oracle GoldenGate 12c now supports a credential store, better known as an Oracle wallet, that securely stores an alias associated with a username and password. The alias is then referenced in the GoldenGate parameter files rather than the actual username and password. Conflict Detection and Resolution In earlier versions of GoldenGate, Conflict Detection and Resolution (CDR) has been somewhat lightweight and was not readily available out of the box. Although available in Oracle Streams, the GoldenGate administrator would have to programmatically resolve any data conflict in the replication process using GoldenGate built-in tools. In the 12c version, the feature has emerged as an easily configurable option through Extract and Replicat parameters. Dynamic Rollback Selective data back out of applied transactions is now possible using the Dynamic Rollback feature. The feature operates at table and record-level and supports point-in-time recovery. 
This potentially eliminates the need for a full database restore, following data corruption, erroneous deletions, or perhaps the removal of test data, thus avoiding hours of system downtime. Streams to GoldenGate migration Oracle Streams users can now migrate their data replication solution to Oracle GoldenGate 12c using a purpose-built utility. This is a welcomed feature given that Streams is no longer supported in Oracle Database 12c. The Streams2ogg tool auto generates Oracle GoldenGate configuration files that greatly simplify the effort required in the migration process. Performance In today's demand for real-time access to real-time data, high performance is the key. For example, businesses will no longer wait for information to arrive on their DSS to make decisions and users will expect the latest information to be available in the public cloud. Data has value and must be delivered in real time to meet the demand. So, how long does it take to replicate a transaction from the source database to its target? This is known as end-to-end latency, which typically has a threshold that must not be breeched in order to satisfy a predefined Service Level Agreement (SLA). GoldenGate refers to latency as lag, which can be measured at different intervals in the replication process. They are as follows: Source to Extract: The time taken for a record to be processed by the Extract compared to the commit timestamp on the database Replicat to target: The time taken for the last record to be processed by the Replicat process compared to the record creation time in the trail file A well-designed system may still encounter spikes in the latency, but it should never be continuous or growing. Peaks are typically caused by load on the source database system, where the latency increases with the number of transactions per second. Lag should be measured as an average over a specified period. Trying to tune GoldenGate when the design is poor is a difficult situation to be in. For the system to perform well, you may need to revisit the design. Availability Another important NFR is availability. Normally quoted as a percentage, the system must be available for the specified length of time. For example, NFR of 99.9 percent availability equates to a downtime of 8.76 hours in a year, which sounds quite a lot, especially if it were to occur all at once. Oracle's maximum availability architecture (MAA) offers enhanced availability through products such as Real Application Clusters (RAC) and Active Data Guard (ADG). However, as we previously described, the network plays a major role in data replication. The NFR relates to the whole system, so you need to be sure your design covers redundancy for all components. Event-based processing It is important in any data replication environment to capture and manage events, such as trail records containing specific data or operations or maybe the occurrence of a certain error. These are known as Event Markers. GoldenGate provides a mechanism to perform an action on a given event or condition. These are known as Event Actions and are triggered by Event Records. If you are familiar with Oracle Streams, Event Actions are like rules. The Event Marker System GoldenGate's Event Marker System, also known as event marker interface (EMI), allows custom DML-driven processing on an event. This comprises of an Event Record to trigger a given action. 
An Event Record can be either a trail record that satisfies a condition evaluated by a WHERE or FILTER clause, or a record written to an event table that enables an action to occur. Typical actions are writing status information, reporting errors, ignoring certain records in a trail, invoking a shell script, or performing an administrative task. The following Replicat code describes the process of capturing an event and performing an action by logging DELETE operations made against the CREDITCARD_ACCOUNTS table using the EVENTACTIONS parameter:

MAP SRC.CREDITCARD_ACCOUNTS, TARGET TGT.CREDITCARD_ACCOUNTS_DIM;
TABLE SRC.CREDITCARD_ACCOUNTS, &
FILTER (@GETENV ('GGHEADER', 'OPTYPE') = 'DELETE'), &
EVENTACTIONS (LOG INFO);

By default, all logged information is written to the process group report file, the GoldenGate error log, and the system messages file. On Linux, this is the /var/log/messages file. Note that the TABLE parameter is also used in the Replicat's parameter file. This is a means of triggering an Event Action to be executed by the Replicat when it encounters an Event Marker. The following code shows the use of the IGNORE option, which prevents certain records from being extracted or replicated and is particularly useful to filter out system-type data. When used with the TRANSACTION option, the whole transaction, and not just the Event Record, is ignored:

TABLE SRC.CREDITCARD_ACCOUNTS, &
FILTER (@GETENV ('GGHEADER', 'OPTYPE') = 'DELETE'), &
EVENTACTIONS (IGNORE TRANSACTION);

The preceding code extends the previous example by stopping the Event Record itself from being replicated.

Using Event Actions to improve batch performance

All replication technologies typically suffer from one flaw: the way in which the data is replicated. Consider a table that is populated with a million rows as part of a batch process. This may be a bulk insert operation that Oracle completes on the source database as one transaction. However, Oracle will write each change to its redo logs as Logical Change Records (LCRs). GoldenGate will subsequently mine the logs, write the LCRs to a remote trail, convert each one back to DML, and apply them to the target database, one row at a time. The single source transaction becomes one million transactions, which causes a huge performance overhead. To overcome this issue, we can use Event Actions to:

- Detect the DML statement (INSERT INTO TABLE ... SELECT ...)
- Ignore the data resulting from the SELECT part of the statement
- Replicate just the DML statement as an Event Record
- Execute just the DML statement on the target database

The solution requires a statement table on both the source and target databases to trigger the event. Also, both databases must be perfectly synchronized to avoid data integrity issues.

User tokens

User tokens are GoldenGate environment variables that are captured and stored in the trail record for replication. They can be accessed via the @GETENV function. We can use token data in column maps, stored procedures called by SQLEXEC, and, of course, in macros.

Using user tokens to populate a heartbeat table

A vast array of user tokens exist in GoldenGate. Let's start by looking at a common method of replicating system information to populate a heartbeat table that can be used to monitor performance. We can use the TOKENS option of the Extract TABLE parameter to define a user token and associate it with the GoldenGate environment data.
The following Extract configuration code shows the token declarations for the heartbeat table:

TABLE GGADMIN.GG_HB_OUT, &
TOKENS (EXTGROUP = @GETENV ("GGENVIRONMENT","GROUPNAME"), &
EXTTIME = @DATE ("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV("JULIANTIMESTAMP")), &
EXTLAG = @GETENV ("LAG","SEC"), &
EXTSTAT_TOTAL = @GETENV ("DELTASTATS","DML"), &
), FILTER (@STREQ (EXTGROUP, @GETENV("GGENVIRONMENT","GROUPNAME")));

For the data pump, the example Extract configuration is shown here:

TABLE GGADMIN.GG_HB_OUT, &
TOKENS (PMPGROUP = @GETENV ("GGENVIRONMENT","GROUPNAME"), &
PMPTIME = @DATE ("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV("JULIANTIMESTAMP")), &
PMPLAG = @GETENV ("LAG","SEC"));

Also, for the Replicat, the following configuration populates the heartbeat table on the target database with the token data derived from the Extract, data pump, and Replicat, containing system details and replication lag:

MAP GGADMIN.GG_HB_OUT_SRC, TARGET GGADMIN.GG_HB_IN_TGT, &
KEYCOLS (DB_NAME, EXTGROUP, PMPGROUP, REPGROUP), &
INSERTMISSINGUPDATES, &
COLMAP (USEDEFAULTS, &
ID = 0, &
SOURCE_COMMIT = @GETENV ("GGHEADER", "COMMITTIMESTAMP"), &
EXTGROUP = @TOKEN ("EXTGROUP"), &
EXTTIME = @TOKEN ("EXTTIME"), &
PMPGROUP = @TOKEN ("PMPGROUP"), &
PMPTIME = @TOKEN ("PMPTIME"), &
REPGROUP = @TOKEN ("REPGROUP"), &
REPTIME = @DATE ("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV("JULIANTIMESTAMP")), &
EXTLAG = @TOKEN ("EXTLAG"), &
PMPLAG = @TOKEN ("PMPLAG"), &
REPLAG = @GETENV ("LAG","SEC"), &
EXTSTAT_TOTAL = @TOKEN ("EXTSTAT_TOTAL"));

As in the heartbeat table example, the defined user tokens can be called in a MAP statement using the @TOKEN function. The SOURCE_COMMIT and LAG metrics are self-explanatory. However, EXTSTAT_TOTAL, which is derived from DELTASTATS, is particularly useful to measure the load on the source system when you evaluate latency peaks. For applications, user tokens are useful to audit data and trap exceptions within the replicated data stream. Common user tokens are shown in the following code, which replicates the token data to five columns of an audit table:

MAP SRC.AUDIT_LOG, TARGET TGT.AUDIT_LOG, &
COLMAP (USEDEFAULTS, &
OSUSER = @TOKEN ("TKN_OSUSER"), &
DBNAME = @TOKEN ("TKN_DBNAME"), &
HOSTNAME = @TOKEN ("TKN_HOSTNAME"), &
TIMESTAMP = @TOKEN ("TKN_COMMITTIME"), &
BEFOREAFTERINDICATOR = @TOKEN ("TKN_BEFOREAFTERINDICATOR"));

The BEFOREAFTERINDICATOR environment variable is particularly useful to provide a status flag in order to check whether the data came from a Before or After image of an UPDATE or DELETE operation. By default, GoldenGate provides After images. To enable Before image extraction, the GETUPDATEBEFORES Extract parameter must be used on the source database.

Using logic in the data replication

GoldenGate has a number of functions that enable the administrator to program logic into the Extract and Replicat process configuration. These are similar to the generic IF and CASE functions found in programming languages. In addition, the @COLTEST function enables conditional calculations by testing for one or more column conditions. It is typically used with the @IF function, as shown in the following code:

MAP SRC.CREDITCARD_PAYMENTS, TARGET TGT.CREDITCARD_PAYMENTS_FACT, &
COLMAP (USEDEFAULTS, &
AMOUNT = @IF(@COLTEST(AMOUNT, MISSING, INVALID), 0, AMOUNT));

Here, the @COLTEST function tests the AMOUNT column in the source data to check whether it is MISSING or INVALID. The @IF function returns 0 if @COLTEST returns TRUE and returns the value of AMOUNT if FALSE.
The target AMOUNT column is therefore set to 0 when the equivalent source column is found to be missing or invalid; otherwise, a direct mapping occurs. The @CASE function tests a list of values for a match and then returns a specified value. If no match is found, @CASE will return a default value. There is no limit to the number of cases to test; however, if the list is very large, a database lookup may be more appropriate. The following code shows the simplicity of the @CASE statement. Here, the country name is returned from the country code:

MAP SRC.CREDITCARD_STATEMENT, TARGET TGT.CREDITCARD_STATEMENT_DIM, &
COLMAP (USEDEFAULTS, &
COUNTRY = @CASE(COUNTRY_CODE, "UK", "United Kingdom", "USA", "United States of America"));

Other GoldenGate functions that perform tests also exist, such as @EVAL and @VALONEOF. Similar to @CASE, @VALONEOF compares a column or string to a list of values; the difference is that it evaluates more than one value against a single column or string. When the following code is used with @IF, it returns "EUROPE" when TRUE and "UNKNOWN" when FALSE:

MAP SRC.CREDITCARD_STATEMENT, TARGET TGT.CREDITCARD_STATEMENT_DIM, &
COLMAP (USEDEFAULTS, &
REGION = @IF(@VALONEOF(COUNTRY_CODE, "UK", "E", "D"), "EUROPE", "UNKNOWN"));

The @EVAL function evaluates a list of conditions and returns a specified value. Optionally, if none are satisfied, it returns a default value. There is no limit to the number of evaluations you can list; however, it is best to list the most common evaluations at the beginning to enhance performance. The following code includes the BEFORE option, which compares the before value of the replicated source column to the current value of the target column. Depending on the evaluation, @EVAL will return "PAID MORE", "PAID LESS", or "PAID SAME":

MAP SRC.CREDITCARD_PAYMENTS, TARGET TGT.CREDITCARD_PAYMENTS, &
COLMAP (USEDEFAULTS, &
STATUS = @EVAL(AMOUNT < BEFORE.AMOUNT, "PAID LESS", AMOUNT > BEFORE.AMOUNT, "PAID MORE", AMOUNT = BEFORE.AMOUNT, "PAID SAME"));

The BEFORE option can be used with other GoldenGate functions, including the WHERE and FILTER clauses. However, for the Before image to be written to the trail and to be available, the GETUPDATEBEFORES parameter must be enabled in the source database's Extract parameter file or the target database's Replicat parameter file, but not both. The GETUPDATEBEFORES parameter can be set globally for all tables defined in the Extract, or individually per table using GETUPDATEBEFORES and IGNOREUPDATEBEFORES, as seen in the following code:

EXTRACT EOLTP01
USERIDALIAS srcdb DOMAIN admin
SOURCECATALOG PDB1
EXTTRAIL ./dirdat/aa
GETAPPLOPS
IGNOREREPLICATES
GETUPDATEBEFORES
TABLE SRC.CHECK_PAYMENTS;
IGNOREUPDATEBEFORES
TABLE SRC.CHECK_PAYMENTS_STATUS;
TABLE SRC.CREDITCARD_ACCOUNTS;
TABLE SRC.CREDITCARD_PAYMENTS;

Tracing processes to find wait events

If you have worked with Oracle software, particularly in the performance tuning space, you will be familiar with tracing. Tracing enables additional information to be gathered from a given process or function to diagnose performance problems or even bugs. One example is the SQL trace, which can be enabled at the database session or system level to provide key information, such as wait events and parse, fetch, and execute times. Oracle GoldenGate 12c offers a similar tracing mechanism through the trace and trace2 options of the SEND GGSCI command. This is like a session-level SQL trace.
Also, in a similar fashion to performing a database system trace, tracing can be enabled in the GoldenGate process parameter files, which makes it permanent until the Extract or Replicat is stopped. trace provides processing information, whereas trace2 identifies the processes with wait events. The following commands show tracing being dynamically enabled for 2 minutes on a running Replicat process:

GGSCI (db12server02) 1> send ROLAP01 trace2 ./dirrpt/ROLAP01.trc

Wait for 2 minutes, then turn tracing off:

GGSCI (db12server02) 2> send ROLAP01 trace2 off
GGSCI (db12server02) 3> exit

To view the contents of the Replicat trace file, we can execute the following command. In the case of a coordinated Replicat, the trace file will contain information from all of its threads:

$ view dirrpt/ROLAP01.trc
statistics between 2015-08-08 Wed HKT 11:55:27 and 2015-08-08 Wed HKT 11:57:28
RPT_PROD_01.LIMIT_TP_RESP : n=2 : op=Insert; total=3; avg=1.5000; max=3msec
RPT_PROD_01.SUP_POOL_SMRY_HIST : n=1 : op=Insert; total=2; avg=2.0000; max=2msec
RPT_PROD_01.EVENTS : n=1 : op=Insert; total=2; avg=2.0000; max=2msec
RPT_PROD_01.DOC_SHIP_DTLS : n=17880 : op=FieldComp; total=22003; avg=1.2306; max=42msec
RPT_PROD_01.BUY_POOL_SMRY_HIST : n=1 : op=Insert; total=2; avg=2.0000; max=2msec
RPT_PROD_01.LIMIT_TP_LOG : n=2 : op=Insert; total=2; avg=1.0000; max=2msec
RPT_PROD_01.POOL_SMRY : n=1 : op=FieldComp; total=2; avg=2.0000; max=2msec
..
=============================================== summary ==============
Delete : n=2; total=2; avg=1.00;
Insert : n=78; total=356; avg=4.56;
FieldComp : n=85728; total=123018; avg=1.43;
total_op_num=85808 : total_op_time=123376 ms : total_avg_time=1.44 ms/op
total commit number=1

The trace file provides the following information:

- The table name
- The operation type (FieldComp is for a compressed field)
- The number of operations
- The average wait
- The maximum wait
- A summary of totals per operation type

Armed with the preceding information, we can quickly see which operations against which tables are taking the longest time.

Exception handling

Oracle GoldenGate 12c now supports Conflict Detection and Resolution (CDR). However, out of the box, GoldenGate takes a catch-all approach to exception handling. For example, by default, should any operational failure occur, a Replicat process will ABEND and roll back the transaction to the last known checkpoint. This may not be ideal in a production environment. The HANDLECOLLISIONS and NOHANDLECOLLISIONS parameters can be used to control whether or not a Replicat process tries to resolve the duplicate record error and the missing record error. The way to determine what error occurred and on which Replicat is to create an exceptions handler. Exception handling differs from CDR by trapping and reporting Oracle errors suffered by the data replication (DML and DDL). On the other hand, CDR detects and resolves inconsistencies in the replicated data, such as mismatches with before and after images. Exceptions can always be trapped by the Oracle error they produce. GoldenGate provides an exception handler parameter called REPERROR that allows the Replicat to continue processing data after a predefined error. For example, we can include the following configuration in our Replicat parameter file to ignore ORA-00001 "unique constraint (%s.%s) violated":

REPERROR (DEFAULT, EXCEPTION)
REPERROR (DEFAULT2, ABEND)
REPERROR (-1, EXCEPTION)

Cloud computing

Cloud computing has grown enormously in recent years. Oracle has named its latest version of products 12c, the c standing for Cloud, of course.
The architecture of Oracle 12c Database allows a multitenant container database to support multiple pluggable databases—a key feature of cloud computing—rather than implement the inefficient schema consolidation, typical of the previous Oracle database version architecture, which is known to cause contention on shared resources during high load. The Oracle 12c architecture supports a database consolidation approach through its efficient memory management and dedicated background processes. Online computer companies such as Amazon have leveraged the cloud concept by offering Relational Database Services (RDS), which is becoming very popular for its speed of readiness, support, and low cost. The cloud environments are often huge, containing hundreds of servers, petabytes of storage, terabytes of memory, and countless CPU cores. The cloud has to support multiple applications in a multi-tiered, shared environment, often through virtualization technologies, where storage and CPUs are typically the driving factors for cost-effective options. Customers choose their hardware footprint that best suits their budget and system requirements, commonly known as Platform as a Service (PaaS). Cloud computing is an extension to grid computing that offers both public and private clouds. GoldenGate and Big Data It is increasingly evident that organizations need to quickly access, analyze, and report on their data across their Enterprise in order to be agile in a competitive market. Data is becoming more of an asset to companies; it adds value to a business, but may be stored in any number of current and legacy systems, making it difficult to realize its full potential. Known as big data, it has until recently been nearly impossible to perform real-time business analysis on the combined data from multiple sources. Nowadays, the ability to access all transactional data with low latency is essential. With the introduction of products such as Apache Hadoop, integration of structured data from an RDBMS, including semi-structured and unstructured data, offers a common playing field to support business intelligence. When coupled with ODI, GoldenGate for big data provides real-time delivery to a suite of Apache products, such as Flume, HDFS, Hive, and Hbase, to support big data analytics. Summary In this article, we have learned an introduction to Oracle GoldenGate by describing the key components, processes, and considerations required to build and implement a GoldenGate solution. Resources for Article: Further resources on this subject: What is Oracle Public Cloud? [Article] Oracle GoldenGate- Advanced Administration Tasks - I [Article] Oracle B2B Overview [Article]


Bayesian Network Fundamentals

Packt
10 Aug 2015
25 min read
In this article by Ankur Ankan and Abinash Panda, the authors of Mastering Probabilistic Graphical Models Using Python, we'll cover the basics of random variables, probability theory, and graph theory. We'll also see the Bayesian models and the independencies in Bayesian models. A graphical model is essentially a way of representing a joint probability distribution over a set of random variables in a compact and intuitive form. There are two main types of graphical models, namely directed and undirected. We generally use a directed model, also known as a Bayesian network, when we mostly have a causal relationship between the random variables. Graphical models also give us tools to operate on these models to find conditional and marginal probabilities of variables, while keeping the computational complexity under control. (For more resources related to this topic, see here.)

Probability theory

To understand the concepts of probability theory, let's start with a real-life situation. Let's assume we want to go for an outing on a weekend. There are a lot of things to consider before going: the weather conditions, the traffic, and many other factors. If the weather is windy or cloudy, then it is probably not a good idea to go out. However, even if we have information about the weather, we cannot be completely sure whether to go or not; hence we have used the words probably or maybe. Similarly, if it is windy in the morning (or at the time we took our observations), we cannot be completely certain that it will be windy throughout the day. The same holds for cloudy weather; it might turn out to be a very pleasant day. Further, we are not completely certain of our observations. There are always some limitations in our ability to observe; sometimes, these observations could even be noisy. In short, uncertainty or randomness is the innate nature of the world. Probability theory provides us the necessary tools to study this uncertainty. It helps us look into options that are unlikely yet probable.

Random variable

Probability deals with the study of events. From our intuition, we can say that some events are more likely than others, but to quantify the likelihood of a particular event, we require probability theory. It helps us predict the future by assessing how likely the outcomes are. Before going deeper into probability theory, let's first get acquainted with the basic terminology and definitions. A random variable is a way of representing an attribute of the outcome. Formally, a random variable X is a function that maps a possible set of outcomes Ω to some set E, which is represented as follows:

X : Ω → E

As an example, let us consider the outing example again. To decide whether to go or not, we may consider the skycover (to check whether it is cloudy or not). Skycover is an attribute of the day. Mathematically, the random variable skycover (X) is interpreted as a function that maps the day (Ω) to its skycover values (E). So when we say the event X = 40.1, it represents the set of all the days {ω} such that X(ω) = 40.1, where X is the mapping function. Formally speaking, the event X = 40.1 denotes the set {ω : X(ω) = 40.1}. Random variables can either be discrete or continuous. A discrete random variable can only take a finite number of values. For example, the random variable representing the outcome of a coin toss can take only two values, heads or tails, and hence it is discrete. Whereas, a continuous random variable can take an infinite number of values.
For example, a variable representing the speed of a car can take any value within a continuous range. For any event whose outcome is represented by some random variable (X), we can assign some value to each of the possible outcomes of X, which represents how probable it is. This is known as the probability distribution of the random variable and is denoted by P(X). For example, consider a set of restaurants. Let X be a random variable representing the quality of food in a restaurant. It can take values from the set {good, bad, average}. P(X) represents the probability distribution of X; say, P(X = good) = 0.3, P(X = average) = 0.5, and P(X = bad) = 0.2. This means there is a 30 percent chance of a restaurant serving good food, a 50 percent chance of it serving average food, and a 20 percent chance of it serving bad food.

Independence and conditional independence

In most situations, we are rather more interested in looking at multiple attributes at the same time. For example, to choose a restaurant, we won't be looking just at the quality of food; we might also want to look at other attributes, such as the cost, location, size, and so on. We can have a probability distribution over a combination of these attributes as well. This type of distribution is known as a joint probability distribution. Going back to our restaurant example, let the random variable for the quality of food be represented by Q, and the cost of food be represented by C. Q can have three categorical values, namely {good, average, bad}, and C can have the values {high, low}. So, the joint distribution P(Q, C) would have probability values for all the combinations of states of Q and C. P(Q = good, C = high) will represent the probability of a pricey restaurant with good quality food, while P(Q = bad, C = low) will represent the probability of a restaurant that is less expensive with bad quality food. Let us consider another random variable representing an attribute of a restaurant, its location L. The cost of food in a restaurant is not only affected by the quality of food but also by the location (generally, a restaurant located in a very good location would be more costly as compared to a restaurant present in a not-very-good location). From our intuition, we can say that the probability of a costly restaurant located at a very good location in a city would be different (generally, more) than simply the probability of a costly restaurant, and the probability of a cheap restaurant located at a prime location of the city would be different (generally, less) than simply the probability of a cheap restaurant. Formally speaking, P(C = high | L = good) will be different from P(C = high), and P(C = low | L = good) will be different from P(C = low). This indicates that the random variables C and L are not independent of each other. These attributes or random variables need not always be dependent on each other. For example, the quality of food doesn't depend upon the location of the restaurant. So, P(Q = good | L = good) or P(Q = good | L = bad) would be the same as P(Q = good); that is, our estimate of the quality of food of the restaurant will not change even if we have knowledge of its location. Hence, these random variables are independent of each other. In general, random variables X1, X2, ..., Xn can be considered independent of each other if:

P(X1, X2, ..., Xn) = P(X1) P(X2) ... P(Xn)

They may also be considered independent if conditioning on any one of them leaves the distributions of the others unchanged, that is:

P(Xi | Xj) = P(Xi) for all i ≠ j

We can easily derive this conclusion.
We know the following from the chain rule of probability:

P(X, Y) = P(X) P(Y | X)

If Y is independent of X, that is, if X _|_ Y, then P(Y | X) = P(Y). Then:

P(X, Y) = P(X) P(Y)

Extending this result to multiple variables, we can easily get to the conclusion that a set of random variables are independent of each other if their joint probability distribution is equal to the product of the probabilities of each individual random variable. Sometimes, the variables might not be independent of each other. To make this clearer, let's add another random variable, the number of people visiting the restaurant, N. Let's assume that, from our experience, we know the number of people visiting only depends on the cost of food at the restaurant and its location (generally, fewer people visit costly restaurants). Does the quality of food Q affect the number of people visiting the restaurant? To answer this question, let's look into the random variables affecting N: cost C and location L. As C is directly affected by Q, we can conclude that Q affects N. However, let's consider a situation when we know that the restaurant is costly, that is, C = high, and let's ask the same question, "does the quality of food affect the number of people coming to the restaurant?". The answer is no. The number of people coming only depends on the price and location, so if we know that the cost is high, then we can easily conclude that fewer people will visit, irrespective of the quality of food. Hence, P(N | Q, C, L) = P(N | C, L); that is, N is independent of Q given C and L. This type of independence is called conditional independence.

Installing tools

Let's now see some coding examples using pgmpy to represent joint distributions and independencies. Here, we will mostly work with IPython and pgmpy (and a few other libraries) for coding examples. So, before moving ahead, let's get a basic introduction to these.

IPython

IPython is a command shell for interactive computing in multiple programming languages, originally developed for the Python programming language, which offers enhanced introspection, rich media, additional shell syntax, tab completion, and a rich history. IPython provides the following features:

- Powerful interactive shells (terminal and Qt-based)
- A browser-based notebook with support for code, text, mathematical expressions, inline plots, and other rich media
- Support for interactive data visualization and use of GUI toolkits
- Flexible and embeddable interpreters to load into one's own projects
- Easy-to-use and high-performance tools for parallel computing

You can install IPython using the following command:

>>> pip3 install ipython

To start the IPython command shell, you can simply type ipython3 in the terminal. For more installation instructions, you can visit http://ipython.org/install.html.

pgmpy

pgmpy is a Python library to work with Probabilistic Graphical Models. As it's currently not on PyPI, we will need to build it manually. You can get the source code from the Git repository using the following command:

>>> git clone https://github.com/pgmpy/pgmpy

Now, cd into the cloned directory, switch to the branch used in this article, and build it with the following commands:

>>> cd pgmpy
>>> git checkout book/v0.1
>>> sudo python3 setup.py install

For more installation instructions, you can visit http://pgmpy.org/install.html. With both IPython and pgmpy installed, you should now be able to run the examples.

Representing independencies using pgmpy

To represent independencies, pgmpy has two classes, namely IndependenceAssertion and Independencies.
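Before we look at these classes, the factorization definition from the previous section can be checked numerically in plain Python. The following is a minimal sketch using made-up probabilities for two coin tosses; it only illustrates the product rule and is not part of pgmpy:

# Joint distribution over two coin tosses, stored as a dictionary.
# For two fair, independent coins, every joint probability equals
# the product of the corresponding marginals.
joint = {('H', 'H'): 0.25, ('H', 'T'): 0.25,
         ('T', 'H'): 0.25, ('T', 'T'): 0.25}

# Marginals obtained by summing the joint over the other variable.
p_x = {x: sum(p for (a, _), p in joint.items() if a == x) for x in ('H', 'T')}
p_y = {y: sum(p for (_, b), p in joint.items() if b == y) for y in ('H', 'T')}

# X and Y are independent if P(X, Y) = P(X) * P(Y) for every combination.
print(all(abs(joint[x, y] - p_x[x] * p_y[y]) < 1e-9 for x, y in joint))
# True

Changing any one of the joint values so that it no longer equals the product of the marginals makes the check print False.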
The IndependenceAssertion class is used to represent individual assertions of the form (X _|_ Y) or (X _|_ Y | Z). Let's see some code to represent assertions:

# Firstly we need to import IndependenceAssertion
In [1]: from pgmpy.independencies import IndependenceAssertion

# Each assertion is in the form of [X, Y, Z] meaning X is
# independent of Y given Z.
In [2]: assertion1 = IndependenceAssertion('X', 'Y')
In [3]: assertion1
Out[3]: (X _|_ Y)

Here, assertion1 represents that the variable X is independent of the variable Y. To represent conditional assertions, we just need to add a third argument to IndependenceAssertion:

In [4]: assertion2 = IndependenceAssertion('X', 'Y', 'Z')
In [5]: assertion2
Out[5]: (X _|_ Y | Z)

In the preceding example, assertion2 represents (X _|_ Y | Z), that is, X is independent of Y given Z. IndependenceAssertion also allows us to represent assertions involving more than one variable on either side of the conditioning bar; to do this, we just need to pass a list of random variables as the corresponding argument instead of a single name. Moving on to the Independencies class, an Independencies object is used to represent a set of assertions. Often, in the case of Bayesian or Markov networks, we have more than one assertion corresponding to a given model, and to represent these independence assertions for the models, we generally use the Independencies object. Let's take a few examples:

In [8]: from pgmpy.independencies import Independencies

# There are multiple ways to create an Independencies object, we
# could either initialize an empty object or initialize with some
# assertions.
In [9]: independencies = Independencies() # Empty object
In [10]: independencies.get_assertions()
Out[10]: []

In [11]: independencies.add_assertions(assertion1, assertion2)
In [12]: independencies.get_assertions()
Out[12]: [(X _|_ Y), (X _|_ Y | Z)]

We can also directly initialize Independencies in these two ways:

In [13]: independencies = Independencies(assertion1, assertion2)
In [14]: independencies = Independencies(['X', 'Y'],
                                         ['A', 'B', 'C'])
In [15]: independencies.get_assertions()
Out[15]: [(X _|_ Y), (A _|_ B | C)]

Representing joint probability distributions using pgmpy

We can also represent joint probability distributions using pgmpy's JointProbabilityDistribution class. Let's say we want to represent the joint distribution over the outcomes of tossing two fair coins. So, in this case, the probability of each of the possible outcomes would be 0.25, which is shown as follows:

In [16]: from pgmpy.factors import JointProbabilityDistribution as Joint
In [17]: distribution = Joint(['coin1', 'coin2'],
                              [2, 2],
                              [0.25, 0.25, 0.25, 0.25])

Here, the first argument is the list of names of the random variables. The second argument is a list of the number of states of each random variable. The third argument is a list of probability values, assuming that the first variable changes its states the slowest.
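To see how this flat list of values lines up with the combinations of states, the implied ordering can be reproduced with itertools.product, which varies the last variable fastest. This is only an illustrative sketch of the ordering convention, not pgmpy code:

from itertools import product

# coin1 is listed first, so it changes its states the slowest;
# coin2 cycles through all of its states for each state of coin1.
states = list(product(['coin1_0', 'coin1_1'], ['coin2_0', 'coin2_1']))
values = [0.25, 0.25, 0.25, 0.25]
for (s1, s2), p in zip(states, values):
    print(s1, s2, p)
# coin1_0 coin2_0 0.25
# coin1_0 coin2_1 0.25
# coin1_1 coin2_0 0.25
# coin1_1 coin2_1 0.25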
So, the preceding distribution represents the following:

In [18]: print(distribution)
+---------+---------+------------------+
| coin1   | coin2   |  P(coin1,coin2)  |
+---------+---------+------------------+
| coin1_0 | coin2_0 |  0.2500          |
+---------+---------+------------------+
| coin1_0 | coin2_1 |  0.2500          |
+---------+---------+------------------+
| coin1_1 | coin2_0 |  0.2500          |
+---------+---------+------------------+
| coin1_1 | coin2_1 |  0.2500          |
+---------+---------+------------------+

We can also conduct independence queries over these distributions in pgmpy:

In [19]: distribution.check_independence('coin1', 'coin2')
Out[19]: True

Conditional probability distribution

Let's take an example to understand conditional probability better. Let's say we have a bag containing three apples and five oranges, and we want to randomly take out fruits from the bag one at a time without replacing them. Also, let the random variables X1 and X2 represent the outcomes of the first try and the second try respectively. So, as there are three apples and five oranges in the bag initially, P(X1 = apple) = 3/8 and P(X1 = orange) = 5/8. Now, let's say that in our first attempt we got an orange. Now, we cannot simply represent the probability of getting an apple or an orange in our second attempt. The probabilities in the second attempt will depend on the outcome of our first attempt, and therefore we use conditional probability to represent such cases. In the second attempt, we will have the following probabilities that depend on the outcome of our first try: P(X2 = apple | X1 = orange) = 3/7, P(X2 = orange | X1 = orange) = 4/7, P(X2 = apple | X1 = apple) = 2/7, and P(X2 = orange | X1 = apple) = 5/7. The Conditional Probability Distribution (CPD) of two variables X1 and X2 can be represented as P(X2 | X1), representing the probability of X2 given X1, that is, the probability of X2 after the event X1 has occurred and we know its outcome. Similarly, we can have P(X1 | X2), representing the probability of X1 after having an observation for X2. The simplest representation of a CPD is the tabular CPD. In a tabular CPD, we construct a table containing all the possible combinations of different states of the random variables and the probabilities corresponding to these states. Let's consider the earlier restaurant example. Let's begin by representing the marginal distribution of the quality of food with Q. As we mentioned earlier, it can be categorized into three values {good, bad, average}. For example, P(Q) can be represented in the tabular form as follows:

Quality   P(Q)
Good      0.3
Normal    0.5
Bad       0.2

Similarly, let's say P(L) is the probability distribution of the location of the restaurant. Its CPD can be represented as follows:

Location   P(L)
Good       0.6
Bad        0.4

As the cost of the restaurant C depends on both the quality of food Q and its location L, we will be considering P(C | Q, L), which is the conditional distribution of C given Q and L:

Location   Good                     Bad
Quality    Good   Normal   Bad      Good   Normal   Bad
Cost
High       0.8    0.6      0.1      0.6    0.6      0.05
Low        0.2    0.4      0.9      0.4    0.4      0.95

Representing CPDs using pgmpy

Let's first see how to represent the tabular CPD using pgmpy for variables that have no conditional variables:

In [1]: from pgmpy.factors import TabularCPD

# For creating a TabularCPD object we need to pass three
# arguments: the variable name, its cardinality (that is, the number
# of states of the random variable), and the probability values
# corresponding to each state.
In [2]: quality = TabularCPD(variable='Quality',
                             variable_card=3,
                             values=[[0.3], [0.5], [0.2]])
In [3]: print(quality)
+----------------+-----+
| ['Quality', 0] | 0.3 |
+----------------+-----+
| ['Quality', 1] | 0.5 |
+----------------+-----+
| ['Quality', 2] | 0.2 |
+----------------+-----+

In [4]: quality.variables
Out[4]: OrderedDict([('Quality', [State(var='Quality', state=0),
                                  State(var='Quality', state=1),
                                  State(var='Quality', state=2)])])

In [5]: quality.cardinality
Out[5]: array([3])

In [6]: quality.values
Out[6]: array([0.3, 0.5, 0.2])

You can see here that the values of the CPD are a 1D array instead of the 2D array that you passed as an argument. Actually, pgmpy internally stores the values of the TabularCPD as a flattened numpy array.

In [7]: location = TabularCPD(variable='Location',
                              variable_card=2,
                              values=[[0.6], [0.4]])
In [8]: print(location)
+-----------------+-----+
| ['Location', 0] | 0.6 |
+-----------------+-----+
| ['Location', 1] | 0.4 |
+-----------------+-----+

However, when we have conditional variables, we also need to specify them and the cardinality of those variables. Let's define the TabularCPD for the cost variable:

In [9]: cost = TabularCPD(variable='Cost',
                          variable_card=2,
                          values=[[0.8, 0.6, 0.1, 0.6, 0.6, 0.05],
                                  [0.2, 0.4, 0.9, 0.4, 0.4, 0.95]],
                          evidence=['Quality', 'Location'],
                          evidence_card=[3, 2])

Graph theory

The second major framework for the study of probabilistic graphical models is graph theory. Graphs are the skeleton of PGMs, and are used to compactly encode the independence conditions of a probability distribution.

Nodes and edges

The foundation of graph theory was laid by Leonhard Euler when he solved the famous Seven Bridges of Konigsberg problem. The city of Konigsberg was set on both sides of the Pregel river and included two islands that were connected and maintained by seven bridges. The problem was to find a walk that crosses each of the bridges exactly once. To visualize the problem, let's think of the graph in Fig 1.1:

Fig 1.1: The Seven Bridges of Konigsberg graph

Here, the nodes a, b, c, and d represent the land, and are known as the vertices of the graph. The line segments ab, bc, cd, da, ab, and bc connecting the land parts are the bridges and are known as the edges of the graph. So, we can think of the problem of crossing all the bridges once in a single walk as tracing along all the edges of the graph without lifting our pencils. Formally, a graph G = (V, E) is an ordered pair of finite sets. The elements of the set V are known as the nodes or the vertices of the graph, and the elements of E are the edges or the arcs of the graph. The number of nodes, or the cardinality of G, denoted by |V|, is known as the order of the graph. Similarly, the number of edges, denoted by |E|, is known as the size of the graph. Here, we can see that the Konigsberg city graph shown in Fig 1.1 is of order 4 and size 7. In a graph, we say that two vertices u, v ∈ V are adjacent if (u, v) ∈ E. In the City graph, all the four vertices are adjacent to each other because there is an edge for every possible combination of two vertices in the graph. Also, for a vertex v ∈ V, we define the neighbor set of v as {u ∈ V | (u, v) ∈ E}.
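As a quick illustration, the neighbor sets of the City graph can be computed from its edge list in plain Python. The edge list below is reconstructed from the description of the seven bridges, so treat it as an approximation of Fig 1.1 rather than an exact copy of the figure:

# The seven bridges as an undirected multigraph edge list; duplicate
# pairs represent parallel bridges (reconstructed from the text).
edges = [('a', 'b'), ('a', 'b'), ('b', 'c'), ('b', 'c'),
         ('c', 'd'), ('d', 'a'), ('b', 'd')]

def neighbors(vertex, edge_list):
    """Return the set of vertices adjacent to the given vertex."""
    adjacent = set()
    for u, v in edge_list:
        if u == vertex:
            adjacent.add(v)
        elif v == vertex:
            adjacent.add(u)
    return adjacent

print(neighbors('c', edges))   # {'b', 'd'}
print(neighbors('d', edges))   # {'a', 'b', 'c'}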
In the City graph, we can see that b and d are neighbors of c. Similarly, a, b, and c are neighbors of d. We define an edge to be a self loop if the start vertex and the end vertex of the edge are the same. More formally, any edge of the form (u, u), where u ∈ V, is a self loop. Until now, we have been talking only about graphs whose edges don't have a direction associated with them, which means that the edge (u, v) is the same as the edge (v, u). These types of graphs are known as undirected graphs. Similarly, we can think of a graph whose edges have a sense of direction associated with them. For these graphs, the edge set E would be a set of ordered pairs of vertices. These types of graphs are known as directed graphs. In the case of a directed graph, we also define the indegree and outdegree for a vertex. For a vertex v ∈ V, we define its outdegree as the number of edges originating from the vertex v, that is, the number of edges of the form (v, u) in E. Similarly, the indegree is defined as the number of edges that end at the vertex v, that is, the number of edges of the form (u, v) in E.

Walk, paths, and trails

For a graph G = (V, E) and u, v ∈ V, we define a u - v walk as an alternating sequence of vertices and edges, starting with u and ending with v. In the City graph of Fig 1.1, an example of an a - d walk is a, (a, b), b, (b, c), c, (c, d), d. If there aren't multiple edges between the same vertices, then we simply represent a walk by a sequence of vertices. As in the case of the Butterfly graph shown in Fig 1.2, we can have a walk W : a, c, d, c, e:

Fig 1.2: Butterfly graph, an undirected graph

A walk with no repeated edges is known as a trail. For example, the walk a, b, c, d in the City graph is a trail. Also, a walk with no repeated vertices, except possibly the first and the last, is known as a path. The walk a, b, c, d is also a path, as no vertex is repeated. Also, a graph is known as cyclic if there are one or more paths that start and end at the same node. Such paths are known as cycles. Similarly, if there are no cycles in a graph, it is known as an acyclic graph.

Bayesian models

In most real-life cases, when we are representing or modeling some event, we are dealing with a lot of random variables. Even if we consider all the random variables to be discrete, there would still be an exponentially large number of values in the joint probability distribution. Dealing with such a huge amount of data would be computationally expensive (and in some cases, even intractable), and would also require a huge amount of memory to store the probability of each combination of states of these random variables. However, in most cases, many of these variables are marginally or conditionally independent of each other. By exploiting these independencies, we can reduce the number of values we need to store to represent the joint probability distribution. For instance, in the previous restaurant example, the joint probability distribution across the four random variables that we discussed (that is, quality of food Q, location of restaurant L, cost of food C, and the number of people visiting N) would require us to store 23 independent values. By the chain rule of probability, we know the following:

P(Q, L, C, N) = P(Q) P(L|Q) P(C|L, Q) P(N|C, Q, L)

Now, let us try to exploit the marginal and conditional independence between the variables to make the representation more compact. Let's start by considering the independency between the location of the restaurant and the quality of food over there. As both of these attributes are independent of each other, P(L|Q) would be the same as P(L).
Therefore, we need to store only one parameter to represent it. From the conditional independence that we saw earlier, we know that (N _|_ Q | C, L). Thus, P(N|C, Q, L) would be the same as P(N|C, L), needing only four parameters. Therefore, we now need only 2 + 1 + 6 + 4 = 13 parameters to represent the whole distribution. We can conclude that exploiting independencies helps in the compact representation of the joint probability distribution. This forms the basis for the Bayesian network.

Representation

A Bayesian network is represented by a Directed Acyclic Graph (DAG) and a set of Conditional Probability Distributions (CPDs) in which:

- The nodes represent random variables
- The edges represent dependencies
- For each of the nodes, we have a CPD

In our previous restaurant example, the nodes would be as follows:

- Quality of food (Q)
- Location (L)
- Cost of food (C)
- Number of people (N)

As the cost of food is dependent on the quality of food (Q) and the location of the restaurant (L), there will be an edge each from Q → C and L → C. Similarly, as the number of people visiting the restaurant depends on the price of food and its location, there would be an edge each from L → N and C → N. The resulting structure of our Bayesian network is shown in Fig 1.3:

Fig 1.3: Bayesian network for the restaurant example

Factorization of a distribution over a network

Each node in our Bayesian network for restaurants has a CPD associated with it. For example, the CPD for the cost of food in the restaurant is P(C|Q, L), as it only depends on the quality of food and the location. For the number of people, it would be P(N|C, L). So, we can generalize that the CPD associated with each node would be P(node | Par(node)), where Par(node) denotes the parents of the node in the graph. Assuming some probability values, we will finally get a network as shown in Fig 1.4:

Fig 1.4: Bayesian network of restaurant along with CPDs

Let us go back to the joint probability distribution of all these attributes of the restaurant again. Considering the independencies among the variables, we concluded as follows:

P(Q, C, L, N) = P(Q) P(L) P(C|Q, L) P(N|C, L)

So now, looking into the Bayesian network (BN) for the restaurant, we can say that for any Bayesian network, the joint probability distribution P(X1, X2, ..., Xn) over all its random variables {X1, X2, ..., Xn} can be represented as follows:

P(X1, X2, ..., Xn) = P(X1 | Par(X1)) P(X2 | Par(X2)) ... P(Xn | Par(Xn))

This is known as the chain rule for Bayesian networks. Also, we say that a distribution P factorizes over a graph G if P can be encoded as follows:

P(X1, X2, ..., Xn) = P(X1 | ParG(X1)) P(X2 | ParG(X2)) ... P(Xn | ParG(Xn))

Here, ParG(X) denotes the parents of X in the graph G.

Summary

In this article, we saw how we can represent a complex joint probability distribution using a directed graph and a conditional probability distribution associated with each node, which is collectively known as a Bayesian network.

Resources for Article:

Further resources on this subject:
Web Scraping with Python [article]
Exact Inference Using Graphical Models [article]
wxPython: Design Approaches and Techniques [article]


Cross-platform Building

Packt
10 Aug 2015
11 min read
In this article by Karan Sequeira, author of the book Cocos2d-x Game Development Blueprints, we'll leverage the awesome aspect of Cocos2d-x to build one of our games on Android and Windows Phone 8! (For more resources related to this topic, see here.) Setting up the environment for Android At this point in the timeline of technological evolution, Android needs no introduction. This mobile operating system was acquired by Google, and it has reached far and wide across the globe. It is now one of the top choices for application developers and game developers. With octa-core CPUs and ever-powerful GPUs, the sheer power offered by Android devices is a motivating factor! While setting up the environment for Android, you have more choices than any other mobile development platform. Your workstation could be running any of the three major operating systems (Windows, Mac OS, or Linux) and you would be able to build to Android just fine. Since Android is not fussy about its build environment, developers mostly choose their work environment based on which other platforms they will be developing for. As such, you might choose to build for Android on a machine running Mac OS since you would be able to build for iOS and Android on the same machine. The same applies for a machine running Windows as well. You would be able to build for both Android and Windows Phone. Although building for Windows Phone 8 requires you to have at least Windows 8 installed. We will discuss more on that later. Let's begin listing down the various software required to set up the environment for Android. Java Development Kit 7+ Since you already know that Java is the programming language used within the Android SDK, you must ensure that you have the environment set up to compile and run Java files. So go ahead and download the Java Development Kit (JDK)version 6 or later. You can download and install a Standard Edition (SE) version from the page available at the following link: http://www.oracle.com/technetwork/java/javase/downloads/index.html Mac OS comes with JDK installed and as such, you won't have to follow this step if you're setting up your development environment on a Mac. The Android SDK Once you've downloaded JDK, it's time to download the Android SDK from the following URL: http://developer.android.com/sdk/index.html If you're installing the Android SDK on Windows, a custom installer is provided that will take care of downloading and setting up the required parts of the Android SDK for you. For other operating systems, you can choose to download the respective archive files and extract them at the location of your choice. Eclipse or the ADT bundle Eclipse is the most commonly used IDE when it comes to Android application development. You can choose to download a standard Eclipse IDE for Java developers and then install the ADT plugin into Eclipse, or you can download the ADT bundle, which is a specialized version of Eclipse with the ADT plugin preinstalled. At the time of writing this article, the Android developer site had already deprecated ADT in favor of Android Studio. As such, we will choose the former approach for setting up our environment in Eclipse. You can download and install the standard Eclipse IDE for Java Developers for your specific machine from the following URL: http://www.eclipse.org/downloads/ ADT plugin for Eclipse Once you've downloaded Eclipse, you must now install a custom plugin for Eclipse: Android Development Tools (ADT). 
Visit the following URL and follow the detailed instructions that will help you install the ADT plugin into Eclipse: http://developer.android.com/sdk/installing/installing-adt.html Once you've followed the instructions on the preceding page, you will need to inform Eclipse about the location of the Android SDK that you downloaded earlier. So, open up the Preferences page for Eclipse and go to the location where you've placed the Android SDK in the Android section. With that done, we can now fire up the SDK Manager to install a few more necessary pieces of software. To launch the Android SDK Manager, select Android SDK Manager from the Windows menu in Eclipse. The resultant window should look something like this: By default, you will see a whole lot of packages selected, out of which Android SDK Platform-tools and Android SDK Build-tools are necessary. From the rest, you must select at least one of the target Android platforms. An additional package will be required if you're target environment is Windows: Google USB Driver. It is located under the Extras list. I would suggest skipping downloading the documentation and samples. If you already have an Android device, I would go one step further and suggest you skip downloading the system images as well. However, if you don't have an Android device, you will need at least one system image so that you can at least test on an emulator. Once you've chosen from the various platforms needed, proceed to install the packages and you get a window like this: Now, you must select Accept License and click on the Install button to install the respective packages. Once these packages have been installed, you have to add their locations to the path variable on your respective machines. For Windows, modify your path variable (go to Properties | Advance Settings | Environment Variables) to include the following: ;E:Androidandroid-sdkplatform-tools For Mac OS, you can add the following line to the .bash_profile file found under the home directory: export PATH=$PATH:/Android/android-sdk/platform-tools/ The preceding line can also be added to the .bash_rc file found under the home directory on your Linux machine. At this point, you can use Eclipse for Android development. Installing Cygwin for Windows Developers working on Linux can skip this step as most Linux distributions come with the make utility. Also, developers working on Mac OS may download Xcode from the Mac App Store, which will install the make utility on their respective Macs. We need to install Cygwin on Windows specifically for the GNU make utility. So, go to the following URL and download the installer for Cygwin: http://www.cygwin.com/install.html Once you've run the .exe file that you downloaded and get a window like this, click on the Next button: The next window will ask how you would like to install the required packages. Here, select option Install from Internet and click on Next: The next window will ask where you would like to install Cygwin. I'd recommend leaving it at the default value unless you have a reason to change it. Proceed by clicking on Next. In the next window, you will be asked to specify a path where the installation can download the files it requires. You can fill in a suitable path of your choice in the box and click on Next. In the next window, you will be asked to specify your Internet connection. Leave it at the Direct Connection option and click on Next. In the next window, you will be asked to select a mirror location from where to download the installation files. 
Here, select the site that is geographically closest to you and click on Next. In the window that follows, expand the Devel section and search for make: The GNU version of the 'make' utility. Click on the Skip option to select this package. The version of the make utility that will be installed is now displayed in place of Skip. Your window should look something like this:

You can now go ahead and click the Next button to begin the download and installation of the required packages. The window should look something like this:

Once all the packages have been downloaded, click on Finish to close the installation. Now that we have the make utility installed, we can go ahead and download the Android NDK, which will actually build our entire C++ code base.

The Android NDK

To download the Android NDK for your respective development machine, navigate to the following URL:

https://developer.android.com/tools/sdk/ndk/index.html

Unzip the downloaded archive and place it in the same location as the Android SDK. We must now add an environment variable named NDK_ROOT that points to the root of the Android NDK. For Windows, add a new user variable NDK_ROOT with the location of the Android NDK on your filesystem as its value. You can do this by going to Properties | Advanced Settings | Environment Variables. Once you've done that, the Environment Variables window should look something like this:

I'm sure you noticed the value of the NDK_ROOT variable in the previous screenshot. The value of this variable is given in Unix style and depends on the Cygwin environment, since it will be accessed within a Cygwin bash shell while executing the build script for each Android project. Mac OS and Linux users can add the following line to their .bash_profile and .bashrc files, respectively:

export NDK_ROOT=/Android/android-ndk-r10

We have now successfully completed setting up the environment to build our Cocos2d-x games on Android. To test this, open up a Cygwin bash terminal (for Windows) or a standard terminal (for Mac OS or Linux) and navigate to the Cocos2d-x test bed located inside the samples folder of your Cocos2d-x source. Now, navigate to the proj.android folder and run the build_native.sh file. This is what my Cygwin bash terminal looks like on a Windows 7 machine:

If you've followed the aforementioned instructions correctly, the build_native.sh script will then go on to compile the C++ source files required by the TestCpp project and will result in a single shared object (.so) file in the libs folder within the proj.android folder.

Creating an Android Virtual Device

We're close to running the game, but we need to create an Android Virtual Device (AVD) before we proceed. Open up the Android Virtual Device Manager from the Window menu and click on Create. In the next window, fill in the required details as per your requirements and configuration and click OK. This is what my window looks like with everything filled in:

From the Android Virtual Device Manager window, select the newly created AVD and click on Start to boot it.

Building the tests on Android

With an Android Virtual Device ready to run our project, let's begin by first importing the project into Eclipse. Within Eclipse, select File | Import.... In the following window, select Existing Projects into Workspace under the General section and click on Next:

In the next window, browse to the proj.android folder under the cocos2d-x-2.2.5\samples\Cpp\TestCpp path and click on Finish:

Once imported, you can find the TestCpp project under Package Explorer.
It should look something like this:

As you can see, there are a few errors with the project. If you look at the Problems view (Window | Show View | Problems) located on the bottom half of Eclipse, you might see something like this:

All these errors are due to the fact that the Android project for our game depends on Cocos2d-x's Android project for Android-specific functionality, things such as the actual OpenGL surface where everything is rendered, the music player, accelerometer functionality, and many more. So let's import the Android project for Cocos2d-x located inside the following path in your Cocos2d-x source bundle:

cocos2d-x-2.2.5\cocos2dx\platform\android

You can import it the same way you imported TestCpp. Once the project has been imported, it will be titled libcocos2dx in Package Explorer. Now, select Clean... from the Project menu; you will notice that when the clean operation has finished, the TestCpp dependency on libcocos2dx is taken care of and the TestCpp project builds error-free.

Running the tests on Android

Running the tests is as simple as right-clicking on the TestCpp project in Package Explorer and selecting Run As | Android Application. It might take a bit more time running on an emulator as compared to an actual device, but ultimately you will have something like this:

Summary

In this article, you learned which software components are needed to set up your workstation to build and run an Android native application. You also set up an Android Virtual Device and ran the Cocos2d-x test bed application on it.

Resources for Article: Further resources on this subject: Run Xcode Run [article] Creating Games with Cocos2d-x is Easy and 100 percent Free [article] Creating Cool Content [article]


Controls and Widgets

Packt
10 Aug 2015
25 min read
In this article by Chip Lambert and Shreerang Patwardhan, author of the book, Mastering jQuery Mobile, we will take our Civic Center application to the next level and in the process of doing so, we will explore different widgets. We will explore the touch events provided by the jQuery Mobile framework further and then take a look at how this framework interacts with third-party plugins. We will be covering the following different widgets and topics in this article: Collapsible widget Listview widget Range slider widget Radio button widget Touch events Third-party plugins HammerJs FastClick Accessibility (For more resources related to this topic, see here.) Widgets We already made use of widgets as part of the Civic Center application. "Which? Where? When did that happen? What did I miss?" Don't panic as you have missed nothing at all. All the components that we use as part of the jQuery Mobile framework are widgets. The page, buttons, and toolbars are all widgets. So what do we understand about widgets from their usage so far? One thing is pretty evident, widgets are feature-rich and they have a lot of things that are customizable and that can be tweaked as per the requirements of the design. These customizable things are pretty much the methods and events that these small plugins offer to the developers. So all in all: Widgets are feature rich, stateful plugins that have a complete lifecycle, along with methods and events. We will now explore a few widgets as discussed before and we will start off with the collapsible widget. A collapsible widget, more popularly known as the accordion control, is used to display and style a cluster of related content together to be easily accessible to the user. Let's see this collapsible widget in action. Pull up the index.html file. We will be adding the collapsible widget to the facilities page. You can jump directly to the content div of the facilities page. We will replace the simple-looking, unordered list and add the collapsible widget in its place. Add the following code in place of the <ul>...<li></li>...</ul> portion: <div data-role="collapsibleset"> <div data-role="collapsible"> <h3>Banquet Halls</h3> <p>List of banquet halls will go here</p> </div> <div data-role="collapsible"> <h3>Sports Arena</h3> <p>List of sports arenas will go here</p> </div> <div data-role="collapsible">    <h3>Conference Rooms</h3> <p>List of conference rooms will come here</p> </div> <div data-role="collapsible"> <h3>Ballrooms</h3> <p>List of ballrooms will come here</p> </div> </div> That was pretty simple. As you must have noticed, we are creating a group of collapsibles defined by div with data-role="collapsibleset". Inside this div, we have multiple div elements each with data-role of "collapsible". These data roles instruct the framework to style div as a collapsible. Let's break individual collapsibles further. Each collapsible div has to have a heading tag (h1-h6), which acts as the title for that collapsible. This heading can be followed by any HTML structure that is required as per your application's design. In our application, we added a paragraph tag with some dummy text for now. We will soon be replacing this text with another widget—listview. Before we proceed to look at how we will be doing this, let's see what the facilities page is looking like right now: Now let's take a look at another widget that we will include in our project—the listview widget. The listview widget is a very important widget from the mobile website stand point. 
The listview widget is highly customizable and can play an important role in the navigation system of your web application as well. In our application, we will include listview within the collapsible div elements that we have just created. Each collapsible will hold the relevant list items which can be linked to a detailed page for each item. Without further discussion, let's take a look at the following code. We have replaced the contents of the first collapsible list item within the paragraph tag with the code to include the listview widget. We will break up the code and discuss the minute details later: <div data-role="collapsible"> <h3>Banquet Halls</h3> <p> <span>We have 3 huge banquet halls named after 3 most celebrated Chef's from across the world.</span> <ul data-role="listview" data-inset="true"> <li> <a href="#">Gordon Ramsay</a> </li> <li> <a href="#">Anthony Bourdain</a> </li> <li> <a href="#">Sanjeev Kapoor</a> </li> </ul> </p> </div> That was pretty simple, right? We replaced the dummy text from the paragraph tag with a span that has some details concerning what that collapsible list is about, and then we have an unordered list with data-role="listview" and some property called data-inset="true". We have seen several data-roles before, and this one is no different. This data-role attribute informs the framework to style the unordered list, such as a tappable button, while a data-inset property informs the framework to apply the inset appearance to the list items. Without this property, the list items would stretch from edge to edge on the mobile device. Try setting the data-inset property to false or removing the property altogether. You will see the results for yourself. Another thing worth noticing in the preceding code is that we have included an anchor tag within the li tags. This anchor tag informs the framework to add a right arrow icon on the extreme right of that list item. Again, this icon is customizable, along with its position and other styling attributes. Right now, our facilities page should appear as seen in the following image: We will now add similar listview widgets within the remaining three collapsible items. The content for the next collapsible item titled Sports Arena should be as follows. Once added, this collapsible item, when expanded, should look as seen in the screenshot that follows the code: <div data-role="collapsible">    <h3>Sports Arena</h3>    <p>        <span>We have 3 huge sport arenas named after 3 most celebrated sport personalities from across the world.       </span>        <ul data-role="listview" data-inset="true">            <li>                <a href="#">Sachin Tendulkar</a>            </li>            <li>                <a href="#">Roger Federer</a>            </li>            <li>                <a href="#">Usain Bolt</a>            </li>        </ul>    </p> </div> The code for the listview widgets that should be included in the next collapsible item titled Conference Rooms. Once added, this collapsible, item when expanded, should look as seen in the image that follows the code: <div data-role="collapsible">    <h3>Conference Rooms</h3>    <p>        <span>            We have 3 huge conference rooms named after 3 largest technology companies.        
</span>        <ul data-role="listview" data-inset="true">            <li>                <a href="#">Google</a>            </li>            <li>                <a href="#">Twitter</a>            </li>            <li>                <a href="#">Facebook</a>            </li>        </ul>    </p> </div> The final collapsible list item – Ballrooms – should hold the following code, to include its share of the listview items: <div data-role="collapsible">    <h3>Ballrooms</h3>    <p>        <span>            We have 3 huge ball rooms named after 3 different dance styles from across the world.        </span>        <ul data-role="listview" data-inset="true">            <li>                <a href="#">Ballet</a>            </li>            <li>                <a href="#">Kathak</a>            </li>            <li>                <a href="#">Paso Doble</a>            </li>        </ul>    </p> </div> After adding these listview items, our facilities page should look as seen in the following image: The facilities page now looks much better than it did earlier, and we now understand a couple more very important widgets available in jQuery Mobile—the collapsible widget and the listview Widget. We will now explore two form widgets – slider widget and the radio buttons widget. For this, we will be enhancing our catering page. Let's build a simple tool that will help the visitors of this site estimate the food expense based on the number of guests and the type of cuisine that they choose. Let's get started then. First, we will add the required HTML, to include the slider widget and the radio buttons widget. Scroll down to the content div of the catering page, where we have the paragraph tag containing some text about the Civic Center's catering services. Add the following code after the paragraph tag: <form>    <label style="font-weight: bold; padding: 15px 0px;" for="slider">Number of guests</label>    <input type="range" name="slider" id="slider" data-highlight="true" min="50" max="1000" value="50">    <fieldset data-role="controlgroup" id="cuisine-choices">        <legend style="font-weight: bold; padding: 15px 0px;">Choose your cuisine</legend>        <input type="radio" name="cuisine-choice" id="cuisine-choice-cont" value="15" checked="checked" />        <label for="cuisine-choice-cont">Continental</label>        <input type="radio" name="cuisine-choice" id="cuisine-choice-mex" value="12" />        <label for="cuisine-choice-mex">Mexican</label>        <input type="radio" name="cuisine-choice" id="cuisine-choice-ind" value="14" />        <label for="cuisine-choice-ind">Indian</label>    </fieldset>    <p>        The approximate cost will be: <span style="font-weight: bold;" id="totalCost"></span>    </p> </form> That is not much code, but we are adding and initializing two new form widgets here. Let's take a look at the code in detail: <label style="font-weight: bold; padding: 15px 0px;" for="slider">Number of guests</label> <input type="range" name="slider" id="slider" data-highlight="true" min="50" max="1000" value="50"> We are initializing our first form widget here—the slider widget. The slider widget is an input element of the type range, which accepts a minimum value and maximum value and a default value. We will be using this slider to accept the number of guests. Since the Civic Center can cater to a maximum of 1,000 people, we will set the maximum limit to 1,000 and we expect that we have at least 50 guests, so we set a minimum value of 50. 
Since the minimum number of guests that we cater for is 50, we set the input's default value to 50. We also set the data-highlight attribute value to true, which informs the framework that the selected area on the slider should be highlighted. Next comes the group of radio buttons. The most important attribute to be considered here is the data-role="controlgroup" set on the fieldset element. Adding this data-role combines the radio buttons into one single group, which helps inform the user that one of the radio buttons is to be selected. This gives a visual indication to the user that one radio button out of the whole lot needs to be selected. The values assigned to each of the radio inputs here indicate the cost per person for that particular cuisine. This value will help us calculate the final dollar value for the number of selected guests and the type of cuisine. Whenever you are using the form widgets, make sure you have the form elements in the hierarchy as required by the jQuery Mobile framework. When the elements are in the required hierarchy, the framework can apply the required styles. At the end of the previous code snippet, we have a paragraph tag where we will populate the approximate cost of catering for the selected number of guests and the type of cuisine selected. The catering page should now look as seen in the following image. Right now, we only have the HTML widgets in place. When you drag the slider or select different radio buttons, you will only see the UI interactions of these widgets and the UI treatments that the framework applies to these widgets. However, the total cost will not be populated yet. We will need to write some JavaScript logic to determine this value, and we will take a look at this in a minute. Before moving to the JavaScript part, make sure you have all the code that is needed: Now let's take a look at the magic part of the code (read JavaScript) that is going to make our widgets usable for the visitors of this Civic Center web application. Add the following JavaScript code in the script tag at the very end of our index.html file: $(document).on('pagecontainershow', function(){    var guests = 50;    var cost = 35;    var totalCost;    $("#slider").on("slidestop", function(event, ui){        guests = $('#slider').val();        totalCost = costCal();        $("#totalCost").text("$" + totalCost);    });    $("input:radio[name=cuisine-choice]").on("click", function() {        cost = $(this).val();        var totalCost = costCal();        $("#totalCost").text("$" + totalCost);    });    function costCal(){        return guests * cost;    } }); That is a pretty small chunk of code and pretty simple too. We will be looking at a few very important events that are part of the framework and that come in very handy when developing web applications with jQuery Mobile. One of the most important things that you must have already noticed is that we are not making use of the customary $(document).on('ready', function(){ in Jquery, but something that looks as the following code: $(document).on('pagecontainershow', function(){ The million dollar question here is "why doesn't DOM already work in jQuery Mobile?" As part of jQuery, the first thing that we often learn to do is execute our jQuery code as soon as the DOM is ready, and this is identified using the $(document).ready function. 
In jQuery Mobile, pages are requested and injected into the same DOM as the user navigates from one page to another, so the DOM ready event is of limited use: it executes only for the first page. Now we need an event that executes when every page loads, and $(document).pagecontainershow is the one. The pagecontainershow event is triggered on the toPage after the transition animation has completed. The pagecontainershow event is triggered on the pagecontainer element and not on the actual page. In the function, we initialize the guests and the cost variables to 50 and 35 respectively, as the minimum number of guests we can have is 50 and the "Continental" cuisine is selected by default, which has a value of 35. We will be calculating the estimated cost when the user changes the number of guests or selects a different radio button. This brings us to the next part of our code. We need to get the value of the number of guests as soon as the user stops sliding the slider. jQuery Mobile provides us with the slidestop event for this very purpose. As soon as the user stops sliding, we get the value of the slider and then call the costCal function, which returns the number of guests multiplied by the cost of the selected cuisine per person. We then display this value in the paragraph at the bottom for the user to get an estimated cost. We will discuss more about the touch events that are available as part of the jQuery Mobile framework in the next section. When the user selects a different radio button, we retrieve the value of the selected radio button, call the costCal function again, and update the value displayed in the paragraph at the bottom of our page. If you have the code correct and your functions are all working fine, you should see something similar to the following image:

Input with touch

We will take a look at a couple of touch events, which are tap and taphold. The tap event is triggered after a quick touch, whereas the taphold event is triggered after a sustained, long press touch. The jQuery Mobile tap event is the gesture equivalent of the standard click event that is triggered on the release of the touch gesture. The following snippet of code should help you incorporate the tap event when you need to use it in your application:

$(".selector").on("tap", function(){
    console.log("tap event is triggered");
});

The jQuery Mobile taphold event triggers after a sustained, complete touch event, which is more commonly known as the long press event. The taphold event fires when the user taps and holds for a minimum of 750 milliseconds. You can also change the default value, but we will come to that in a minute. First, let's see how the taphold event is used:

$(".selector").on("taphold", function(){
    console.log("taphold event is triggered");
});

Now, to change the default value for the long press event, we need to set the value of the following property:

$.event.special.tap.tapholdThreshold

Working with plugins

A number of times, we will come across scenarios where the capabilities of the framework are just not sufficient for all the requirements of your project. In such scenarios, we have to make use of third-party plugins in our project. We will be looking at two very interesting plugins in the course of this article, but before that, you need to understand what jQuery plugins exactly are. A jQuery plugin is simply a new method that has been used to extend jQuery's prototype object.
When we include the jQuery plugin as part of our code, this new method becomes available for use within your application. When selecting jQuery plugins for your jQuery Mobile web application, make sure that the plugin is optimized for mobile devices and incorporates touch events as well, based on your requirements. The first plugin that we are going to look at today is called FastClick and is developed by FT Labs. This is an open source plugin and so can be used as part of your application. FastClick is a simple, easy-to-use library designed to eliminate the 300 ms delay between a physical tap and the firing on the click event on mobile browsers. Wait! What are we talking about? What is this 300 ms delay between tap and click? What exactly are we discussing? Sure. We understand the confusion. Let's explain this 300 ms delay issue. The click events have a 300 ms delay on touch devices, which makes web applications feel laggy on a mobile device and doesn't give users a native-like feel. If you go to a site that isn't mobile-optimized, it starts zoomed out. You have to then either pinch and zoom or double tap some content so that it becomes readable. The double-tap is a performance killer, because with every tap we have to wait to see whether it might be a double tap—and this wait is 300 ms. Here is how it plays out: touchstart touchend Wait 300ms in case of another tap click This pause of 300 ms applies to click events in JavaScript, but also other click-based interactions such as links and form controls. Most mobile web browsers out there have this 300 ms delay on the click events, but now a few modern browsers such as Chrome and FireFox for Android and iOS are removing this 300 ms delay. However, if you are supporting the older Android and iOS versions, with older mobile browsers, you might want to consider including the FastClick plugin in your application, which helps resolve this problem. Let's take a look at how we can use this plugin in any web application. First, you need to download the plugin files, or clone their GitHub repository here: https://github.com/ftlabs/fastclick. Once you have done that, include a reference to the plugin's JavaScript file in your application: <script type="application/javascript" src="path/fastclick.js"></script> Make sure that the script is loaded prior to instantiating FastClick on any element of the page. FastClick recommends you to instantiate the plugin on the body element itself. We can do this using the following piece of code: $(function){    FastClick.attach(document.body); } That is it! Your application is now free of the 300 ms click delay issue and will work as smooth as a native application. We have just provided you with an introduction to the FastClick plugin. There are several more features that this plugin provides. Make sure you visit their website—https://github.com/ftlabs/fastclick—for more details on what the plugin has to offer. Another important plugin that we will look at is HammerJs. HammerJs, again is an open source library that helps recognize gestures made by touch, mouse, and pointerEvents. Now, you would say that the jQuery Mobile framework already takes care of this, so why do we need a third-party plugin again? True, jQuery Mobile supports a variety of touch events such as tap, tap and hold, and swipe, as well as the regular mouse events, but what if in our application we want to make use of some touch gestures such as pan, pinch, rotate, and so on, which are not supported by jQuery Mobile by default? 
This is where HammerJs comes into the picture and plays nicely along with jQuery Mobile. Including HammerJS in your web application code is extremely simple and straightforward, like the FastClick plugin. You need to download the plugin files and then add a reference to the plugin JavaScript file: <script type="application/javascript" src="path/hammer.js"></script> Once you have included the plugin, you need to create a new instance on the Hammer object and then start using the plugin for all the touch gestures you need to support: var hammerPan = new Hammer(element_name, options); hammerPan.on('pan', function(){    console.log("Inside Pan event"); }); By default, Hammer adds a set of events—tap, double tap, swipe, pan, press, pinch, and rotate. The pinch and rotate recognizers are disabled by default, but can be turned on as and when required. HammerJS offers a lot of features that you might want to explore. Make sure you visit their website—http://hammerjs.github.io/ to understand the different features the library has to offer and how you can integrate this plugin within your existing or new jQuery Mobile projects. Accessibility Most of us today cannot imagine our lives without the Internet and our smartphones. Some will even argue that the Internet is the single largest revolutionary invention of all time that has touched numerous lives across the globe. Now, at the click of a mouse or the touch of your fingertip, the world is now at your disposal, provided you can use the mouse, see the screen, and hear the audio—impairments might make it difficult for people to access the Internet. This makes us wonder about how people with disabilities would use the Internet, their frustration in doing so, and the efforts that must be taken to make websites accessible to all. Though estimates vary on this, most studies have revealed that about 15% of the world's population have some kind of disability. Not all of these people would have an issue with accessing the web, but let's assume 5% of these people would face a problem in accessing the web. This 5% is also a considerable amount of users, which cannot be ignored by businesses on the web, and efforts must be taken in the right direction to make the web accessible to these users with disabilities. jQuery Mobile framework comes with built-in support for accessibility. jQuery Mobile is built with accessibility and universal access in mind. Any application that is built using jQuery Mobile is accessible via the screen reader as well. When you make use of the different jQuery Mobile widgets in your application, unknowingly you are also adding support for web accessibility into your application. jQuery Mobile framework adds all the necessary aria attributes to the elements in the DOM. Let's take a look at how the DOM looks for our facilities page: Look at the highlighted Events button in the top right corner and its corresponding HTML (also highlighted) in the developer tools. You will notice that there are a few attributes added to the anchor tag that start with aria-. We did not add any of these aria- attributes when we wrote the code for the Events button. jQuery Mobile library takes care of these things for you. The accessibility implementation is an ongoing process and the awesome developers at jQuery Mobile are working towards improving the support every new release. We spoke about aria- attributes, but what do they really represent? WAI - ARIA stands for Web Accessibility Initiative – Accessible Rich Internet Applications. 
This was a technical specification published by the World Wide Web Consortium (W3C) and basically specifies how to increase the accessibility of web pages. ARIA specifies the roles, properties, and states of a web page that make it accessible to all users. Accessibility is extremely vast, hence covering every detail of it is not possible. However, there is excellent material available on the Internet on this topic and we encourage you to read and understand this. Try to implement accessibility into your current or next project even if it is not based on jQuery Mobile. Web accessibility is an extremely important thing that should be considered, especially when you are building web applications that will be consumed by a huge consumer base—on e-commerce websites for example. Summary In this article, we made use of some of the available widgets from the jQuery Mobile framework and we built some interactivity into our existing Civic Center application. The widgets that we used included the range slider, the collapsible widget, the listview widget, and the radio button widget. We evaluated and looked at how to use two different third-party plugins—FastClick and HammerJs. We concluded the article by taking a look at the concept of Web Accessibility. Resources for Article: Further resources on this subject: Creating Mobile Dashboards [article] Speeding up Gradle builds for Android [article] Saying Hello to Unity and Android [article]

Understanding Hadoop Backup and Recovery Needs

Packt
10 Aug 2015
25 min read
In this article by Gaurav Barot, Chintan Mehta, and Amij Patel, authors of the book Hadoop Backup and Recovery Solutions, we will discuss backup and recovery needs. In the present age of information explosion, data is the backbone of business organizations of all sizes. We need a complete data backup and recovery system and a strategy to ensure that critical data is available and accessible when the organizations need it. Data must be protected against loss, damage, theft, and unauthorized changes. If disaster strikes, data recovery must be swift and smooth so that business does not get impacted. Every organization has its own data backup and recovery needs, and priorities based on the applications and systems they are using. Today's IT organizations face the challenge of implementing reliable backup and recovery solutions in the most efficient, cost-effective manner. To meet this challenge, we need to carefully define our business requirements and recovery objectives before deciding on the right backup and recovery strategies or technologies to deploy. (For more resources related to this topic, see here.) Before jumping onto the implementation approach, we first need to know about the backup and recovery strategies and how to efficiently plan them. Understanding the backup and recovery philosophies Backup and recovery is becoming more challenging and complicated, especially with the explosion of data growth and increasing need for data security today. Imagine big players such as Facebook, Yahoo! (the first to implement Hadoop), eBay, and more; how challenging it will be for them to handle unprecedented volumes and velocities of unstructured data, something which traditional relational databases can't handle and deliver. To emphasize the importance of backup, let's take a look at a study conducted in 2009. This was the time when Hadoop was evolving and a handful of bugs still existed in Hadoop. Yahoo! had about 20,000 nodes running Apache Hadoop in 10 different clusters. HDFS lost only 650 blocks, out of 329 million total blocks. Now hold on a second. These blocks were lost due to the bugs found in the Hadoop package. So, imagine what the scenario would be now. I am sure you will bet on losing hardly a block. Being a backup manager, your utmost target is to think, make, strategize, and execute a foolproof backup strategy capable of retrieving data after any disaster. Solely speaking, the plan of the strategy is to protect the files in HDFS against disastrous situations and revamp the files back to their normal state, just like James Bond resurrects after so many blows and probably death-like situations. Coming back to the backup manager's role, the following are the activities of this role: Testing out various case scenarios to forestall any threats, if any, in the future Building a stable recovery point and setup for backup and recovery situations Preplanning and daily organization of the backup schedule Constantly supervising the backup and recovery process and avoiding threats, if any Repairing and constructing solutions for backup processes The ability to reheal, that is, recover from data threats, if they arise (the resurrection power) Data protection is one of the activities and it includes the tasks of maintaining data replicas for long-term storage Resettling data from one destination to another Basically, backup and recovery strategies should cover all the areas mentioned here. 
For any system data, application, or configuration, transaction logs are mission critical, though it depends on the datasets, configurations, and applications that are used to design the backup and recovery strategies. Hadoop is all about big data processing. After gathering some exabytes for data processing, the following are the obvious questions that we may come up with: What's the best way to back up data? Do we really need to take a backup of these large chunks of data? Where will we find more storage space if the current storage space runs out? Will we have to maintain distributed systems? What if our backup storage unit gets corrupted? The answer to the preceding questions depends on the situation you may be facing; let's see a few situations. One of the situations is where you may be dealing with a plethora of data. Hadoop is used for fact-finding semantics and data is in abundance. Here, the span of data is short; it is short lived and important sources of the data are already backed up. Such is the scenario wherein the policy of not backing up data at all is feasible, as there are already three copies (replicas) in our data nodes (HDFS). Moreover, since Hadoop is still vulnerable to human error, a backup of configuration files and NameNode metadata (dfs.name.dir) should be created. You may find yourself facing a situation where the data center on which Hadoop runs crashes and the data is not available as of now; this results in a failure to connect with mission-critical data. A possible solution here is to back up Hadoop, like any other cluster (the Hadoop command is Hadoop). Replication of data using DistCp To replicate data, the distcp command writes data to two different clusters. Let's look at the distcp command with a few examples or options. DistCp is a handy tool used for large inter/intra cluster copying. It basically expands a list of files to input in order to map tasks, each of which will copy files that are specified in the source list. Let's understand how to use distcp with some of the basic examples. The most common use case of distcp is intercluster copying. Let's see an example: bash$ hadoop distcp2 hdfs://ka-16:8020/parth/ghiya hdfs://ka-001:8020/knowarth/parth This command will expand the namespace under /parth/ghiya on the ka-16 NameNode into the temporary file, get its content, divide them among a set of map tasks, and start copying the process on each task tracker from ka-16 to ka-001. The command used for copying can be generalized as follows: hadoop distcp2 hftp://namenode-location:50070/basePath hdfs://namenode-location Here, hftp://namenode-location:50070/basePath is the source and hdfs://namenode-location is the destination. In the preceding command, namenode-location refers to the hostname and 50070 is the NameNode's HTTP server post. Updating and overwriting using DistCp The -update option is used when we want to copy files from the source that don't exist on the target or have some different contents, which we do not want to erase. The -overwrite option overwrites the target files even if they exist at the source. The files can be invoked by simply adding -update and -overwrite. In the example, we used distcp2, which is an advanced version of DistCp. The process will go smoothly even if we use the distcp command. 
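To make these options concrete, here is a minimal sketch of how they could be combined with the intercluster copy shown earlier; the cluster names, port, and paths (ka-16, ka-001, /parth/ghiya, and /knowarth/parth) are carried over from that example purely for illustration and should be replaced with your own values:

bash$ hadoop distcp2 -update hdfs://ka-16:8020/parth/ghiya hdfs://ka-001:8020/knowarth/parth

bash$ hadoop distcp2 -overwrite hdfs://ka-16:8020/parth/ghiya hdfs://ka-001:8020/knowarth/parth

The first command copies only those files that are missing on the target or whose contents differ from the source, while the second replaces the files on the target even if they already exist there.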
Now, let's look at two versions of DistCp, the legacy DistCp or just DistCp and the new DistCp or the DistCp2: During the intercluster copy process, files that were skipped during the copy process have all their file attributes (permissions, owner group information, and so on) unchanged when we copy using legacy DistCp or just DistCp. This, however, is not the case in new DistCp. These values are now updated even if a file is skipped. Empty root directories among the source inputs were not created in the target folder in legacy DistCp, which is not the case anymore in the new DistCp. There is a common misconception that Hadoop protects data loss; therefore, we don't need to back up the data in the Hadoop cluster. Since Hadoop replicates data three times by default, this sounds like a safe statement; however, it is not 100 percent safe. While Hadoop protects from hardware failure on the data nodes—meaning that if one entire node goes down, you will not lose any data—there are other ways in which data loss may occur. Data loss may occur due to various reasons, such as Hadoop being highly susceptible to human errors, corrupted data writes, accidental deletions, rack failures, and many such instances. Any of these reasons are likely to cause data loss. Consider an example where a corrupt application can destroy all data replications. During the process, it will attempt to compute each replication and on not finding a possible match, it will delete the replica. User deletions are another example of how data can be lost, as Hadoop's trash mechanism is not enabled by default. Also, one of the most complicated and expensive-to-implement aspects of protecting data in Hadoop is the disaster recovery plan. There are many different approaches to this, and determining which approach is right requires a balance between cost, complexity, and recovery time. A real-life scenario can be Facebook. The data that Facebook holds increases exponentially from 15 TB to 30 PB, that is, 3,000 times the Library of Congress. With increasing data, the problem faced was physical movement of the machines to the new data center, which required man power. Plus, it also impacted services for a period of time. Data availability in a short period of time is a requirement for any service; that's when Facebook started exploring Hadoop. To conquer the problem while dealing with such large repositories of data is yet another headache. The reason why Hadoop was invented was to keep the data bound to neighborhoods on commodity servers and reasonable local storage, and to provide maximum availability to data within the neighborhood. So, a data plan is incomplete without data backup and recovery planning. A big data execution using Hadoop states a situation wherein the focus on the potential to recover from a crisis is mandatory. The backup philosophy We need to determine whether Hadoop, the processes and applications that run on top of it (Pig, Hive, HDFS, and more), and specifically the data stored in HDFS are mission critical. If the data center where Hadoop is running disappeared, will the business stop? Some of the key points that have to be taken into consideration have been explained in the sections that follow; by combining these points, we will arrive at the core of the backup philosophy. Changes since the last backup Considering the backup philosophy that we need to construct, the first thing we are going to look at are changes. We have a sound application running and then we add some changes. 
In case our system crashes and we need to go back to our last safe state, our backup strategy should have a clause of the changes that have been made. These changes can be either database changes or configuration changes. Our clause should include the following points in order to construct a sound backup strategy: Changes we made since our last backup The count of files changed Ensure that our changes are tracked The possibility of bugs in user applications since the last change implemented, which may cause hindrance and it may be necessary to go back to the last safe state After applying new changes to the last backup, if the application doesn't work as expected, then high priority should be given to the activity of taking the application back to its last safe state or backup. This ensures that the user is not interrupted while using the application or product. The rate of new data arrival The next thing we are going to look at is how many changes we are dealing with. Is our application being updated so much that we are not able to decide what the last stable version was? Data is produced at a surpassing rate. Consider Facebook, which alone produces 250 TB of data a day. Data production occurs at an exponential rate. Soon, terms such as zettabytes will come upon a common place. Our clause should include the following points in order to construct a sound backup: The rate at which new data is arriving The need for backing up each and every change The time factor involved in backup between two changes Policies to have a reserve backup storage The size of the cluster The size of a cluster is yet another important factor, wherein we will have to select cluster size such that it will allow us to optimize the environment for our purpose with exceptional results. Recalling the Yahoo! example, Yahoo! has 10 clusters all over the world, covering 20,000 nodes. Also, Yahoo! has the maximum number of nodes in its large clusters. Our clause should include the following points in order to construct a sound backup: Selecting the right resource, which will allow us to optimize our environment. The selection of the right resources will vary as per need. Say, for instance, users with I/O-intensive workloads will go for more spindles per core. A Hadoop cluster contains four types of roles, that is, NameNode, JobTracker, TaskTracker, and DataNode. Handling the complexities of optimizing a distributed data center. Priority of the datasets The next thing we are going to look at are the new datasets, which are arriving. With the increase in the rate of new data arrivals, we always face a dilemma of what to backup. Are we tracking all the changes in the backup? Now, if are we backing up all the changes, will our performance be compromised? Our clause should include the following points in order to construct a sound backup: Making the right backup of the dataset Taking backups at a rate that will not compromise performance Selecting the datasets or parts of datasets The next thing we are going to look at is what exactly is backed up. When we deal with large chunks of data, there's always a thought in our mind: Did we miss anything while selecting the datasets or parts of datasets that have not been backed up yet? 
Our clause should include the following points in order to construct a sound backup: Backup of necessary configuration files Backup of files and application changes The timeliness of data backups With such a huge amount of data collected daily (Facebook), the time interval between backups is yet another important factor. Do we back up our data daily? In two days? In three days? Should we backup small chunks of data daily, or should we back up larger chunks at a later period? Our clause should include the following points in order to construct a sound backup: Dealing with any impacts if the time interval between two backups is large Monitoring a timely backup strategy and going through it The frequency of data backups depends on various aspects. Firstly, it depends on the application and usage. If it is I/O intensive, we may need more backups, as each dataset is not worth losing. If it is not so I/O intensive, we may keep the frequency low. We can determine the timeliness of data backups from the following points: The amount of data that we need to backup The rate at which new updates are coming Determining the window of possible data loss and making it as low as possible Critical datasets that need to be backed up Configuration and permission files that need to be backed up Reducing the window of possible data loss The next thing we are going to look at is how to minimize the window of possible data loss. If our backup frequency is great then what are the chances of data loss? What's our chance of recovering the latest files? Our clause should include the following points in order to construct a sound backup: The potential to recover latest files in the case of a disaster Having a low data-loss probability Backup consistency The next thing we are going to look at is backup consistency. The probability of invalid backups should be less or even better zero. This is because if invalid backups are not tracked, then copies of invalid backups will be made further, which will again disrupt our backup process. Our clause should include the following points in order to construct a sound backup: Avoid copying data when it's being changed Possibly, construct a shell script, which takes timely backups Ensure that the shell script is bug-free Avoiding invalid backups We are going to continue the discussion on invalid backups. As you saw, HDFS makes three copies of our backup for the recovery process. What if the original backup was flawed with errors or bugs? The three copies will be corrupted copies; now, when we recover these flawed copies, the result indeed will be a catastrophe. Our clause should include the following points in order to construct a sound backup: Avoid having a long backup frequency Have the right backup process, and probably having an automated shell script Track unnecessary backups If our backup clause covers all the preceding mentioned points, we surely are on the way to making a good backup strategy. A good backup policy basically covers all these points; so, if a disaster occurs, it always aims to go to the last stable state. That's all about backups. Moving on, let's say a disaster occurs and we need to go to the last stable state. Let's have a look at the recovery philosophy and all the points that make a sound recovery strategy. The recovery philosophy After a deadly storm, we always try to recover from the after-effects of the storm. Similarly, after a disaster, we try to recover from the effects of the disaster. 
In just one moment, storage capacity which was a boon turns into a curse and just another expensive, useless thing. Starting off with the best question, what will be the best recovery philosophy? Well, it's obvious that the best philosophy will be one wherein we may never have to perform recovery at all. Also, there may be scenarios where we may need to do a manual recovery. Let's look at the possible levels of recovery before moving on to recovery in Hadoop: Recovery to the flawless state Recovery to the last supervised state Recovery to a possible past state Recovery to a sound state Recovery to a stable state So, obviously we want our recovery state to be flawless. But if it's not achieved, we are willing to compromise a little and allow the recovery to go to a possible past state we are aware of. Now, if that's not possible, again we are ready to compromise a little and allow it to go to the last possible sound state. That's how we deal with recovery: first aim for the best, and if not, then compromise a little. Just like the saying goes, "The bigger the storm, more is the work we have to do to recover," here also we can say "The bigger the disaster, more intense is the recovery plan we have to take." So, the recovery philosophy that we construct should cover the following points: An automation system setup that detects a crash and restores the system to the last working state, where the application runs as per expected behavior. The ability to track modified files and copy them. Track the sequences on files, just like an auditor trails his audits. Merge the files that are copied separately. Multiple version copies to maintain a version control. Should be able to treat the updates without impacting the application's security and protection. Delete the original copy only after carefully inspecting the changed copy. Treat new updates but first make sure they are fully functional and will not hinder anything else. If they hinder, then there should be a clause to go to the last safe state. Coming back to recovery in Hadoop, the first question we may think of is what happens when the NameNode goes down? When the NameNode goes down, so does the metadata file (the file that stores data about file owners and file permissions, where the file is stored on data nodes and more), and there will be no one present to route our read/write file request to the data node. Our goal will be to recover the metadata file. HDFS provides an efficient way to handle name node failures. There are basically two places where we can find metadata. First, fsimage and second, the edit logs. Our clause should include the following points: Maintain three copies of the name node. When we try to recover, we get four options, namely, continue, stop, quit, and always. Choose wisely. Give preference to save the safe part of the backups. If there is an ABORT! error, save the safe state. Hadoop provides four recovery modes based on the four options it provides (continue, stop, quit, and always): Continue: This allows you to continue over the bad parts. This option will let you cross over a few stray blocks and continue over to try to produce a full recovery mode. This can be the Prompt when found error mode. Stop: This allows you to stop the recovery process and make an image file of the copy. Now, the part that we stopped won't be recovered, because we are not allowing it to. In this case, we can say that we are having the safe-recovery mode. Quit: This exits the recovery process without making a backup at all. 
In this, we can say that we are having the no-recovery mode. Always: This is one step further than continue. Always selects continue by default and thus avoids stray blogs found further. This can be the prompt only once mode. We will look at these in further discussions. Now, you may think that the backup and recovery philosophy is cool, but wasn't Hadoop designed to handle these failures? Well, of course, it was invented for this purpose but there's always the possibility of a mashup at some level. Are we overconfident and not ready to take precaution, which can protect us, and are we just entrusting our data blindly with Hadoop? No, certainly we aren't. We are going to take every possible preventive step from our side. In the next topic, we look at the very same topic as to why we need preventive measures to back up Hadoop. Knowing the necessity of backing up Hadoop Change is the fundamental law of nature. There may come a time when Hadoop may be upgraded on the present cluster, as we see many system upgrades everywhere. As no upgrade is bug free, there is a probability that existing applications may not work the way they used to. There may be scenarios where we don't want to lose any data, let alone start HDFS from scratch. This is a scenario where backup is useful, so a user can go back to a point in time. Looking at the HDFS replication process, the NameNode handles the client request to write a file on a DataNode. The DataNode then replicates the block and writes the block to another DataNode. This DataNode repeats the same process. Thus, we have three copies of the same block. Now, how these DataNodes are selected for placing copies of blocks is another issue, which we are going to cover later in Rack awareness. You will see how to place these copies efficiently so as to handle situations such as hardware failure. But the bottom line is when our DataNode is down there's no need to panic; we still have a copy on a different DataNode. Now, this approach gives us various advantages such as: Security: This ensures that blocks are stored on two different DataNodes High write capacity: This writes only on a single DataNode; the replication factor is handled by the DataNode Read options: This denotes better options from where to read; the NameNode maintains records of all the locations of the copies and the distance from the NameNode Block circulation: The client writes only a single block; others are handled through the replication pipeline During the write operation on a DataNode, it receives data from the client as well as passes data to the next DataNode simultaneously; thus, our performance factor is not compromised. Data never passes through the NameNode. The NameNode takes the client's request to write data on a DataNode and processes the request by deciding on the division of files into blocks and the replication factor. The following figure shows the replication pipeline, wherein a block of the file is written and three different copies are made at different DataNode locations: After hearing such a foolproof plan and seeing so many advantages, we again arrive at the same question: is there a need for backup in Hadoop? Of course there is. There often exists a common mistaken belief that Hadoop shelters you against data loss, which gives you the freedom to not take backups in your Hadoop cluster. Hadoop, by convention, has a facility to replicate your data three times by default. 
Although reassuring, the statement is not safe and does not guarantee foolproof protection against data loss. Hadoop gives you the power to protect your data over hardware failures; the scenario wherein one disk, cluster, node, or region may go down, data will still be preserved for you. However, there are many scenarios where data loss may occur. Consider an example where a classic human-prone error can be the storage locations that the user provides during operations in Hive. If the user provides a location wherein data already exists and they perform a query on the same table, the entire existing data will be deleted, be it of size 1 GB or 1 TB. In the following figure, the client gives a read operation but we have a faulty program. Going through the process, the NameNode is going to see its metadata file for the location of the DataNode containing the block. But when it reads from the DataNode, it's not going to match the requirements, so the NameNode will classify that block as an under replicated block and move on to the next copy of the block. Oops, again we will have the same situation. This way, all the safe copies of the block will be transferred to under replicated blocks, thereby HDFS fails and we need some other backup strategy: When copies do not match the way NameNode explains, it discards the copy and replaces it with a fresh copy that it has. HDFS replicas are not your one-stop solution for protection against data loss. The needs for recovery Now, we need to decide up to what level we want to recover. Like you saw earlier, we have four modes available, which recover either to a safe copy, the last possible state, or no copy at all. Based on your needs decided in the disaster recovery plan we defined earlier, you need to take appropriate steps based on that. We need to look at the following factors: The performance impact (is it compromised?) How large is the data footprint that my recovery method leaves? What is the application downtime? Is there just one backup or are there incremental backups? Is it easy to implement? What is the average recovery time that the method provides? Based on the preceding aspects, we will decide which modes of recovery we need to implement. The following methods are available in Hadoop: Snapshots: Snapshots simply capture a moment in time and allow you to go back to the possible recovery state. Replication: This involves copying data from one cluster and moving it to another cluster, out of the vicinity of the first cluster, so that if one cluster is faulty, it doesn't have an impact on the other. Manual recovery: Probably, the most brutal one is moving data manually from one cluster to another. Clearly, its downsides are large footprints and large application downtime. API: There's always a custom development using the public API available. We will move on to the recovery areas in Hadoop. Understanding recovery areas Recovering data after some sort of disaster needs a well-defined business disaster recovery plan. So, the first step is to decide our business requirements, which will define the need for data availability, precision in data, and requirements for the uptime and downtime of the application. Any disaster recovery policy should basically cover areas as per requirements in the disaster recovery principal. Recovery areas define those portions without which an application won't be able to come back to its normal state. 
If you are armed with the proper information, you will be able to prioritize which areas need to be recovered. Recovery areas cover the following core components: Datasets NameNodes Applications Database sets in HBase Let's go back to the Facebook example. Facebook uses a customized version of MySQL for its home page and other interests. But when it comes to Facebook Messenger, Facebook uses the NoSQL database provided by Hadoop. Looking at it from that point of view, Facebook has both of these systems in its recovery areas and will need different steps to recover each of them. Summary In this article, we went through the backup and recovery philosophy and the points a good backup philosophy should cover. We went through what a recovery philosophy constitutes. We saw the modes available for recovery in Hadoop. Then, we looked at why backup is important even though HDFS provides the replication process. Lastly, we looked at the recovery needs and areas. Quite a journey, wasn't it? Well, hold on tight. These are just your first steps into the Hadoop User Group (HUG). Resources for Article: Further resources on this subject: Cassandra Architecture [article] Oracle GoldenGate 12c — An Overview [article] Backup and Restore Improvements [article]


An Introduction to WEP

Packt
10 Aug 2015
4 min read
In this article, Marco Alamanni, author of the book Kali Linux Wireless Penetration Testing Essentials, explains that the WEP protocol was introduced with the original 802.11 standard as a means to provide authentication and encryption to wireless LAN implementations. It is based on the RC4 (Rivest Cipher 4) stream cipher with a preshared secret key (PSK) of 40 or 104 bits, depending on the implementation. A 24-bit pseudo-random Initialization Vector (IV) is concatenated with the preshared key to form the per-packet key from which RC4 generates the keystream used for the actual encryption and decryption processes. Thus, the resulting per-packet key can be 64 or 128 bits long. (For more resources related to this topic, see here.) In the encryption phase, the keystream is XORed with the plaintext data to obtain the encrypted data, while in the decryption phase the encrypted data is XORed with the keystream to obtain the plaintext data. The encryption process is shown in the following diagram: Attacks against WEP First of all, we must say that WEP is an insecure protocol and has been deprecated by the Wi-Fi Alliance. It suffers from various vulnerabilities related to the generation of the keystreams, the use of IVs, and the length of the keys. The IV is used to add randomness to the keystream, trying to avoid the reuse of the same keystream to encrypt different packets. This purpose has not been accomplished in the design of WEP, because the IV is only 24 bits long (with 2^24 = 16,777,216 possible values) and it is transmitted in clear text within each frame. Thus, after a certain period of time (depending on the network traffic), the same IV, and consequently the same keystream, will be reused, allowing the attacker to collect the corresponding cipher texts and perform statistical attacks to recover the plain texts and the key. The first well-known attack against WEP was the Fluhrer, Mantin, and Shamir (FMS) attack, back in 2001. The FMS attack relies on the way WEP generates the keystreams and on the fact that it also uses weak IVs to generate weak keystreams, making it possible for an attacker to collect a sufficient number of packets encrypted with these keystreams, analyze them, and recover the key. The number of IVs to be collected to complete the FMS attack is about 250,000 for 40-bit keys and 1,500,000 for 104-bit keys. The FMS attack has been enhanced by Korek, improving its performance. Andreas Klein found more correlations between the RC4 keystream and the key than the ones discovered by Fluhrer, Mantin, and Shamir, which can be used to crack the WEP key. In 2007, Pyshkin, Tews, and Weinmann (PTW) extended Andreas Klein's research and improved the FMS attack, significantly reducing the number of IVs needed to successfully recover the WEP key. Indeed, the PTW attack does not rely on weak IVs like the FMS attack does, and is very fast and effective. It is able to recover a 104-bit WEP key with a success probability of 50 percent using less than 40,000 frames, and with a probability of 95 percent using 85,000 frames. The PTW attack is the default method used by Aircrack-ng to crack WEP keys. Both the FMS and PTW attacks need to collect quite a large number of frames to succeed and can be conducted passively, sniffing the wireless traffic on the same channel as the target AP and capturing frames. The problem is that, under normal conditions, we would have to spend quite a long time passively collecting all the necessary packets for the attacks, especially with the FMS attack.
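Before looking at how to speed up frame collection, it helps to see concretely why a repeated IV is so damaging. The following small Java sketch (not part of the original article; the key and IV bytes are invented) imitates WEP's per-packet key construction with the JDK's built-in ARCFOUR (RC4) cipher and shows that two packets encrypted under the same IV leak the XOR of their plaintexts:

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class WepKeystreamDemo {

    // RC4-encrypt data with a WEP-style per-packet key: IV || PSK
    static byte[] rc4(byte[] iv, byte[] psk, byte[] data) throws Exception {
        byte[] perPacketKey = new byte[iv.length + psk.length];
        System.arraycopy(iv, 0, perPacketKey, 0, iv.length);
        System.arraycopy(psk, 0, perPacketKey, iv.length, psk.length);
        Cipher rc4 = Cipher.getInstance("ARCFOUR");   // plain RC4 stream cipher, no IV handling of its own
        rc4.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(perPacketKey, "ARCFOUR"));
        return rc4.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        byte[] psk = {0x01, 0x02, 0x03, 0x04, 0x05};   // 40-bit preshared key (made up)
        byte[] iv  = {0x0A, 0x0B, 0x0C};               // 24-bit IV, transmitted in clear text

        byte[] p1 = "attack at dawn!!".getBytes(StandardCharsets.US_ASCII);
        byte[] p2 = "attack at dusk!!".getBytes(StandardCharsets.US_ASCII);

        byte[] c1 = rc4(iv, psk, p1);                  // two packets encrypted with the same IV
        byte[] c2 = rc4(iv, psk, p2);                  // reuse the very same keystream

        // With an identical keystream, c1 XOR c2 equals p1 XOR p2: the key drops out entirely
        boolean leak = true;
        for (int i = 0; i < c1.length; i++) {
            leak &= ((c1[i] ^ c2[i]) == (p1[i] ^ p2[i]));
        }
        System.out.println("c1 XOR c2 == p1 XOR p2 ? " + leak);   // prints true
    }
}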
To accelerate the process, the idea is to re-inject frames into the network to generate traffic in response, so that we can collect the necessary IVs more quickly. A type of frame that is suitable for this purpose is the ARP request, because the AP rebroadcasts it, each time encrypting it with a new IV. As we are not associated with the AP, if we send frames to it directly, they are discarded and a de-authentication frame is sent. Instead, we can capture ARP requests from associated clients and retransmit them to the AP. This technique is called the ARP Request Replay attack and is also adopted by Aircrack-ng for the implementation of the PTW attack. Summary In this article, we covered the WEP protocol and the attacks that have been developed to crack its keys. Resources for Article: Further resources on this subject: Kali Linux – Wireless Attacks [article] What is Kali Linux [article] Penetration Testing [article]


Setting Up Synchronous Replication

Packt
10 Aug 2015
17 min read
In this article by the author, Hans-Jürgen Schönig, of the book, PostgreSQL Replication, Second Edition, we learn how to set up synchronous replication. In asynchronous replication, data is submitted and received by the slave (or slaves) after the transaction has been committed on the master. During the time between the master's commit and the point when the slave actually has fully received the data, it can still be lost. Here, you will learn about the following topics: Making sure that no single transaction can be lost Configuring PostgreSQL for synchronous replication Understanding and using application_name The performance impact of synchronous replication Optimizing replication for speed Synchronous replication can be the cornerstone of your replication setup, providing a system that ensures zero data loss. (For more resources related to this topic, see here.) Synchronous replication setup Synchronous replication has been made to protect your data at all costs. The core idea of synchronous replication is that a transaction must be on at least two servers before the master returns success to the client. Making sure that data is on at least two nodes is a key requirement to ensure no data loss in the event of a crash. Setting up synchronous replication works just like setting up asynchronous replication. Just a handful of parameters discussed here have to be changed to enjoy the blessings of synchronous replication. However, if you are about to create a setup based on synchronous replication, we recommend getting started with an asynchronous setup and gradually extending your configuration and turning it into synchronous replication. This will allow you to debug things more easily and avoid problems down the road. Understanding the downside to synchronous replication The most important thing you have to know about synchronous replication is that it is simply expensive. Synchronous replication and its downsides are two of the core reasons for which we have decided to include all this background information in this book. It is essential to understand the physical limitations of synchronous replication, otherwise you could end up in deep trouble. When setting up synchronous replication, try to keep the following things in mind: Minimize the latency Make sure you have redundant connections Synchronous replication is more expensive than asynchronous replication Always cross-check twice whether there is a real need for synchronous replication In many cases, it is perfectly fine to lose a couple of rows in the event of a crash. Synchronous replication can safely be skipped in this case. However, if there is zero tolerance, synchronous replication is a tool that should be used. Understanding the application_name parameter In order to understand a synchronous setup, a config variable called application_name is essential, and it plays an important role in a synchronous setup. In a typical application, people use the application_name parameter for debugging purposes, as it allows users to assign a name to their database connection. It can help track bugs, identify what an application is doing, and so on: test=# SHOW application_name; application_name ------------------ psql (1 row)   test=# SET application_name TO 'whatever'; SET test=# SHOW application_name; application_name ------------------ whatever (1 row) As you can see, it is possible to set the application_name parameter freely. The setting is valid for the session we are in, and will be gone as soon as we disconnect. 
The question now is: What does application_name have to do with synchronous replication? Well, the story goes like this: if this application_name value happens to be part of synchronous_standby_names, the slave will be a synchronous one. In addition to that, to be a synchronous standby, it has to be connected and streaming data in real time (that is, not fetching old WAL records). Once a standby becomes synced, it remains in that position until disconnection. In the case of cascaded replication (which means that a slave is again connected to a slave), the cascaded slave is not treated synchronously anymore. Only the first server is considered to be synchronous. With all of this information in mind, we can move forward and configure our first synchronous replication. Making synchronous replication work To show you how synchronous replication works, this article will include a full, working example outlining all the relevant configuration parameters. A couple of changes have to be made to the master. The following settings will be needed in postgresql.conf on the master: wal_level = hot_standby max_wal_senders = 5   # or any number synchronous_standby_names = 'book_sample' hot_standby = on # on the slave to make it readable Then we have to adapt pg_hba.conf. After that, the server can be restarted and the master is ready for action. We also recommend setting wal_keep_segments on the master database to keep more transaction logs; this makes the entire setup way more robust. It is also possible to utilize replication slots. In the next step, we can perform a base backup just as we have done before. We have to call pg_basebackup on the slave. Ideally, we already include the transaction log when doing the base backup. The --xlog-method=stream parameter allows us to fire things up quickly and without any great risk. The --xlog-method=stream and wal_keep_segments parameters are a good combo, and in our opinion, should be used in most cases to ensure that a setup works flawlessly and safely. We have already recommended setting hot_standby on the master. The config file will be replicated anyway, so you save yourself one trip to postgresql.conf to change this setting. Of course, this is not fine art but an easy and pragmatic approach. Once the base backup has been performed, we can move ahead and write a simple recovery.conf file suitable for synchronous replication, as follows: iMac:slavehs$ cat recovery.conf primary_conninfo = 'host=localhost application_name=book_sample port=5432' standby_mode = on The config file looks just like before. The only difference is that we have added application_name to the scenery. Note that the application_name parameter must be identical to the synchronous_standby_names setting on the master. Once we have finished writing recovery.conf, we can fire up the slave. In our example, the slave is on the same server as the master. In this case, you have to ensure that those two instances will use different TCP ports, otherwise the instance that starts second will not be able to fire up. The port can easily be changed in postgresql.conf. After these steps, the database instance can be started. The slave will check out its connection information and connect to the master. Once it has replayed all the relevant transaction logs, it will be in synchronous state. The master and the slave will hold exactly the same data from then on.
Checking the replication Now that we have started the database instance, we can connect to the system and see whether things are working properly. To check for replication, we can connect to the master and take a look at pg_stat_replication. For this check, we can connect to any database inside our (master) instance, as follows: postgres=# \x Expanded display is on. postgres=# SELECT * FROM pg_stat_replication; -[ RECORD 1 ]----+------------------------------ pid | 62871 usesysid | 10 usename | hs application_name | book_sample client_addr | ::1 client_hostname | client_port | 59235 backend_start | 2013-03-29 14:53:52.352741+01 state | streaming sent_location | 0/30001E8 write_location | 0/30001E8 flush_location | 0/30001E8 replay_location | 0/30001E8 sync_priority | 1 sync_state | sync This system view will show exactly one line per slave attached to your master system. The \x command will make the output more readable for you. If you don't use \x to transpose the output, the lines will be so long that it will be pretty hard for you to comprehend the content of this table. In expanded display mode, each column will be in one line instead. You can see that the application_name parameter has been taken from the connect string passed to the master by the slave (which is book_sample in our example). As the application_name parameter matches the master's synchronous_standby_names setting, we have convinced the system to replicate synchronously. No transaction can be lost anymore because every transaction will end up on two servers instantly. The sync_state setting will tell you precisely how data is moving from the master to the slave. You can also use a list of application names, or simply a * sign in synchronous_standby_names, to indicate that the first slave has to be synchronous. Understanding performance issues At various points in this book, we have already pointed out that synchronous replication is an expensive thing to do. Remember that we have to wait for a remote server and not just the local system. The network between those two nodes is definitely not something that is going to speed things up. Writing to more than one node is always more expensive than writing to only one node. Therefore, we definitely have to keep an eye on speed, otherwise we might face some pretty nasty surprises. Consider what you have learned about the CAP theory earlier in this book. Synchronous replication is exactly where it should be, with the serious impact that the physical limitations will have on performance. The main question you really have to ask yourself is: do I really want to replicate all transactions synchronously? In many cases, you don't. To prove our point, let's imagine a typical scenario: a bank wants to store accounting-related data as well as some logging data. We definitely don't want to lose a couple of million dollars just because a database node goes down. This kind of data might be worth the effort of replicating synchronously. The logging data is quite different, however. It might be far too expensive to cope with the overhead of synchronous replication. So, we want to replicate this data in an asynchronous way to ensure maximum throughput. How can we configure a system to handle important as well as not-so-important transactions nicely? The answer lies in a variable you have already seen earlier in the book—the synchronous_commit variable.
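To see how this plays out from the application's side, here is a hedged JDBC sketch (not from the book; the connection string, credentials, and table names are hypothetical, and it assumes the PostgreSQL JDBC driver on the classpath). The accounting insert keeps the default, fully synchronous behavior, while the logging insert is downgraded for just one transaction with SET LOCAL synchronous_commit; the individual settings are explained in the sections that follow:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DurabilityPerTransaction {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details pointing at the synchronous master set up earlier
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/test", "hs", "secret")) {
            con.setAutoCommit(false);

            // Accounting data: leave synchronous_commit at its default (on),
            // so COMMIT waits until the synchronous standby has the record
            try (Statement st = con.createStatement()) {
                st.executeUpdate("INSERT INTO t_account (amount) VALUES (1000000)");
            }
            con.commit();

            // Logging data: downgrade only this transaction; SET LOCAL lasts
            // until COMMIT, so the next transaction is fully synchronous again
            try (Statement st = con.createStatement()) {
                st.execute("SET LOCAL synchronous_commit TO local");
                st.executeUpdate("INSERT INTO t_log (message) VALUES ('page view')");
            }
            con.commit();
        }
    }
}

Because SET LOCAL ends with the transaction, a forgotten reset cannot silently weaken the durability of later, more important transactions.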
Setting synchronous_commit to on In the default PostgreSQL configuration, synchronous_commit has been set to on. In this case, commits will wait until a reply from the current synchronous standby indicates that it has received the commit record of the transaction and has flushed it to the disk. In other words, both servers must report that the data has been written safely. Unless both servers crash at the same time, your data will survive potential problems (crashing of both servers should be pretty unlikely). Setting synchronous_commit to remote_write Flushing to both disks can be highly expensive. In many cases, it is enough to know that the remote server has accepted the XLOG and passed it on to the operating system without flushing things to the disk on the slave. As we can be pretty certain that we don't lose two servers at the very same time, this is a reasonable compromise between performance and consistency with respect to data protection. Setting synchronous_commit to off The idea is to delay WAL writing to reduce disk flushes. This can be used if performance is more important than durability. In the case of replication, it means that we are not replicating in a fully synchronous way. Keep in mind that this can have a serious impact on your application. Imagine a transaction committing on the master and you wanting to query that data instantly on one of the slaves. There would still be a tiny window during which you can actually get outdated data. Setting synchronous_commit to local The local value will flush locally but not wait for the replica to respond. In other words, it will turn your transaction into an asynchronous one. Setting synchronous_commit to local can also cause a small time delay window, during which the slave can actually return slightly outdated data. This phenomenon has to be kept in mind when you decide to offload reads to the slave. In short, if you want to replicate synchronously, you have to ensure that synchronous_commit is set to either on or remote_write. Changing durability settings on the fly Changing the way data is replicated on the fly is easy and highly important to many applications, as it allows the user to control durability per transaction. Not all data has been created equal, and therefore, more important data should be written in a safer way than data that is not as important (such as log files). We have already set up a full synchronous replication infrastructure by adjusting synchronous_standby_names (master) along with the application_name (slave) parameter. The good thing about PostgreSQL is that you can change your durability requirements on the fly: test=# BEGIN; BEGIN test=# CREATE TABLE t_test (id int4); CREATE TABLE test=# SET synchronous_commit TO local; SET test=# \x Expanded display is on. test=# SELECT * FROM pg_stat_replication; -[ RECORD 1 ]----+------------------------------ pid | 62871 usesysid | 10 usename | hs application_name | book_sample client_addr | ::1 client_hostname | client_port | 59235 backend_start | 2013-03-29 14:53:52.352741+01 state | streaming sent_location | 0/3026258 write_location | 0/3026258 flush_location | 0/3026258 replay_location | 0/3026258 sync_priority | 1 sync_state | sync test=# COMMIT; COMMIT In this example, we changed the durability requirements on the fly. This will make sure that this very specific transaction will not wait for the slave to flush to the disk. Note, as you can see, sync_state has not changed.
Don't be fooled by what you see here; you can completely rely on the behavior outlined in this section. PostgreSQL is perfectly able to handle each transaction separately. This is a unique feature of this wonderful open source database; it puts you in control and lets you decide which kind of durability requirements you want. Understanding the practical implications and performance We have already talked about practical implications as well as performance implications. But what good is a theoretical example? Let's do a simple benchmark and see how replication behaves. We are performing this kind of testing to show you that various levels of durability are not just a minor topic; they are the key to performance. Let's assume a simple test: in the following scenario, we have connected two equally powerful machines (3 GHz, 8 GB RAM) over a 1 Gbit network. The two machines are next to each other. To demonstrate the impact of synchronous replication, we have left shared_buffers and all other memory parameters as default, and only changed fsync to off to make sure that the effect of disk wait is reduced to practically zero. The test is simple: we use a one-column table with only one integer field and 10,000 single transactions consisting of just one INSERT statement: INSERT INTO t_test VALUES (1); We can try this with full, synchronous replication (synchronous_commit = on): real 0m6.043s user 0m0.131s sys 0m0.169s As you can see, the test has taken around 6 seconds to complete. This test can be repeated with synchronous_commit = local now (which effectively means asynchronous replication): real 0m0.909s user 0m0.101s sys 0m0.142s In this simple test, you can see that the speed has gone up by as much as six times. Of course, this is a brute-force example, which does not fully reflect reality (this was not the goal anyway). What is important to understand, however, is that synchronous versus asynchronous replication is not a matter of a couple of percentage points or so. This should stress our point even more: replicate synchronously only if it is really needed, and if you really have to use synchronous replication, make sure that you limit the number of synchronous transactions to an absolute minimum. Also, please make sure that your network is up to the job. Replicating data synchronously over network connections with high latency will kill your system performance like nothing else. Keep in mind that throwing expensive hardware at the problem will not solve the problem. Doubling the clock speed of your servers will do practically nothing for you because the real limitation will always come from network latency. The performance penalty with just one connection is definitely a lot larger than that with many connections. Remember that things can be done in parallel, and network latency does not make us more I/O or CPU bound, so we can reduce the impact of slow transactions by firing up more concurrent work. When synchronous replication is used, how can you still make sure that performance does not suffer too much? Basically, there are a couple of important suggestions that have proven to be helpful: Use longer transactions: Remember that the system must ensure on commit that the data is available on two servers. We don't care what happens in the middle of a transaction, because anybody outside our transaction cannot see the data anyway. A longer transaction will dramatically reduce network communication.
Run stuff concurrently: If you have more than one transaction going on at the same time, it will be beneficial to performance. The reason for this is that the remote server will return the position inside the XLOG that is considered to be processed safely (flushed or accepted). This method ensures that many transactions can be confirmed at the same time. Redundancy and stopping replication When talking about synchronous replication, there is one phenomenon that must not be left out. Imagine we have a two-node cluster replicating synchronously. What happens if the slave dies? The answer is that the master cannot distinguish between a slow and a dead slave easily, so it will start waiting for the slave to come back. At first glance, this looks like nonsense, but if you think about it more deeply, you will figure out that synchronous replication is actually the only correct thing to do. If somebody decides to go for synchronous replication, the data in the system must be worth something, so it must not be at risk. It is better to refuse data and cry out to the end user than to risk data and silently ignore the requirements of high durability. If you decide to use synchronous replication, you must consider using at least three nodes in your cluster. Otherwise, it will be very risky, and you cannot afford to lose a single node without facing significant downtime or risking data loss. Summary Here, we outlined the basic concept of synchronous replication, and showed how data can be replicated synchronously. We also showed how durability requirements can be changed on the fly by modifying PostgreSQL runtime parameters. PostgreSQL gives users the choice of how a transaction should be replicated, and which level of durability is necessary for a certain transaction. Resources for Article: Further resources on this subject: Introducing PostgreSQL 9 [article] PostgreSQL – New Features [article] Installing PostgreSQL [article]

Sending and Syncing Data

Packt
10 Aug 2015
4 min read
This article, by Steven F. Daniel, author of the book, Android Wearable Programming, will provide you with the background and understanding of how you can effectively build applications that communicate between the Android handheld device and the Android wearable. Android Wear comes with a number of APIs that will help to make communicating between the handheld and the wearable a breeze. We will be learning the differences between using MessageAPI, which is sometimes referred to as a "fire and forget" type of message, and DataLayerAPI that supports syncing of data between a handheld and a wearable, and NodeAPI that handles events related to each of the local and connected device nodes. (For more resources related to this topic, see here.) Creating a wearable send and receive application In this section, we will take a look at how to create an Android wearable application that will send an image and a message, and display this on our wearable device. In the next sections, we will take a look at the steps required to send data to the Android wearable using DataAPI, NodeAPI, and MessageAPIs. Firstly, create a new project in Android Studio by following these simple steps: Launch Android Studio, and then click on the File | New Project menu option. Next, enter SendReceiveData for the Application name field. Then, provide the name for the Company Domain field. Now, choose Project location and select where you would like to save your application code: Click on the Next button to proceed to the next step. Next, we will need to specify the form factors for our phone/tablet and Android Wear devices using which our application will run. On this screen, we will need to choose the minimum SDK version for our phone/tablet and Android Wear. Click on the Phone and Tablet option and choose API 19: Android 4.4 (KitKat) for Minimum SDK. Click on the Wear option and choose API 21: Android 5.0 (Lollipop) for Minimum SDK: Click on the Next button to proceed to the next step. In our next step, we will need to add Blank Activity to our application project for the mobile section of our app. From the Add an activity to Mobile screen, choose the Add Blank Activity option from the list of activities shown and click on the Next button to proceed to the next step: Next, we need to customize the properties for Blank Activity so that it can be used by our application. Here we will need to specify the name of our activity, layout information, title, and menu resource file. From the Customize the Activity screen, enter MobileActivity for Activity Name shown and click on the Next button to proceed to the next step in the wizard: In the next step, we will need to add Blank Activity to our application project for the Android wearable section of our app. From the Add an activity to Wear screen, choose the Blank Wear Activity option from the list of activities shown and click on the Next button to proceed to the next step: Next, we need to customize the properties for Blank Wear Activity so that our Android wearable can use it. Here we will need to specify the name of our activity and the layout information. From the Customize the Activity screen, enter WearActivity for Activity Name shown and click on the Next button to proceed to the next step in the wizard:   Finally, click on the Finish button and the wizard will generate your project and after a few moments, the Android Studio window will appear with your project displayed. 
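The wizard only generates empty mobile and wear modules; the actual communication happens through the Wearable APIs mentioned at the beginning of this article. As a rough illustration (this snippet is not part of the original article; the paths, keys, and class name are invented, and it assumes the play-services-wearable dependency plus a GoogleApiClient built with Wearable.API), the handheld side could send a fire-and-forget message and a synced data item as follows:

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.wearable.Node;
import com.google.android.gms.wearable.NodeApi;
import com.google.android.gms.wearable.PutDataMapRequest;
import com.google.android.gms.wearable.PutDataRequest;
import com.google.android.gms.wearable.Wearable;

import java.util.concurrent.TimeUnit;

public class WearMessenger {
    private final GoogleApiClient client;

    public WearMessenger(GoogleApiClient client) {
        // The client is assumed to have been created with addApi(Wearable.API)
        this.client = client;
    }

    // "Fire and forget": deliver a one-off message to every currently connected node.
    // Call this off the UI thread because of the blocking await() calls.
    public void sendGreeting() {
        client.blockingConnect(30, TimeUnit.SECONDS);
        NodeApi.GetConnectedNodesResult connected =
                Wearable.NodeApi.getConnectedNodes(client).await();
        for (Node node : connected.getNodes()) {
            Wearable.MessageApi.sendMessage(
                    client, node.getId(), "/greeting", "hello wear".getBytes()).await();
        }
    }

    // Synced data: the Data Layer stores the item and delivers it to the wearable
    // even if it is disconnected at the moment we write it.
    public void syncCounter(int value) {
        client.blockingConnect(30, TimeUnit.SECONDS);
        PutDataMapRequest request = PutDataMapRequest.create("/counter");
        request.getDataMap().putInt("count", value);
        PutDataRequest putRequest = request.asPutDataRequest();
        Wearable.DataApi.putDataItem(client, putRequest).await();
    }
}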
Summary In this article, we learned about three new APIs, DataAPI, NodeAPI, and MessageAPIs, and how we can use them and their associated methods to transmit information between the handheld mobile and the wearable. If, for whatever reason, the connected wearable node gets disconnected from the paired handheld device, the DataApi class is smart enough to try sending again automatically once the connection is reestablished. Resources for Article: Further resources on this subject: Speeding up Gradle builds for Android [article] Saying Hello to Unity and Android [article] Testing with the Android SDK [article]


Updating and building our masters

Packt
10 Aug 2015
20 min read
In this article by John Henry Krahenbuhl, the author of the book, Axure Prototyping Blueprints, we determine that with modification, we can use all of the masters from the previous community site. To support our new use cases, we need additional registration variables, a master to support user registration, and interactions for the creation of, and to comment on, posts. Next we will create global variables and add new masters, as well as enhance the design and interactions for each master. (For more resources related to this topic, see here.) Creating additional global variables Based on project requirements, we identified that nine global variables will be required. To create global variables, on the main menu click on Project and then click on Global Variables…. In the Global Variables dialog, perform the following steps: Click the green + sign and type Email. Click on the Default Value field and type songwriter@test.com. Repeat step 1 eight more times to create additional variables using the following table for the Variable Name and Default Value fields: Variable Name Default Value Password Grammy UserEmail   UserPassword   LoggedIn No TopicIndex 0 UserText   NewPostTopic   NewPostHeadline   Click on OK. With our global variables created, we are now ready to create new masters, as well as update the design and interactions for existing masters. We will start by adding masters to the Masters pane. Adding masters to the Masters pane We will add a total of two masters to the Masters pane. To create our masters, perform the following steps: In the Masters pane, click on the, Add Master icon ,type PostCommentary and press Enter. Again, in the Masters pane, click on the Add Master icon , type NewPost and press Enter. In the same Masters pane, right-click on the icon next to the Header master, mouse over Drop Behavior and click on Lock to Master Location. We are now ready to remodel the existing masters and complete the design and interactions for our new masters. We will start with the Header master. Enhancing our Header master Once completed, the Header master will look as follows: To update the Header master, we will add an ErrorMessage label, delete the Search widgets, and update the menu items. To update widgets on the Header master, perform the following steps: In the Masters pane, double-click on the icon  next to the Header master to open in the design area. In the Widgets pane, drag the Label widget  and place it at coordinates (730,0). Now, select the Text Field widget and type Your email or password is incorrect.. In the Widget Interactions and Notes pane, click in the Shape Name field and type ErrorMessage. In the Widget Properties and Style pane, with the Style tab selected, scroll to Font and perform the following steps: Change the font size to 8. Click on the down arrow next to the Text Color icon . In the drop-down menu, in the # text field, enter FF0000. In the toolbar, click on the checkbox next to Hidden. Click on the EmailTextField at coordinates (730,10). If text is displayed on the text field, right-click and click Edit Text. All text on the widget will be highlighted, click on Delete. In the Widget Properties and Style pane, with the Properties tab selected, scroll to Text Field and perform the following steps: Next to Hint Text, enter Email. Click Hint Style. In the Set Interaction Styles dialog box, click on the checkbox next to Font Color. Click on the down arrow next to the Text Color icon . In the drop-down menu, in the # text field, enter 999999. Click on OK. 
Click on the PasswordTextField at coordinates (815,10). If text is displayed on the text field, right-click and click on Edit Text. All text on the widget will be highlighted, press Delete. In the Widget Properties and Style pane, with the Properties tab selected, scroll to Text Field and perform the following steps: Click on the drop-down menu next to Type and select Password. Next to Hint Text, enter Password. Click on Hint Style. In the Set Interaction Styles dialog box, click on the checkbox next to Font Color. Click on the down arrow next to the Text Color icon . In the drop-down menu, in the # text field, enter 999999. Click on OK. Click on the SearchTextField at coordinates (730,82) and then on Delete. Click on the SearchButton at coordinates (890,80) and then on Delete. Next, we will convert all the Log In widgets into a dynamic panel named LoginDP. The LoginDP will allow us to transition between states and show different content when a user logs in. To create the LoginDP, in our header, select the following widgets: Named Widget Coordinates ErrorMessage (730,0) EmailTextField (730,10) PasswordTextField (815,10) LogInButton (894,10) NewUserLink (730,30) ForgotLink (815,30) With the preceding six widgets selected, right-click and click Convert to Dynamic Panel. In the Widget Interactions and Notes pane, click on the Dynamic Panel Name field and type LogInDP. All the Log In widgets are now on State1 of the LogInDP. We will now add widgets to State2 for the LogInDP. With the Log In widgets converted into the LogInDP, we will now add and design State2. In the Widget Manager pane, under the LogInDP, right-click on State1, and in the menu, click on Add State. Click on the State icon beside  State2 twice, to open in the design area. Perform the following steps: In the Widgets pane, drag the Label widget  and place it at coordinates (0,13) and do the these steps: Type Welcome, email@test.com. In the Widget Interactions and Notes pane, click in the Shape Name field and type WelcomeLabel. In the Widget Properties and Style pane, with the Style tab selected scroll to Font, change the font size to 9, and click on the Italic icon . In the Widgets pane, drag the Button Shape widget  and place it at coordinates (164,10). Type Log Out. In the toolbar, change w: to 56 and h: to 16. In the Widget Interactions and Notes pane, click on the Shape Name field and type LogOutButton. To complete the design of the Header master, we need to rename the menu items on the HzMenu. In the Masters pane, double-click on the Header master to open in the design area. Click on the HzMenu at coordinates (250,80). Perform the following steps: Click on the first menu item and type Random Musings. In the Widget Interactions and Notes pane, click on the Menu Item Name field and type RandomMusingsMenuItem. Click on Case 1 under the OnClick event and press the Delete key. Click on Create Link…. In the pop-up sitemap, click on Random Musings. Again, click on the first menu item and type Accolades and News. In the Widget Interactions and Notes pane, click on the Menu Item Name field and type AccoladesMenuItem. Click on Case 1 under the OnClick event and press the Delete key. Click on Create Link…. In the pop-up sitemap, click on Accolades and News. Click on the first menu item and type About. In the Widget Interactions and Notes pane, click on the Menu Item Name field and type AboutMenuItem. Click on Case 1 under the OnClick event and press the Delete key. Click on Create Link…. In the pop-up sitemap, click on About. 
We will now create a registration lightbox that will be shown when the user clicks on the NewUserLink. To display a dynamic panel in a lightbox, we will use the OnShow action with the option treat as lightbox set. We will use the Registration dynamic panel's Pin to Browser property to have the dynamic panel shown in the center and middle of the window. Learn more at http://www.axure.com/learn/dynamic-panels/basic/lightbox-tutorial. In the Masters pane, double-click on the icon  next to the Header master to open in the design area. In the Widgets pane, drag the Dynamic Panel widget  and place it at coordinates (310,200). In the toolbar, change w: to 250, h: to 250, and click on the Hidden checkbox. In the Widget Interactions and Notes pane, click on the Dynamic Panel Name field and type RegistrationLightBoxDP. In the Widget Manager pane with the Properties tab selected, click on Pin to Browser. In the Pin to Browser dialog box, click on the checkbox next to Pin to browser window and click on OK. In the Widget Manager pane, under the RegistrationLightBoxDP, click on the State icon  beside State1 twice to open in the design area. In the Widgets pane, drag the Rectangle widget  and place it at coordinates (0,0). In the Widget Interactions and Notes pane, click on the Shape Name field and type BackgroundRectangle. In the toolbar, change w: to 250 and h: to 250. Again in the Widgets pane, drag the Heading2 widget  and place it at coordinates (25,20). With the Heading2 widget selected, type Registration. In the toolbar, change w: to 141 and h: to 28. In the Widget Interactions and Notes pane, click on the Shape Name field and type RegistrationHeading. Repeat steps 8-10 to complete the design of the RegistrationLightBoxDP using the following table (* if applicable): Widget Coordinates Text* (Shown on Widget) Width* (w:) Height* (h:) Name field (In the Widget Interactions and Notes pane)   Label (25,67) Enter Email     EnterEmailLabel   Text Field (25,86)       EnterEmailField   Label (25,121) Enter Password     EnterPasswordLabel   Text Field (25,140)       EnterPasswordField   Button Shape (25,190) Submit 200 30 SubmitButton Click on the EnterEmailField text field at coordinates (25,86). In the Widget Properties and Style pane, with the Properties tab selected, scroll to Text Field and perform the following steps: Next to Hint Text, enter Email. Click on Hint Style. In the Set Interaction Styles dialog box, click on the checkbox next to Font Color. Click on the down arrow next to the Text Color icon . In the drop-down menu, in the # text field, enter 999999. Click on OK. Click on the EnterPasswordField text field at coordinates (25,140). In the Widget Properties and Style pane, with the Properties tab selected, scroll to Text Field and perform the following steps: Click on the drop-down menu next to Type and select Password. Next to Hint Text, enter Password. Click on Hint Style. In the Set Interaction Styles dialog box, click on the checkbox next to Font Color. Click on the down arrow next to the Text Color icon . In the drop-down menu, in the # text field, enter 999999. Click on OK. With the updates completed for the Header master, we are now ready to define the interactions. Refining the interactions for our Header master We will need to add additional interactions for Log In and Registration on our Header master. 
Interactions with our Header master will be triggered by the following named widgets and events: Dynamic Panel State Widget Event LoginDP State1 LoginButton OnClick LoginDP State1 NewUserLink OnClick LoginDP State1 ForgotLink OnClick LoginDP State2 LogOutButton OnClick RegistrationLightBoxDP State1 SubmitButton OnClick We will now define the interactions for each widget, starting with LoginButton. Defining interactions for the LoginButton When the LoginButton is clicked, the OnClick event will evaluate if the text entered in the EmailTextField and PasswordTextField equals the e-mail and password variable values. If the variables are valid, LoginDP will be set to State2 and text on the WelcomeLabel will be updated. If the variables values are not equal, we will show an error message. We will define these actions by creating two cases: ValidateUser and ShowErrorMessage. Validating the user's email and password To define the ValidateUser case for the OnClick interaction, open the LogInDP State1 in the design area. Click on the LogInButton at coordinates (164,10). In the Widget Interactions and Notes pane with the Interactions tab selected, click on Add Case…. A Case Editor dialog box will open. In the Case Name field, type ValidateUser. In the Case Editor dialog, perform the following steps: You will see the Condition Builder window similar to the one shown in the following screenshot after the first and second conditions are defined: Create the first condition. Click on the Add Condition button. In the Condition Builder dialog box, in the outlined condition box, perform the following steps: In the first dropdown, select text on widget. In the second dropdown, select EmailTextField. In the third dropdown, select equals. In the fourth dropdown, select value. In the fifth dropdown, select [[Email]]. Click the green + sign. Create the second condition. Click on the Add Condition button. In the Condition Builder dialog box, in the outlined condition box, perform the following steps: In the first dropdown, select text on widget. In the second dropdown, select PasswordTextField. In the third dropdown, select equals. In the fourth dropdown, select value. In the fifth dropdown, select [[Password]]. Click on OK. Once the following three actions are defined, you should see the Case Editor similar to the one shown in the following screenshot: Create the first action. To set panel state for the LogInDP dynamic panel, perform the following steps: Under Click to add actions, scroll to the Dynamic Panels drop-down menu and click on Set Panel State. Under Configure actions, click on the checkbox next to LoginDP. Next to Select the state, click on the dropdown and select State2. Create the second action. To set text for the WelcomeLabel, perform the following steps: Under Click to add actions, scroll to the Widgets drop-down menu and click on Set Text. Under Configure actions, click the checkbox next to WelcomeLabel. Under Set text to, click on the dropdown and select value. In the text field, enter Welcome, [[Email]]. Create the third action. To set value of the LoggedIn variable, perform the following steps: Under Click to add actions, scroll to the Variables drop-down menu and click on Set Variable Value. Under Configure actions, click on the checkbox next to LoggedIn. Under Set variable to, click on the first dropdown and click on value. In the text field, enter [[Email]]. Click on OK. With the ValidateUser case completed, next we will create the ShowErrorMessage case. 
Creating the ShowErrorMessage case To create the ShowErrorMessage case, in the Widget Interactions and Notes pane with the Interactions tab selected, click on Add Case…. A Case Editor dialog box will open. In the Case Name field, type ShowErrorMessage. Create the action. To show the ErrorMessage label, perform the following steps: Under Click to add actions, scroll to the Widgets dropdown, click on the Show/Hide dropdown and click on Show. Under Configure actions, under LoginDP dynamic panel, click on the checkbox next to ErrorMessage. Click on OK. Next, we will enable the interaction for the NewUserLink. Enabling interaction for the NewUserLink When the NewUserLink is clicked, the OnClick event will show the RegistrationLightBox dynamic panel as a lightbox, as shown in the following screenshot: With the LogInDP State1 still opened in the design area, click on the NewUserLink at coordinates (0,30). To enable the OnClick event in the Widget Interactions and Notes pane with the Interactions tab selected, click on Add Case…. A Case Editor dialog box will open. In the Case Name field, type ShowLightBox. Now, create the action; to show the RegistrationLightBox, perform the following steps: Under Click to add actions, scroll to the Widgets dropdown, click on the Show/Hide dropdown, and click on Show. Under Configure actions, click on the checkbox next to RegistrationLightBoxDP. Next go to More options, click on the dropdown and select treat as lightbox. Click on OK. Next, we will activate interactions for the ForgotLink. Activating interactions for the ForgotLink When the ForgotLink is clicked, the OnClick event will show the RegistrationLightBox dynamic panel as a lightbox, the RegistrationHeading text will be updated to display Forgot Password? and the EnterPassworldLabel, as well as the EnterPasswordField, will be hidden. To enable the OnClick event, in the Widget Interactions and Notes pane with the Interactions tab selected, click on Add Case…. A Case Editor dialog box will open. In the Case Name field, type ShowForgotLB. In the Case Editor dialog, perform the following steps: Create the first action; to show the RegistrationLightBox, perform the following steps: Under Click to add actions, scroll to the Widgets dropdown, click on the Show/Hide dropdown and click on Show. Under Configure actions, click on the checkbox next to RegistrationLightBoxDP. Next, go to More options, click on the dropdown and select treat as lightbox. Create the second action; to set text for the RegistrationHeading, perform the following steps: Under Click to add actions, scroll to the Widgets drop-down menu and click on Set Text. Under Configure actions, click on the checkbox next to RegistrationHeading. Under Set text to, click on the dropdown and select value. In the text field, enter Forgot Password?. Create the third action; to hide the EnterPasswordLabel and EnterPasswordField, perform the following steps: Under Click to add actions, scroll to the Widgets dropdown, click on the Show/Hide dropdown, and click on Hide. Under Configure actions, under RegistrationLightBoxDP, click on the checkboxes next to EnterPasswordLabel and EnterPasswordField. Click on OK. We have now completed the interactions for State1 of LoginDP. Next, we will facilitate interactions for the LogOutButton. 
Facilitating interactions for the LogOutButton When the LogOutButton is clicked, the OnClick event will perform the following actions: Hide the ErrorMessage on the LoginDP State1 Set text for PasswordTextField and EmailTextField Set panel state for LoginDP to State1 Set variable value for LoggedIn To enable the OnClick event, open the LogInDP State2 in the design area. Click on the LogInOut at coordinates (164,10). In the Widget Interactions and Notes pane, with the Interactions tab selected, click on Add Case…. A Case Editor dialog box will open. In the Case Name field, type LogOut. In the Case Editor dialog, perform the following steps: Create the first action; to hide the ErrorMessage, perform the following steps: Under Click to add actions, scroll to the Widgets dropdown, click on the Show/Hide dropdown, and click on Hide. Under Configure actions, under LoginDP, click on the checkbox next to ErrorMessage. Create the second action; to set text for the PasswordTextField and EmailTextField, perform the following steps: Under Click to add actions, scroll to the Widgets drop-down menu and click on Set Text. Under Configure actions, click the checkbox next to PasswordTextField. Under Set text to, click the dropdown and select value. In the text field, clear any text shown. Under Configure actions, click the checkbox next to EmailTextField. Under Set text to, click on the dropdown and select value. In the text field, enter Email. Create the third action; to set panel state for the LogInDP dynamic panel, perform the following steps: Under Click to add actions, scroll to the Dynamic Panels drop-down menu and click on Set Panel State. Under Configure actions, click on the checkbox next to LoginDP. Next to Select the state, click on the dropdown and select State1. Create the fourth action. To set variable value of LoggedIn, perform the following steps: Under Click to add actions, scroll to the Variables drop-down menu and click on Set Variable Value. Under Configure actions, click on the checkbox next to LoggedIn. Under Set variable to, click on the first dropdown and click on value. In the text field, enter No. Click on OK. We have now completed interactions for State2 of the LoginDP. Next, we will construct interactions for the RegistrationLightBoxDP. Constructing interactions for the RegistrationLightBoxDP When the LoginButton is clicked, the OnClick event hides RegistrationLightBoxDp and sets the Email and Password variable values to the text entered in the EnterEmailField and EnterPasswordField. Also, if text on the RegistrationHeading label is equal to Registration, LoginDP will be set to State2. We will define these actions by creating two cases: UpdateVariables and ShowLogInState. Updating Variables and hiding the RegistrationLightBoxDP In the Widget Manger pane, double-click on the RegistrationLightBoxDP State1 to open in the design area. To define the UpdateVariables case for the OnClick interaction, click on the SubmitButton at coordinates (25,190). In the Widget Interactions and Notes pane with the Interactions tab selected, click on Add Case…. A Case Editor dialog box will open. In the Case Name field, type UpdateVariables. In the Case Editor dialog, perform the following steps: The following screenshot shows Case Editor with the actions defined: Create the first action; to set variable value for the Email and Password variables, perform the following steps: Under Click to add actions, scroll to the Widgets drop-down menu and click on Set Variable Value. 
Under Configure actions, click on the checkbox next to Email. Under Set variable to, click on the first dropdown and select text on widget. Click on the second dropdown and select EnterEmailField. Under Configure actions, click on the checkbox next to Password. Under Set variable to, click on the first dropdown and select text on widget. Click on the second dropdown and select EnterPasswordField. Create the second action; to hide RegistrationLightBoxDP, perform the following steps: Under Click to add actions, scroll to the Widgets dropdown, click on the Show/Hide dropdown and click on Hide. Under Configure actions, click on the checkbox next to RegistrationLightBoxDP. Click on OK. With the UpdateVariables case completed, next we will create the ShowLogInState case. Creating the ShowLoginState case To create the ShowLogInState case, in the Widget Interactions and Notes pane with the Interactions tab selected click on Add Case…. A Case Editor dialog box will open. In the Case Name field, type ShowLogInState. In the Case Editor dialog, perform the following steps: Click on the Add Condition button to create the first condition. In the Condition Builder dialog box, go to the outlined condition box and perform the following steps: In the first dropdown, select text on widget. In the second dropdown, select RegistrationHeadline. In the third dropdown, select equals. In the fourth dropdown, select value. In the fifth dropdown, select Registration. Click on OK. Create the first action; to set text for the WelcomeLabel, perform the following steps: Under Click to add actions, scroll to the Widgets drop-down menu and click on Set Text. Under Configure actions, click on the checkbox next to WelcomeLabel. Under Set text to, click on the dropdown and select value. In the text field, enter Welcome, [[Email]]. Click on OK. Create the second action; to set panel state for the LogInDP dynamic panel, perform the following steps: Under Click to add actions, scroll to the Dynamic Panels drop-down menu and click on Set Panel State. Under Configure actions, click on the checkbox next to LoginDP. Next to Select the state, click on the dropdown and select State2. Create the third action; to set value of the LoggedIn variable, perform the following steps: Under Click to add actions, scroll to the Variables drop-down menu and click on Set Variable Value. Under Configure actions, click on the checkbox next to LoggedIn. Under Set variable to, click on the first dropdown and click on value. In the text field, enter [[Email]]. Click on OK. Under the OnClick event, right-click on the ShowErrorMessage case and click on Toggle IF/ELSE IF. With our Header master updated, we are now ready to refresh data for our Forum repeater. Summary We learned how to leverage masters and pages from our community site to create a new blog site. We enhanced the Header master and refined the interactions for our Header master. Resources for Article: Further resources on this subject: Home Page Structure [article] Axure RP 6 Prototyping Essentials: Advanced Interactions [article] Common design patterns and how to prototype them [article]