
Web Typography

Packt | 13 Jul 2016 | 14 min read
This article by Dario Calonaci, author of Practical Responsive Typography, teaches you about typography: its fascinating mysteries, its sensual shapes, and everything else you ever wanted to know about the subject. Every letter, every curve, and every shape in the written form conveys feeling, so it is important to understand them all if you want to be a better designer. You also need to know how readable your text is, which means setting it up within the natural constraints our eyes and minds impose, understanding how white space influences your message, and considering every form that goes into a piece of writing. This article covers exactly that, and a little more: you will also learn how to approach all of the above in today's number one medium, the World Wide Web. Since 95 percent of the Web is made of typography, according to Oliver Reichenstein, it is only logical that if you want to work on the Web you need to understand typography better.

Through this article you will learn the basics of typography and be introduced to its core features, such as:

- Anatomy
- Line height
- Families
- Kerning

Note that typography, the art of drawing with words, is truly ancient, dating as far back as 3200 BCE, and the very first book on the matter is the splendid Manuale Tipografico by Giambattista Bodoni, published in 1818. Old data and new knowledge considered, everything we use started back then, and every rule born in print is still valid today, even for a medium as different as the Web.

Typeface classification

The most commonly used type classification is based on technical style, and as such it is the one we are going to analyze and use. The families are as follows.

Serifs

Serifs are named after the small details that extend from the ending strokes of their characters. The origin of the word itself is obscure; various explanations have been offered, but none has been accepted as conclusive. The style can be traced back to the Latin alphabets of Roman times, probably originating in the flares left by brush marks in corners, which the carvers later chiseled into stone.

Serifs generally read better in print than on screen, probably because print has had hundreds of years to refine its definition, while screen technology is, on an evolutionary scale, a newborn. With the latest high-definition monitors rivaling print, multiple scientific studies have proven inconclusive, showing no discernible difference in readability between serifs and sans serifs on screen, and today both are used on the Web. Within this general definition there are multiple sub-families, such as Old Style (or Humanist).

Old Style or Humanist

The oldest serifs, dating as far back as the mid-1400s, are recognizable by the diagonal stress on which the characters are built; it is clearly visible, for example, in the e and o of Adobe Jenson.

Transitional serifs

Neither antique nor modern, transitional serifs date back to the 1700s and are numerous. They abandon some, but not all, of the diagonal stress, typically keeping it in the o. Georgia and Baskerville are well-known examples.
Modern serifs

Modern serifs rely on a strong contrast between thick and thin strokes, abandon the diagonal stress for a vertical one, and use straighter serifs. They appeared in the late 1700s. Bodoni and Didot are certainly the most famous typefaces in this family.

Slab serifs

Slab serifs have little to no contrast between strokes, thick serifs, and sometimes fixed widths; their underlying skeleton more closely resembles that of the sans serifs. American Typewriter is the most famous typeface in this family.

Sans serifs

Sans serifs are named for the loss of the decorative serifs; in French, "sans" means "without". They are a more recent invention, born in the late 18th century, and are divided into the following four sub-families.

Grotesque sans

The earliest of the bunch; its appearance is similar to the serifs, with contrasted strokes, but without serifs and with angled terminals. Franklin Gothic is one of the most famous typefaces in this family.

Neo-grotesque sans

Plain looking, with little to no contrast, small apertures, and horizontal terminals. They are among the most common font styles, ranging from Arial and Helvetica to Univers.

Humanist sans

These have a friendly tone owing to their calligraphic style, with a mixture of character widths and, most of the time, contrasted strokes. Gill Sans is the flag-carrier.

Geometric sans

Based on rigorous geometric shapes, these are more modern and are used less often for body copy: they have a general simplicity, but their characters are harder to read. Futura is certainly the most famous geometric font.

Script typefaces

Scripts are based on handwriting, with a cursive aspect and connected letterforms, and are usually classified into two sub-families: formal scripts and casual scripts.

Formal scripts

They are reminiscent of the handwritten letterforms common in the 17th and 18th centuries and are sometimes based on the handwriting of famous people. They are commonly used for elevated, highly elegant designs and are certainly unusable for long body copy. Kunstler Script is a relatively recent formal script.

Casual scripts

Less precise, these resemble a more modern, faster handwriting. They are as recent as the mid-twentieth century. Mistral is certainly the most famous casual script.

Monospaced typefaces

Almost all the aforementioned families are proportional in style: each character takes up space proportional to its width. In a monospaced family every character occupies the same width, and narrower ones, such as i, simply gain white space around them, which sometimes results in odd appearances. Due to their spacing, monospaced faces are not advised for body copy, since the uniform spacing can bring unwanted visual imbalance to the text. Courier is certainly the best-known monospaced typeface.

Display typefaces

The broadest category, display typefaces are meant for small amounts of copy that must draw attention, and they rarely follow rules, borrowing from every family above and expressing every mood. Recently even Blackletters (the very first fonts designed for the very first physical printing machines) have come to be grouped under this category. Danube and Val, for example, are just two of the multitude out there.

Expressing different moods

Along with the division into typographic families, it is also really important, for every project in print and on the Web alike, to know what typefaces express and why.
It takes years of experience to understand these characteristics and the method of using them correctly; here we address only a very basic distinction to help you start. Remember that in typography and type design every curve conveys a different mood, so be patient while studying and designing.

Serifs vs sans serifs

Serifs, through their decorations, their widths, and across every one of their sub-families, convey old, antique, traditional, serious feelings, even in their more modern incarnations; they certainly give a more formal appearance. Sans serifs, on the other hand, are aimed at a more modern, up-to-date world, conveying technological advancement and rationality and, usually though not always, less of a human feeling. They are more mechanical and colder than a serif, unless the author deliberately designed them to be friendlier than the standard ones.

Scripts vs scripts

As noted, scripts come in two types, and as the names suggest, the division is straightforward. Vladimir is elegant, refined, and upper-class looking, expressing feelings such as respect. Arizonia, on the other hand, is not completely informal but is still a schizophrenic mess of strokes and an inconclusive expression of feeling; I am not sure whether to feel amused or offended by its exaggerated familiarity.

Display typefaces

Since display faces all differ from one another, and since no general rule surrounds and defines the family, they can express the whole range of emotions: from apathy to depression, from complete childish involvement and joy to a suited, scary, serious business feeling (the latter usually the territory of some monospaced typefaces). As with every other typeface, and even more so here, every change in weight and style brings a new sentiment to the table: use one in bold and your content will look strong and fierce; change it to a lighter italic and it will seem to move, ready to exit the page. Display faces take years to master, and we advise against using them in your first web work unless you are completely sure of what you are doing.

Every font communicates differently, on a conscious as well as a subconscious level, even within the same typeface; it all comes down to what we are accustomed to. As with color, what a script conveys in European culture can change drastically if the same script is used in advertising for the Asian market. Always do your research first.

Combining typefaces

Combining typefaces is a vital part of your projects, but it is a hard tool to master. It is generally said that you should use no more than two fonts in a design. It is a good rule, but let me explain it, or better, enlarge on it. While typesetting an informational text block, similar to the one you are reading now, stick to that rule: you will express enough contrast and interest while staying balanced, and readers will not get distracted; they will follow the flow and understand the hierarchy of what they are reading. As a designer, though, you are not always working on a pure text block: you could be working with words on packaging or on the Web. If you know enough about typography and your eyes are well trained (usually after years of visual research and of designing with attention), you can break the rules. Contrasting fonts are what give a layout energy, so why not add a third one to bring a better balance between the other two?
As a rule, you can combine fonts when:

- They are not in the same classification. You mix fonts to add contrast and energy, injecting interest and readability into your document, which is why the clash between serif and sans has proven timeless. Pairing two serifs or two sans together works only after extensive trial and error, and you should choose two fonts that carry enough differences. You can usually combine different sub-families, for example a slab serif with a modern serif, or a geometric sans with a grotesque.
- If your scope is readability, find the same structure. A similar height and a similar width work easily when pairing two classifications; but if your scope is aesthetics for small portions of text, you can try completely different structures, such as a slab serif with a geometric sans. You will see that sometimes it does the job!
- Go extreme! This takes more experience to balance, but if you are working with display or script typefaces, it is almost impossible to find something similar without being boring or unreadable. Try mixing them with more simplistic typefaces if the starting point has a lot of decoration; you won't regret the trial!

Typography properties

Now that you know the families, you need the general rules that will make your text and its usage flow like a springtime breeze.

Kerning

Kerning is the adjustment of the space between two characters to achieve a visually balanced word through a visually even distribution of white space. The word originates from the Latin cardo, meaning hinge: when letters were cast in metal on wooden blocks, parts of them were built to hang off the base, giving the next character space to sit closer.

Tracking

Also called letter-spacing, tracking concerns the entire word, not single characters or the whole text block; it changes the density and texture of a text and affects its readability. The word originates from the metal tracks along which the wooden character blocks were moved horizontally. Tracking requires careful setting: too much white space and words no longer appear as single coherent blocks; reduce the white space between letters drastically and the letters themselves become unreadable. As a rule, you want your lines of text to run 50 to 75 characters, including punctuation and spaces, for better readability. Some will ask you to stop as soon as approximately 39 characters are reached, but I tend to differ.

Ligatures

Because of kerning, especially in serifs, two or three characters can clash. Ligatures were born to avoid this; they are stylistic characters that combine two or three letters into a single glyph:

- Standard ligatures are the natural, functional, and most common ones, made between fi, fl, and other letters placed next to an f. They should be used, as they tend to make the text more legible.
- Discretionary ligatures are not functional; they serve a purely decorative purpose. They are commonly designed between Th and st and, as the name suggests, you should use them at your discretion.

Leading

Leading is the space between the baselines of your text; line-height extends the notion to include the height of ascenders and descenders as well. The name arose because, in earlier times, strips of lead were used to add white space between two lines of text. There are many rules in typesetting (none of which has emerged as a clear winner), and everything changes according to the typeface you are using.
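On the Web, these properties map directly onto CSS. The following is a minimal illustrative sketch, not taken from the article; the selector is hypothetical, and every value should be tuned to the typeface you are actually using:

    /* Hypothetical body-copy rule exercising the properties discussed above */
    .body-copy {
      font-kerning: normal;                      /* kerning: use the font's built-in kerning pairs */
      font-variant-ligatures: common-ligatures;  /* enable standard ligatures such as fi and fl */
      letter-spacing: 0.01em;                    /* tracking: small, careful adjustments only */
      line-height: 1.5;                          /* leading: see the percentage rules that follow */
      max-width: 65ch;                           /* keeps the measure in the 50-75 character range */
    }

Since 1ch equals the width of the 0 glyph in the current font, a max-width set in ch units is a handy way to respect the 50-to-75 character rule.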
Mechanical print tends to add 2 points to the type size in use, while a basic rule for digital is to scale the line spacing to 120 percent of the type size, which is called single spacing. As a rule of thumb, anything between 120 and 180 percent will serve you well (with the higher end reserved for typefaces with a large x-height). Just remember that descenders should never touch the ascenders of the next line, otherwise the eye perceives the text as crumpled and you will struggle to tell where one line ends and the next begins.

Summary

The preceding text covers the basics of typography, which you should study and know in order to make the text in your assignments flow better. You now have a greater understanding of typography: what it is, what it is made of, what its characteristics are, what the brain searches for and processes in a text, the lengths it will go to in order to understand it, and the alignment, spacing, and other issues that revolve around this beautiful subject.

The most important rule to remember is that text is there to express something. It may be informative reading, it may express a feeling, as a poem does, or it may be designed to make you feel something specific. Every text has a feeling, an inner tone of voice that can be expressed visually through typography; usually it is the text itself that dictates that feeling and helps you decide which mood to express and how. All the preceding rules, properties, and knowledge are means for you to express it, and the Web now gives you almost as much variety as is available in print, with properties for leading, kerning, tracking, and typographic hierarchy all built into your browser.

Working with JIRA

Packt | 13 Jul 2016 | 14 min read
Atlassian JIRA, as we all know, is primarily an issue tracking and project management system. Since version 7.0, JIRA also comes in different flavors, namely JIRA Core, JIRA Software, and JIRA Service Desk, each packaged to cater to the needs of a different user category: JIRA Core focuses on business teams, JIRA Software on software teams, and JIRA Service Desk on IT and service teams. What many people do not know, though, is the power of its numerous customization capabilities, with which we can turn it into a different system altogether, much more powerful than these pre-packaged flavors! These extra capabilities can take JIRA to the next level, beyond the core issue tracking and project tracking capabilities for which JIRA, arguably, is the best player in the market. In this article by Jobin Kuruvilla, author of the book Jira 7 Development Cookbook - Third Edition, you will learn how to write a workflow condition and how to use Active Objects to store data.

Writing a workflow condition

What are workflow conditions? They determine whether a workflow action is available or not. Considering the importance of workflows in business processes, and how often actions need to be restricted either to a set of people (groups, roles, and so on) or based on some criteria (for example, a field is not empty), writing workflow conditions is almost inevitable. Workflow conditions are created with the help of the workflow-condition module. The key attributes and elements it supports are listed below; visit https://developer.atlassian.com/jiradev/jira-platform/building-jira-add-ons/jira-plugins2-overview/jira-plugin-module-types/workflow-plugin-modules#WorkflowPluginModules-Conditions for more details.

Attributes:

- key: This should be unique within the plugin.
- class: Class to provide contexts for rendered velocity templates. It must implement the com.atlassian.jira.plugin.workflow.WorkflowPluginConditionFactory interface.
- i18n-name-key: The localization key for the human-readable name of the plugin module.
- name: Human-readable name of the workflow condition.

Elements:

- description: Description of the workflow condition.
- condition-class: Class to determine whether the user can see the workflow transition. It must implement com.opensymphony.workflow.Condition. It is recommended to extend the com.atlassian.jira.workflow.condition.AbstractJiraCondition class.
- resource type="velocity": Velocity templates for the workflow condition views.

Getting ready

As usual, create a skeleton plugin, then create an Eclipse project using the skeleton plugin, and we are good to go!

How to do it...

In this recipe, let us assume we are going to develop a workflow condition that limits a transition to users belonging to a specific project role. The following are the steps to write our condition.

The first step is to define the inputs needed to configure the workflow condition. We need to implement the WorkflowPluginFactory interface, which mainly exists to provide velocity parameters to the templates. It will be used to extract the input parameters used in defining the condition. To be clear, the inputs here are not the inputs supplied while performing the workflow action, but the inputs used in defining the condition. The condition factory class, RoleConditionFactory in this case, extends AbstractWorkflowPluginFactory, which implements the WorkflowPluginFactory interface.
There are three abstract methods that we should implement: getVelocityParamsForInput, getVelocityParamsForEdit, and getVelocityParamsForView. All of them, as the names suggest, populate the velocity parameters for the different scenarios. In our example we need to limit the workflow action to a certain project role, so we need to select the project role while defining the condition. The three methods are implemented as follows:

    private static final String ROLE = "role";
    private static final String ROLES = "roles";
    ...
    @Override
    protected void getVelocityParamsForEdit(Map<String, Object> velocityParams, AbstractDescriptor descriptor) {
        velocityParams.put(ROLE, getRole(descriptor));
        velocityParams.put(ROLES, getProjectRoles());
    }

    @Override
    protected void getVelocityParamsForInput(Map<String, Object> velocityParams) {
        velocityParams.put(ROLES, getProjectRoles());
    }

    @Override
    protected void getVelocityParamsForView(Map<String, Object> velocityParams, AbstractDescriptor descriptor) {
        velocityParams.put(ROLE, getRole(descriptor));
    }

Let us look at the methods in detail:

- getVelocityParamsForInput: This method defines the velocity parameters for the input scenario, that is, when the user initially configures the workflow. In our example, we need to display all the project roles so that the user can select one to define the condition. The getProjectRoles method merely returns all the project roles, and the collection of roles is then put into the velocity parameters under the key ROLES.
- getVelocityParamsForView: This method defines the velocity parameters for the view scenario, that is, how the user sees the condition after it is configured. In our example we have defined a role, so we should display it to the user after retrieving it from the workflow descriptor. As you may have noticed, the descriptor, an instance of AbstractDescriptor, is available as an argument of the method. All we need to do is extract the role from the descriptor, which can be done as follows:

    private ProjectRole getRole(AbstractDescriptor descriptor) {
        if (!(descriptor instanceof ConditionDescriptor)) {
            throw new IllegalArgumentException("Descriptor must be a ConditionDescriptor.");
        }
        ConditionDescriptor conditionDescriptor = (ConditionDescriptor) descriptor;
        String role = (String) conditionDescriptor.getArgs().get(ROLE);
        if (role != null && role.trim().length() > 0)
            return getProjectRole(role);
        else
            return null;
    }

  Just check whether the descriptor is a condition descriptor, and then extract the role as shown in the preceding snippet.
- getVelocityParamsForEdit: This method defines the velocity parameters for the edit scenario, that is, when the user modifies an existing condition. Here we need both the options and the selected value, so we put both the project roles collection and the selected role into the velocity parameters.

The second step is to define the velocity templates for each of the three aforementioned scenarios: input, view, and edit. We can use the same template for input and edit, with a simple check that keeps the previously chosen role selected in the edit scenario. Let us look at the templates:

role-condition-input.vm displays all project roles and highlights the already-selected one in the edit mode.
In the input mode, the same template is reused, but the selected role will be null, so a null check is done:

    <tr>
        <td class="fieldLabelArea">Project Role: </td>
        <td nowrap>
            <select name="role" id="role">
                #foreach ($field in $roles)
                    <option value="${field.id}"
                        #if ($role && (${field.id} == ${role.id})) SELECTED #end
                    >$field.name</option>
                #end
            </select>
            <br><font size="1">Select the role in which the user should be present!</font>
        </td>
    </tr>

role-condition.vm displays the selected role:

    #if ($role)
        User should have ${role.name} Role!
    #else
        Role Not Defined
    #end

The third step is to write the actual condition. The condition class should extend the AbstractJiraCondition class, and we need to implement the passesCondition method. In our case, we retrieve the project from the issue, check whether the user has the appropriate project role, and return true if they do:

    public boolean passesCondition(Map transientVars, Map args, PropertySet ps) {
        Issue issue = getIssue(transientVars);
        ApplicationUser user = getCallerUser(transientVars, args);

        Project project = issue.getProjectObject();
        String role = (String) args.get(ROLE);
        Long roleId = new Long(role);

        return projectRoleManager.isUserInProjectRole(user, projectRoleManager.getProjectRole(roleId), project);
    }

The issue on which the condition is checked can be retrieved using the getIssue method implemented in the AbstractJiraCondition class. Similarly, the user can be retrieved using the getCallerUser method. In the preceding method, projectRoleManager is injected in the constructor, as we have seen before. Make sure you use the appropriate scanner annotations for constructor injection if the Atlassian Spring Scanner is defined in the pom.xml; see https://bitbucket.org/atlassian/atlassian-spring-scanner for more details.

We can see that the ROLE key is used to retrieve the project role ID from the args parameter in the passesCondition method. In order for the ROLE key to be available in the args map, we need to override the getDescriptorParams method in the condition factory class, RoleConditionFactory in this case. The getDescriptorParams method returns a map of sanitized parameters, which will be passed into workflow plugin instances, from the values submitted by velocity in array form, given a set of name:value parameters from the plugin configuration page (that is, the input-parameters velocity template). In our case, the method is overridden as follows:

    public Map<String, String> getDescriptorParams(Map<String, Object> conditionParams) {
        if (conditionParams != null && conditionParams.containsKey(ROLE)) {
            return MapBuilder.build(ROLE, extractSingleParam(conditionParams, ROLE));
        }
        // Create a 'hard coded' parameter
        return MapBuilder.emptyMap();
    }

The method builds a map of key:value pairs, where the key is ROLE and the value is the role entered on the input configuration page. The extractSingleParam method is implemented in the AbstractWorkflowPluginFactory class; the extractMultipleParams method can be used if there is more than one parameter to extract!

All that is left now is to populate the atlassian-plugin.xml file with the aforementioned components.
We will use the workflow-condition module, and it looks like the following block of code:

    <workflow-condition key="role-condition" name="Role Based Condition"
        i18n-name-key="role-condition.name"
        class="com.jtricks.jira.workflow.RoleConditionFactory">
        <description key="role-condition.description">Role Based Workflow Condition</description>
        <condition-class>com.jtricks.jira.workflow.RoleCondition</condition-class>
        <resource type="velocity" name="view" location="templates/conditions/role-condition.vm"/>
        <resource type="velocity" name="input-parameters" location="templates/conditions/role-condition-input.vm"/>
        <resource type="velocity" name="edit-parameters" location="templates/conditions/role-condition-input.vm"/>
    </workflow-condition>

Package the plugin and deploy it!

How it works...

After the plugin is deployed, we need to modify the workflow to include the condition. When the condition is first added, it is rendered using the input template. After the condition is added (that is, after selecting the Administrators role), the view is rendered using the view template. If you try to edit it, the screen is rendered using the same input template, and the Administrators role, or whichever role was selected earlier, will be pre-selected. After the workflow is configured, when a user goes to an issue, they will be presented with the transition only if they are a member of the relevant project role in the project where the issue belongs. It is while viewing the issue that the passesCondition method in the condition class is executed.

Using Active Objects to store data

Active Objects is the technology JIRA uses to provide per-plugin storage. It gives plugin developers a protected database area where they can store data belonging to their plugin, which other plugins cannot access. In this recipe, we will see how to store an address entity in the database using Active Objects. You can read more about Active Objects at http://java.net/projects/activeobjects/pages/Home

Getting ready...

Create a skeleton plugin using the Atlassian Plugin SDK.

How to do it...

The following are the steps to use Active Objects in the plugin.

Include the Active Objects dependency in pom.xml, adding the appropriate ao.version, which you can find from the Active Objects JAR bundled in your JIRA:

    <dependency>
        <groupId>com.atlassian.activeobjects</groupId>
        <artifactId>activeobjects-plugin</artifactId>
        <version>${ao.version}</version>
        <scope>provided</scope>
    </dependency>

Add the Active Objects plugin module to the Atlassian plugin descriptor:

    <ao key="ao-module">
        <description>The configuration of the Active Objects service</description>
        <entity>com.jtricks.entity.AddressEntity</entity>
    </ao>

As you can see, the module has a unique key and points to an entity we are going to define later, AddressEntity in this case.

Include a component-import plugin module to register ActiveObjects as a component in atlassian-plugin.xml:

    <component-import key="ao" name="Active Objects components"
        interface="com.atlassian.activeobjects.external.ActiveObjects">
        <description>Access to the Active Objects service</description>
    </component-import>

Note that this step is not required if you are using the Atlassian Spring Scanner; instead, you can use the @ComponentImport annotation while injecting ActiveObjects in the constructor.

Define the entity to be used for data storage.
The entity should be an interface and should extend the net.java.ao.Entity interface. All we need to do in this entity interface is define getter and setter methods for the data we need to store for the entity. For example, we need to store a name, state, and country as part of the address entity, so the AddressEntity interface looks like the following:

    public interface AddressEntity extends Entity {

        public String getName();
        public void setName(String name);

        public String getState();
        public void setState(String state);

        public String getCountry();
        public void setCountry(String country);
    }

By doing this, we have set the entity up to store all three attributes. We can now create, modify, or delete the data using the ActiveObjects component, which can be obtained by injecting it into the constructor:

    private ActiveObjects ao;

    @Inject
    public ManageActiveObjects(@ComponentImport ActiveObjects ao) {
        this.ao = ao;
    }

A new row can be added to the database using the following piece of code:

    AddressEntity addressEntity = ao.create(AddressEntity.class);
    addressEntity.setName(name);
    addressEntity.setState(state);
    addressEntity.setCountry(country);
    addressEntity.save();

Details can be read back either by the ID, which is the primary key, or by querying the data using a net.java.ao.Query object. Using the ID is as simple as the following line:

    AddressEntity addressEntity = ao.get(AddressEntity.class, id);

The Query object can be used as follows:

    AddressEntity[] addressEntities = ao.find(AddressEntity.class, Query.select().where("name = ?", name));
    for (AddressEntity addressEntity : addressEntities) {
        System.out.println("Name:" + addressEntity.getName()
            + ", State:" + addressEntity.getState()
            + ", Country:" + addressEntity.getCountry());
    }

Here, we are querying for all records with a given name. Once you get hold of an entity by either means, you can edit its contents simply by using the setter methods and saving:

    addressEntity.setState(newState);
    addressEntity.save();

Deleting is even simpler:

    ao.delete(addressEntity);

How it works...

Behind the scenes, a separate table is created in the JIRA database for every entity we add, and the Active Objects service interacts with these tables to do the work. If you look at the database, a table named AO_{SOME_HEX}_MY_OBJECT is created for an entity named MyObject belonging to a plugin with the key com.example.ao.myplugin, where:

- AO is a common prefix.
- SOME_HEX is the first six characters of the hexadecimal value of the hash of the plugin key com.example.ao.myplugin.
- MY_OBJECT is the upper-case translation of the entity class name MyObject.

For every attribute with a getter method getSomeAttribute defined in the entity interface, a column named SOME_ATTRIBUTE is created in the table, following the Java Beans naming convention: the words are separated by an underscore and kept in upper case. In our AddressEntity example, the table is ao_a2a665_address_entity. If you navigate to Administration | System | Advanced | Plugin Data Storage, you can see all the tables created using Active Objects; the table created by our example plugin is listed along with the tables created by other standard JIRA plugins.
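To see how the snippets above fit together, here is a consolidated sketch, not taken from the book, that assembles them into a single hypothetical service class; the class and method names are illustrative, and error handling is omitted for brevity:

    import javax.inject.Inject;
    import javax.inject.Named;

    import com.atlassian.activeobjects.external.ActiveObjects;
    import com.atlassian.plugin.spring.scanner.annotation.imports.ComponentImport;
    import com.jtricks.entity.AddressEntity;

    import net.java.ao.Query;

    // Hypothetical service wrapping the Active Objects calls shown in this recipe
    @Named
    public class AddressService {

        private final ActiveObjects ao;

        @Inject
        public AddressService(@ComponentImport ActiveObjects ao) {
            this.ao = ao;
        }

        // Create and persist a new address row
        public AddressEntity add(String name, String state, String country) {
            AddressEntity address = ao.create(AddressEntity.class);
            address.setName(name);
            address.setState(state);
            address.setCountry(country);
            address.save();
            return address;
        }

        // Find all addresses with the given name
        public AddressEntity[] findByName(String name) {
            return ao.find(AddressEntity.class, Query.select().where("name = ?", name));
        }

        // Update the state of an existing row
        public void updateState(AddressEntity address, String newState) {
            address.setState(newState);
            address.save();
        }

        // Remove a row
        public void remove(AddressEntity address) {
            ao.delete(address);
        }
    }

Keeping all the Active Objects calls behind one small class like this keeps the rest of the plugin decoupled from the storage details.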
Lots more about Active Objects can be read at https://developer.atlassian.com/docs/atlassian-platform-common-components/active-objects

Summary

In this article, just a couple of JIRA capabilities were explained. For more information, you can refer to Jira 7 Development Cookbook, Third Edition, your one-stop resource for mastering JIRA extension and customization. The book teaches you how to create your own JIRA plugins, customize the look and feel of your JIRA UI, and work with workflows, issues, custom fields, and much more.

Before You Begin

Packt | 13 Jul 2016 | 14 min read
This article by Ashley Chiasson, author of the book Mastering Articulate Storyline, provides you with an introduction to the purpose of that book and to best practices related to e-learning product development. In this article, we will cover the following topics:

- Pushing Articulate Storyline to the limit
- Best practices
- How to be mindful of reusability
- Methods for organizing your project
- The differences between storyboarding and rapid development
- Ways of streamlining your development

Pushing Articulate Storyline to the limit

The purpose of the book is really to get you comfortable with pushing Articulate Storyline to its limits. Doing so may also broaden your imagination, allowing you to push your creativity to its limits. There are so many things you can do within Storyline, and a lot of those features, interactions, and functions are overlooked simply because they aren't used all that often. Often the basic functionality overshadows the more advanced functions because it is easier, it usually addresses the need, and it takes less time to learn. That's understandable, but this book will open your mind to many more things possible within the tool. You'll get excited, frustrated, excited again, and probably frustrated a few more times, but with all of the practical activities to follow along with (or reverse engineer), you'll be mastering Articulate Storyline and pushing it to its limits in no time! If you don't quite get one of the concepts explained, don't worry: you'll always have access to the book and the activity downloads as a handy reference or refresher.

Best practices

Before you get too far into development, it's important to take some steps to streamline your approach by establishing best practices; doing so will help you become more organized and efficient. Everyone has their own process, so this is by no means a prescribed format for the proper way of doing things. These are just recommendations, from personal experience, that have proven effective for an e-learning developer. Please note that these best practices are not necessarily Storyline-related; they are worth considering ahead of development in any e-learning project.

Your best practices will likely be project-specific, shaped by how your clients or your organization's internal processes work. Sometimes you'll be provided with storyboards ahead of development, and sometimes you'll be expected to develop rapidly. Sometimes you'll be provided with all multimedia ahead of development, and sometimes only after an alpha review. You may want to do a content dump at the beginning of your development process, or you may want to work through each slide from start to finish before moving on. Through experience, and by observing what other developers do, you will learn how to define and adapt your own best practices.

When a new project comes along, it's always a good idea to employ some form of organization. There are many great reasons for this, including being mindful of reusability, maintaining an organized project and file structure, and streamlining your development process. This article aims to provide you with as much information as necessary to ensure that you organize your projects for better efficiency, and to explain why these methods should always be considered best practices.
How to be mindful of reusability

When I think about reusability in e-learning, I think about objects and content that can be reused in a variety of contexts. Developers often run into this when working on large projects or in industries that involve trade-specific content. When working on multiple projects within one sector, you may come across assets used in one course (for example, a 3D model of an aircraft) that can be reused in another course with the same content base. Being able to reuse content and assets can come in handy, as it saves resources in the long run: reusing previously established assets (if you are permitted to do so, of course) reduces the development time that various departments and individuals need to spend.

Best practices for reusability might include creating your own content repository and defining a file naming convention that makes it easy to find what you're looking for quickly. If you're extra savvy, you can create a metadata-coded database, but that may require more effort than you have available. While it does take extra time to come up with a file naming convention or to apply metadata tags to every asset in your repository, the goal is to make your life easier in the long run. Much like the dreaded administrative tasks required of small business owners, it's not the most sought-after task, but it's a necessary one, especially if you truly want to optimize efficiency!

Within Articulate Storyline, you may want to maintain a repository of themes and interactions, as you can reuse elements of these assets in future development, and they can save you a lot of time. Most projects, in the early stages, require an initial prototype for the client to sign off on the general look and feel. In this prototyping phase, having a repository of themes and interactions can make the process much smoother, because you can call on previous work to easily facilitate the elemental design of a new project. Storyline allows you to import content from many sources (for example, PowerPoint, Articulate Engage, Articulate Quizmaker, and more), so you aren't limited to reusing Storyline interactions and themes. Just structure your repository in an organized manner and you will be able to easily locate the files and file types you want to use at a later date.

Another great Storyline feature when it comes to reusability is Question Banks! Most courses contain questions, knowledge checks, or assessments, whatever you want to call them, but all too seldom do people think about compiling these questions in one neat place for reuse later on. Instead, people often add new question slides, add the questions, and go on their merry development way. If you're one of those people, you need to STOP. Your life will be entirely changed by the concept of question banks; if not entirely, at least a little bit, or at least the part of your life that dabbles in development will change in some small way. Question banks allow you to create a bank of questions (who would have thought?) and call on those questions at any time for placement within your story: reusability at its finest, at least in Storyline.

Methods for organizing your project

Organizing your project is a necessary evil. Surely there is someone out there who loves this process, but for those who just want to develop all day and all night, there may be a smaller emphasis placed on organization.
However, you can take some simple steps to organize your project that can be reused for future projects. Within Storyline, the organizational emphasis of this article is on using Story View and optimizing the use of scenes. Depending on the size of your project, these two elements of Storyline can make a world of difference when it comes to making sense of all the content you've authored, and to making the structure of that content more palatable.

Using the Story View

Story View is such a great feature of Storyline! It provides a bird's-eye view of your project, or story, and essentially shows you a visual blueprint of all scenes and slides. This is particularly helpful in projects that involve a lot of branching. Instead of seeing the individual parts, you see the parts as they make up the whole; Gestalt psychology would be proud! You can also use Story View to plan the movement of existing scenes or slides if content isn't lining up quite the way you want it to.

Optimizing scene use

Scenes play a very big role in maintaining organization within your story. They group slides into smaller segments of the entire story and are typically defined at logical breaks; however, it's entirely up to you how you group your slides. If the story you're working on consists of multiple topics or modules, each topic or module would logically become a new scene. Visually, scenes work in tandem with Story View: while you're in Story View, you can clearly see the various scenes and move things around appropriately. Functionally, scenes create submenus in the main Storyline menu, though you can change this if you don't want each scene delineated in the menu.

From an organization and control perspective, scenes help you rein in unwieldy and overwhelming content. This particularly comes in handy with large courses, where you can easily lose your place trying to track down a specific slide in a sea of 150. Scenes let you chunk content into more manageable pieces within your story and will likely save you development and revision time. Using scenes also helps when previewing your story: instead of waiting for 150 slides to load each time you preview, you can choose to preview a single scene and wait only for its slides to load, perhaps 15 slides instead of the entire course. Scenes really are a magical thing!

Asset management

Asset management is just what it sounds like: managing your assets. Your assets may come in many forms, for example media assets (your draft and completed images, video, and audio), customer-furnished assets (files provided by the client, which could be raw images, video, audio, PowerPoint or Word documents, and so on), or content output (output from whichever authoring tool you're using). If you've worked on large projects, you will know how unwieldy these assets can become if you don't have a system in place for keeping everything organized. This is where the management element comes into play.

Structuring your folders

Setting up a consistent folder structure is really important when it comes to managing your assets. Structuring your folders may seem like a daunting administrative task, but once you determine a structure that works well for you and your projects, you can copy that structure for each new project.
So yeah, there is a little bit of up-front effort, but the headache it will save you in the long run, when it comes to tracking down assets for reuse, is worth it! Again, this folder structure is in no way prescribed, but it is a recommendation that has worked well. It may look overwhelming, but it's really not that bad; there are likely more elements accounted for here than you may need for your project, but all the main elements are included, and you can customize it as you see fit. This is how the folder structure breaks down:

Project Folder
- 100 Project Management: depending on how large the project is, this folder may have subfolders, for example:
  - Meeting Minutes
  - Action Tracking
  - Risk Management
  - Contracts
  - Invoices
- 200 Development: this folder typically contains subfolders related to development, for example:
  - Client-Furnished Information (CFI)
  - Scripts and Storyboards
    - Scripts
    - Audio Narration
    - Storyboards
  - Media
    - Video
    - Audio
      - Draft Audio
      - Final Audio
    - Images
  - Flash Output
  - Quality Assurance
- 300 Client: this folder includes anything sent to the client for review, for example:
  - Delivered
  - Review Comments
  - Final

Within these folders there may be other subfolders, but this is the general structure that has proven effective for me. When it comes to filenames, you may wish to follow a file naming convention dictated by the client, or follow an internal convention that indicates the project, type of media, asset number, and version number, for example PROJECT_A_001_01. If there are multiple courses in one project, you may also want to add an arbitrary course number to keep tabs on which asset belongs to which course. Once a file naming convention has been determined, the filenames can be managed in a spreadsheet housed within the main 200 > Media folder. The basic goal of this recommended folder structure is to organize your course assets and break them into three groups to further help with organization. If this folder structure sounds functional for your purposes, go ahead and download the ready-made version of it.

Storyboarding and rapid prototyping

Storyboarding and rapid prototyping will likely make their way into your development glossary, if they haven't already, so they're important concepts when it comes to streamlining your development. Through experience you'll learn how each of these concepts can help you become more efficient, and this section discusses some benefits and drawbacks of both.

Storyboarding is a process wherein the sequence of an e-learning project is laid out visually or textually. It allows instructional designers to lay out the project's screens, topics, teaching points, onscreen text, and media descriptions. Storyboards are not limited to those elements, and there are many variations, but these are the ones most commonly represented. Other elements may include the audio narration script, assessment items, high-level learning objectives, filenames, source or reference images, or screenshots illustrating the anticipated media asset or screen to be developed.

The good thing about storyboarding is that it lets you organize the content and provides documentation that can be reviewed prior to entry into an authoring environment.
Storyboarding gives subject matter experts a great opportunity to iron out textual content and ensure accuracy, and it can help developers by reducing small text changes once in the authoring environment. These changes are indeed small, but they add up quickly and can throw a wrench into your well-oiled, efficient development machine.

Storyboarding also has its downsides. It is an extra step in the development process and may be perceived by potential clients as an additional, unnecessary expense. Because storyboards do not depict the final product, reviewers may have difficulty assessing content they cannot contextualize without seeing the final product. This is especially true when reviewing a storyboard that involves complex branching scenarios.

Rapid prototyping, on the other hand, involves working within the authoring environment, in this case Articulate Storyline, to develop your e-learning project slide by slide. This may occur while developing an initial prototype, but it may also continue throughout the life cycle of the project as a means of eliminating the storyboarding step from the development process. With rapid prototyping, reviewers have the added context of visuals and functionality: they review a proposed version of the end product, so their comments may be more streamlined and their review may take less time to conduct. However, reviewers may also be overloaded by visual stimuli, which can hamper their ability to review for content accuracy. Additionally, rapid prototyping may become less rapid when it comes to revising complex interactions.

In both situations there are clear advantages and disadvantages, so a best practice is to determine an appropriate way ahead for development and understand which process best suits the project you are authoring.

Streamlining your development

Storyline provides many ways to streamline your development. A sampling of the topics discussed in the book includes the following:

- Setting up auto-save
- Setting up defaults
- Keyboard shortcuts
- Dockable panels
- Using the format painter
- Using the eyedropper
- Cue points
- Duplicating objects
- Naming objects

Summary

This article introduced you to the concept of pushing Articulate Storyline 2 to its limits, provided some tips and tricks on best practices and being mindful of reusability, identified a functional folder structure and explained the role organization will play in your Storyline development, explained the difference between storyboarding and rapid prototyping, and gave you a taste of some topics that may help you streamline your development process. You are now armed with all of my best advice for staying productive and organized, and you should be ready to start a new Storyline project!

Responsive Applications with Asynchronous Programming

Packt | 13 Jul 2016 | 9 min read
In this article, Dirk Strauss, author of the book C# Programming Cookbook, sheds some light on how to handle events, exceptions, and tasks in asynchronous programming, keeping your application responsive.

Handling tasks in asynchronous programming

The Task-based Asynchronous Pattern (TAP) is now the recommended way to create asynchronous code. A task executes asynchronously on a thread from the thread pool rather than synchronously on the main thread of your application, and it allows us to check the task's state by calling its Status property.

Getting ready

We will create a task to read a very large text file. This will be accomplished using an asynchronous Task.

How to do it…

Create a large text file (we called ours taskFile.txt) and place it in your C:\temp folder.

In the AsyncDemo class, create a method called ReadBigFile() that returns a Task<int>, which will be used to return the number of bytes read from our big text file:

    public Task<int> ReadBigFile()
    {
    }

Add the following code to open and read the file bytes. Note that we are using the ReadAsync() method, which asynchronously reads a sequence of bytes from the stream and advances the position in that stream by the number of bytes read, and that we are reading those bytes into a buffer:

    public Task<int> ReadBigFile()
    {
        var bigFile = File.OpenRead(@"C:\temp\taskFile.txt");
        var bigFileBuffer = new byte[bigFile.Length];
        var readBytes = bigFile.ReadAsync(bigFileBuffer, 0, (int)bigFile.Length);
        return readBytes;
    }

The exceptions you can expect to handle from the ReadAsync() method are ArgumentNullException, ArgumentOutOfRangeException, ArgumentException, NotSupportedException, ObjectDisposedException, and InvalidOperationException.

Finally, add the last section of code just after the var readBytes = bigFile.ReadAsync(bigFileBuffer, 0, (int)bigFile.Length); line. It uses a lambda expression to specify the work the task needs to perform; in this case, reporting the task's status and disposing of the file stream:

    public Task<int> ReadBigFile()
    {
        var bigFile = File.OpenRead(@"C:\temp\taskFile.txt");
        var bigFileBuffer = new byte[bigFile.Length];
        var readBytes = bigFile.ReadAsync(bigFileBuffer, 0, (int)bigFile.Length);
        readBytes.ContinueWith(task =>
        {
            if (task.Status == TaskStatus.Running)
                Console.WriteLine("Running");
            else if (task.Status == TaskStatus.RanToCompletion)
                Console.WriteLine("RanToCompletion");
            else if (task.Status == TaskStatus.Faulted)
                Console.WriteLine("Faulted");
            bigFile.Dispose();
        });
        return readBytes;
    }

If you have not done so already, add a button to your Windows Forms application's form designer: on the winformAsync form designer, open the Toolbox, select the Button control found under the All Windows Forms node, and drag the control onto the Form1 designer. With the button control selected, double-click the control to create the click event in the code-behind. Visual Studio will insert the event code for you:

    namespace winformAsync
    {
        public partial class Form1 : Form
        {
            public Form1()
            {
                InitializeComponent();
            }

            private void button1_Click(object sender, EventArgs e)
            {
            }
        }
    }

Change the button1_Click event by adding the async keyword to it; this is an example of a void-returning asynchronous method:

    private async void button1_Click(object sender, EventArgs e)
    {
    }

Now add code to call the AsyncDemo class's ReadBigFile() method asynchronously,
remembering to read the result of the method (the bytes read) into an integer variable:

    private async void button1_Click(object sender, EventArgs e)
    {
        Console.WriteLine("Start file read");
        Chapter6.AsyncDemo oAsync = new Chapter6.AsyncDemo();
        int readResult = await oAsync.ReadBigFile();
        Console.WriteLine("Bytes read = " + readResult);
    }

Running your application will display the Windows Forms application. Before clicking on the button1 button, ensure that the Output window is visible: from the View menu, click on the Output menu item, or press Ctrl + Alt + O, to display it. This allows us to see the Console.WriteLine() outputs we added to the code in the Chapter6 class and in the Windows application. Clicking on the button1 button writes the outputs to the Output window, and throughout the code execution the form remains responsive. Note that the output on your machine will differ, because the file you used is different from mine.

How it works…

The task is executed on a separate thread from the thread pool. This allows the application to remain responsive while the large file is being processed. Tasks can be used in multiple ways to improve your code; this recipe is but one example.
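As a side note that is not from the book: when you do not need to inspect TaskStatus through a continuation, the same read can be expressed more compactly by awaiting the stream directly. The following is a minimal sketch, assuming the same C:\temp\taskFile.txt file as above; the method name is illustrative:

    // Illustrative alternative to ReadBigFile(), not part of the recipe.
    // await unwraps the task's result and surfaces any exception directly,
    // and the using block disposes the stream even if the read faults.
    public async Task<int> ReadBigFileAsync()
    {
        using (var bigFile = File.OpenRead(@"C:\temp\taskFile.txt"))
        {
            var bigFileBuffer = new byte[bigFile.Length];
            return await bigFile.ReadAsync(bigFileBuffer, 0, (int)bigFile.Length);
        }
    }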
The MainLog folder will remain empty. In our AsyncDemo class, write a method to read the main logfile in the MainLog folder:

private async Task<int> ReadMainLog()
{
    var bigFile = File.OpenRead(@"C:\temp\Log\MainLog\taskFile.txt");
    var bigFileBuffer = new byte[bigFile.Length];
    var readBytes = bigFile.ReadAsync(bigFileBuffer, 0, (int)bigFile.Length);
    await readBytes.ContinueWith(task =>
    {
        if (task.Status == TaskStatus.RanToCompletion)
            Console.WriteLine("Main Log RanToCompletion");
        else if (task.Status == TaskStatus.Faulted)
            Console.WriteLine("Main Log Faulted");
        bigFile.Dispose();
    });
    return await readBytes;
}

Create a second method to read the backup file in the BackupLog folder:

private async Task<int> ReadBackupLog()
{
    var bigFile = File.OpenRead(@"C:\temp\Log\BackupLog\taskFile.txt");
    var bigFileBuffer = new byte[bigFile.Length];
    var readBytes = bigFile.ReadAsync(bigFileBuffer, 0, (int)bigFile.Length);
    await readBytes.ContinueWith(task =>
    {
        if (task.Status == TaskStatus.RanToCompletion)
            Console.WriteLine("Backup Log RanToCompletion");
        else if (task.Status == TaskStatus.Faulted)
            Console.WriteLine("Backup Log Faulted");
        bigFile.Dispose();
    });
    return await readBytes;
}

In actual fact, we would probably create only a single method to read the logfiles, passing the path as a parameter. In a production application, creating a class and overriding a method to read the different logfile locations would be a better approach. For the purposes of this recipe, however, we specifically wanted to create two separate methods so that the different calls to the asynchronous methods are clearly visible in the code.

We will then create a main ReadLogFile() method that tries to read the main logfile. As we have not created the logfile in the MainLog folder, the code will throw a FileNotFoundException. It will then run the asynchronous method and await it in the catch block of the ReadLogFile() method (something that was impossible in previous versions of C#), returning the bytes read to the calling code:

public async Task<int> ReadLogFile()
{
    int returnBytes = -1;
    try
    {
        returnBytes = await ReadMainLog();
    }
    catch (Exception ex)
    {
        try
        {
            returnBytes = await ReadBackupLog();
        }
        catch (Exception)
        {
            throw;
        }
    }
    return returnBytes;
}

If you have not done so in the previous recipe, add a button to your Windows Forms application's Form designer. On the winformAsync form designer, open Toolbox and select the Button control, which is found under the All Windows Forms node. Drag the button control onto the Form1 designer. With the button control selected, double-click on the control to create the click event in the code behind. Visual Studio will insert the event code for you:

namespace winformAsync
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void button1_Click(object sender, EventArgs e)
        {
        }
    }
}

Change the button1_Click event and add the async keyword to the click event. This is an example of a void-returning asynchronous method:

private async void button1_Click(object sender, EventArgs e)
{
}

Next, we will write the code to create a new instance of the AsyncDemo class and attempt to read the main logfile.
In a real-world example, it is at this point that the code does not know that the main logfile does not exist:

private async void button1_Click(object sender, EventArgs e)
{
    Console.WriteLine("Read backup file");
    Chapter6.AsyncDemo oAsync = new Chapter6.AsyncDemo();
    int readResult = await oAsync.ReadLogFile();
    Console.WriteLine("Bytes read = " + readResult);
}

Running your application will display the Windows Forms application. Before clicking on the button1 button, ensure that the Output window is visible: from the View menu, click on the Output menu item or press Ctrl + Alt + O to display the Output window. This will allow us to see the Console.WriteLine() outputs as we have added them to the code in the Chapter6 class and in the Windows application. To simulate a file not found exception, we left the file out of the MainLog folder. You will see that the exception is thrown, and the catch block runs the code to read the backup logfile instead.

How it works…

The fact that we can await in catch and finally blocks allows developers much more flexibility, because asynchronous results can consistently be awaited throughout the application. As you can see from the code we wrote, as soon as the exception was thrown, we asynchronously called the file read method for the backup file.

Summary

In this article, we looked at how TAP is now the recommended way to create asynchronous code, and how tasks can be used in multiple ways to improve your code while keeping the application responsive as a large file is being processed. We also saw how exception handling in asynchronous programming has always been a challenge, and how the ability to await inside catch and finally blocks makes handling such exceptions much cleaner.

Resources for Article: Further resources on this subject: Functional Programming in C# [article] Creating a sample C#.NET application [article]
Hacking Android Apps Using the Xposed Framework

Packt
13 Jul 2016
6 min read
In this article by Srinivasa Rao Kotipalli and Mohammed A. Imran, authors of Hacking Android, we will discuss Android security, which is one of the most prominent emerging topics today. Attacks on mobile devices can be grouped into various categories, such as exploiting vulnerabilities in the kernel, attacking vulnerable apps, tricking users into downloading and running malware and thus stealing personal data from the device, and running misconfigured services on the device. OWASP has also released the Mobile top 10 list, helping the community better understand mobile security as a whole. Although it is hard to cover a lot in a single article, let's look at an interesting topic: the runtime manipulation of Android applications. Runtime manipulation is controlling application flow at runtime. There are multiple tools and techniques out there to perform runtime manipulation on Android. This article discusses using the Xposed framework to hook onto Android apps. (For more resources related to this topic, see here.)

Let's begin! Xposed is a framework that enables developers to write custom modules for hooking onto Android apps and thus modifying their flow at runtime. It was released by rovo89 in 2012. It works by placing the app_process binary in the /system/bin/ directory, replacing the original app_process binary. app_process is the binary responsible for starting the zygote process. Basically, when an Android phone is booted, init runs /system/bin/app_process and gives the resulting process the name Zygote. We can hook onto any process that is forked from the Zygote process using the Xposed framework.

To demonstrate the capabilities of the Xposed framework, I have developed a custom vulnerable application. The package name of the vulnerable app is com.androidpentesting.hackingandroidvulnapp1. The code in the following screenshot shows how the vulnerable application works: this code has a method, setOutput, that is called when the button is clicked. When setOutput is called, the value of i is passed to it as an argument. If you notice, the value of i is initialized to 0. Inside the setOutput function, there is a check to see whether the value of i is equal to 1. If it is, this application will display the text Cracked. But since the initialized value is 0, this app always displays the text You cant crack it. Running the application in an emulator looks like this:

Now, our goal is to write an Xposed module to modify the functionality of this app at runtime and thus print the text Cracked. First, download and install the Xposed APK file in your emulator. Xposed can be downloaded from the following link:

http://dl-xda.xposed.info/modules/de.robv.android.xposed.installer_v32_de4f0d.apk

Install this downloaded APK file using the following command:

adb install [file name].apk

Once you've installed this app, launch it, and you should see the following screen: At this stage, make sure that you have everything set up before you proceed. Once you are done with the setup, navigate to the Modules tab, where we can see all the installed Xposed modules. The following figure shows that we currently don't have any modules installed. We will now create a new module to achieve the goal of printing the text Cracked in the target application shown earlier. We use Android Studio to develop this custom module. Here is the step-by-step procedure to simplify the process: The first step is to create a new project in Android Studio by choosing the Add No Activity option, as shown in the following screenshot.
I named it XposedModule. The next step is to add the XposedBridgeAPI library so that we can use Xposed-specific methods within the module. Download the library from the following link:

http://forum.xda-developers.com/attachment.php?attachmentid=2748878&d=1400342298

Create a folder called provided within the app directory and place this library inside the provided directory. Now, create a folder called assets inside the app/src/main/ directory, and create a new file called xposed_init. We will add contents to this file in a later step. After completing the first three steps, our project directory structure should look like this:

Now, open the build.gradle file under the app folder, and add the following line under the dependencies section:

provided files('provided/[file name of the Xposed library.jar]')

In my case, it looks like this: Create a new class and name it XposedClass, as shown here: After you're done creating the new class, the project structure should look as shown in the following screenshot. Now, open the xposed_init file that we created earlier, and place the following content in it:

com.androidpentesting.xposedmodule.XposedClass

This looks like the following screenshot: Now, let's provide some information about the module by adding the following content to AndroidManifest.xml:

<meta-data
    android:name="xposedmodule"
    android:value="true" />
<meta-data
    android:name="xposeddescription"
    android:value="xposed module to bypass the validation" />
<meta-data
    android:name="xposedminversion"
    android:value="54" />

Make sure that you add this content to the application section as shown here. Finally, write the actual code within the XposedClass to add a hook. Here is the piece of code that actually bypasses the validation being done in the target application (see the reconstructed sketch after the summary below). Here's what we have done in the previous code:

Firstly, our class is implementing IXposedHookLoadPackage
We wrote the method implementation for the handleLoadPackage method; this is mandatory when we implement IXposedHookLoadPackage
We set up the string values for classToHook and functionToHook
An if condition is written to see whether the package name equals the target package name
If the package name matches, execute the custom code provided inside beforeHookedMethod
Within the beforeHookedMethod, we are setting the value of i to 1, and thus when the button is clicked, the value of i will be considered as 1, and the text Cracked will be displayed as a toast message

Compile and run this application just like any other Android app, and then check the Modules section of the Xposed application. You should see a new module with the name XposedModule, as shown here: Select the module and reboot the emulator. Once the emulator has restarted, run the target application and click on the Crack Me button. As you can see in the screenshot, we have modified the application's functionality at runtime without actually modifying its original code. We can also see the logs by tapping on the Logs section. You can observe the XposedBridge.log method in the source code shown previously. This is the method used to log data from the module.

Summary

Xposed without a doubt is one of the best frameworks available out there. Understanding frameworks such as Xposed is essential to understanding Android application security. This article demonstrated the capabilities of the Xposed framework to manipulate apps at runtime. A lot of other interesting things can be done using Xposed, such as bypassing root detection and SSL pinning.
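Since the hook code itself appears only as a screenshot in the original article, the following is a minimal reconstruction of what such a module could look like. The hooked class name, the int parameter type of setOutput, and the log message are assumptions inferred from the article's description, not the authors' verbatim code:

import de.robv.android.xposed.IXposedHookLoadPackage;
import de.robv.android.xposed.XC_MethodHook;
import de.robv.android.xposed.XposedBridge;
import de.robv.android.xposed.callbacks.XC_LoadPackage.LoadPackageParam;
import static de.robv.android.xposed.XposedHelpers.findAndHookMethod;

public class XposedClass implements IXposedHookLoadPackage {

    // Assumed names, per the article's description of the vulnerable app.
    private static final String classToHook =
            "com.androidpentesting.hackingandroidvulnapp1.MainActivity";
    private static final String functionToHook = "setOutput";

    @Override
    public void handleLoadPackage(LoadPackageParam lpparam) throws Throwable {
        if (!lpparam.packageName.equals(
                "com.androidpentesting.hackingandroidvulnapp1")) {
            return; // only hook the target application
        }
        findAndHookMethod(classToHook, lpparam.classLoader, functionToHook,
                int.class, new XC_MethodHook() {
                    @Override
                    protected void beforeHookedMethod(MethodHookParam param)
                            throws Throwable {
                        // Force the argument i to 1 so the app prints "Cracked".
                        param.args[0] = 1;
                        XposedBridge.log("setOutput called; argument forced to 1");
                    }
                });
    }
}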
Further resources on this subject: Speeding up Gradle builds for Android [article] https://www.packtpub.com/books/content/incident-response-and-live-analysis [article] Mobile Forensics [article]
Deploying a Docker Container to the Cloud, Part 2

Darwin Corn
13 Jul 2016
3 min read
I previously wrote about app containerization using Docker, and if you're unfamiliar with that concept, please read that post first. In this post, I'm going to pick up where I left off, with a fully containerized frontend ember application showcasing my music that I now want to share with the world. Speaking of that app in part 1: provided you don't have a firewall blocking port 80 inbound, and you've come straight over from the previous post, you're serving a web app to everyone on your internal network right now. You should, of course, map it to only allow 127.0.0.1 on port 80 instead of 0.0.0.0 (everyone). In this post I am going to focus on my mainstream cloud platform of choice, Google Cloud Platform (GCP). It will only cost ~$5/month, with room to house more similarly simple apps: MVPs, proofs of concept, and the like. Go ahead and sign up for the free GCP trial, and create a project. Templates are useful for rapid scaling and minimizing the learning curve; but for the purpose of learning how this actually works, and for minimizing financial impact, they're next to useless. First, you need to get the container into the private registry that comes with every GCP project. Okay, let's get started. You need to tag the image so that Google Cloud Platform knows where to put it. Then you're going to use the gcloud command-line tool to push it to that cloud registry.

$ docker tag docker-demo us.gcr.io/[YOUR PROJECT ID HERE]/docker-demo
$ gcloud docker push us.gcr.io/[YOUR PROJECT ID HERE]/docker-demo

Congratulations, you have your first container in the cloud! Now let's deploy it. We're going to use Google's Compute Engine, not their Container Engine (besides the registry, but no cluster templates for us). Refer to this article, and if you're using your own app, you'll have to write up a container manifest. If you're using the docker-demo app from the first article, make sure to run a git pull to get an up-to-date version of the repo, and notice that a containers.yaml manifest file has been added to the root of the application.

containers.yaml

apiVersion: v1
kind: Pod
metadata:
  name: docker-demo
spec:
  containers:
    - name: docker-demo
      image: us.gcr.io/[YOUR PROJECT ID HERE]/docker-demo
      imagePullPolicy: Always
      ports:
        - containerPort: 80
          hostPort: 80

That file instructs the container-vm (purpose-built for running containers) based VM we're about to create to pull the image and run it. Now let's run the gcloud command to create the VM in the cloud that will host the image, telling it to use the manifest.

$ gcloud config set project [YOUR PROJECT ID HERE]
$ gcloud compute instances create docker-demo --image container-vm --metadata-from-file google-container-manifest=containers.yaml --zone us-central1-a --machine-type f1-micro

Launch the GCP Developer Console and set the firewall on your shiny new VM to 'Allow HTTP traffic'. Or run the following command.

$ gcloud compute instances add-tags docker-demo --tags http-server --zone us-central1-a

Either way, the previous gcloud compute instances create command should've given you the External (Public) IP of the VM, and navigating there from your browser will show the app. Congrats, you've now deployed a fully containerized web application to the cloud! If you're leaving this up, remember to reserve a static IP for your VM. I recommend consulting some of the documentation I've referenced here to monitor VM and container health as well.
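The static IP reservation mentioned above can also be done from the same gcloud CLI. The following is a rough sketch assuming the VM and zone names used earlier; the address name docker-demo-ip is our own invention, and the exact flags may vary between gcloud versions:

# Reserve a static external IP in the VM's region (us-central1 here).
$ gcloud compute addresses create docker-demo-ip --region us-central1

# Look up the reserved address.
$ gcloud compute addresses describe docker-demo-ip --region us-central1

# Swap the VM's ephemeral external IP for the reserved one.
$ gcloud compute instances delete-access-config docker-demo \
    --access-config-name "external-nat" --zone us-central1-a
$ gcloud compute instances add-access-config docker-demo \
    --access-config-name "external-nat" --address [RESERVED IP HERE] \
    --zone us-central1-a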
He is a mid-level professional with diverse experience in the information technology world.
Working with Spring Tag Libraries

Packt
13 Jul 2016
26 min read
In this article by Amuthan G, the author of the book Spring MVC Beginners Guide - Second Edition, you are going to learn more about the various tags that are available as part of the Spring tag libraries. (For more resources related to this topic, see here.) After reading this article, you will have a good idea about the following topics:

JavaServer Pages Standard Tag Library (JSTL)
Serving and processing web forms
Form-binding and whitelisting
Spring tag libraries

JavaServer Pages Standard Tag Library

JavaServer Pages (JSP) is a technology that lets you embed Java code inside HTML pages. This code can be inserted by means of <% %> blocks or by means of JSTL tags. To insert Java code into JSP, the JSTL tags are generally preferred, since tags adapt better to their own tag representation of HTML, so your JSP pages will look more readable. JSP even lets you define your own tags; you must write the code that actually implements the logic of your own tags in Java. JSTL is just a standard tag library provided by Oracle. We can add a reference to the JSTL tag library in our JSP pages as follows:

<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core"%>

Similarly, Spring MVC also provides its own tag library to develop Spring JSP views easily and effectively. These tags provide a lot of useful common functionality, such as form binding, evaluating errors, and outputting messages, when we work with Spring MVC. In order to use these tags, we must add a reference to the Spring tag library in our JSP pages as follows:

<%@taglib prefix="form" uri="http://www.springframework.org/tags/form" %>
<%@taglib prefix="spring" uri="http://www.springframework.org/tags" %>

These taglib directives declare that our JSP page uses a set of custom tags related to Spring and identify the location of the library. They also provide a means to identify the custom tags in our JSP page. In the taglib directive, the uri attribute value resolves to a location that the servlet container understands, and the prefix attribute informs which bits of markup are custom actions.

Serving and processing forms

In Spring MVC, the process of putting an HTML form element's values into model data is called form binding. The following line is a typical example of how we put data into the Model from the Controller:

model.addAttribute("greeting", "Welcome");

Similarly, the next line shows how we retrieve that data in the View using a JSTL expression:

<p> ${greeting} </p>

But what if we want to put data into the Model from the View? How do we retrieve that data in the Controller? For example, consider a scenario where an admin of our store wants to add new product information to our store by filling out and submitting an HTML form. How can we collect the values filled out in the HTML form elements and process them in the Controller? This is where the Spring tag library tags help us to bind the HTML tag elements' values to a form backing bean in the Model. Later, the Controller can retrieve the form backing bean from the Model using the @ModelAttribute (org.springframework.web.bind.annotation.ModelAttribute) annotation. The form backing bean (sometimes called the form bean) is used to store form data. We can even use our domain objects as form beans; this works well when there's a close match between the fields in the form and the properties in our domain object. Another approach is creating separate classes for form beans, which are sometimes called Data Transfer Objects (DTOs).
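To make the DTO idea concrete, here is a minimal sketch of what a separate form bean for the product form could look like; the class name ProductForm and its reduced field set are illustrative assumptions, not code from the book:

// A simple form backing bean (DTO) holding only the fields the form exposes.
public class ProductForm {

    private String productId;
    private String name;
    private String description;

    public String getProductId() { return productId; }
    public void setProductId(String productId) { this.productId = productId; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getDescription() { return description; }
    public void setDescription(String description) { this.description = description; }
}

A Controller method could then copy these values into the real Product domain object, which keeps automatic binding away from domain fields the form was never meant to touch.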
Time for action – serving and processing forms

The Spring tag library provides some special <form> and <input> tags, which are more or less similar to HTML form and input tags, but have some special attributes to bind form elements' data with the form backing bean. Let's create a Spring web form in our application to add new products to our product list:

Open our ProductRepository interface and add one more method declaration to it as follows:

void addProduct(Product product);

Add an implementation for this method in the InMemoryProductRepository class as follows:

@Override
public void addProduct(Product product) {
   String SQL = "INSERT INTO PRODUCTS (ID, "
      + "NAME,"
      + "DESCRIPTION,"
      + "UNIT_PRICE,"
      + "MANUFACTURER,"
      + "CATEGORY,"
      + "CONDITION,"
      + "UNITS_IN_STOCK,"
      + "UNITS_IN_ORDER,"
      + "DISCONTINUED) "
      + "VALUES (:id, :name, :desc, :price, :manufacturer, :category, :condition, :inStock, :inOrder, :discontinued)";
   Map<String, Object> params = new HashMap<>();
   params.put("id", product.getProductId());
   params.put("name", product.getName());
   params.put("desc", product.getDescription());
   params.put("price", product.getUnitPrice());
   params.put("manufacturer", product.getManufacturer());
   params.put("category", product.getCategory());
   params.put("condition", product.getCondition());
   params.put("inStock", product.getUnitsInStock());
   params.put("inOrder", product.getUnitsInOrder());
   params.put("discontinued", product.isDiscontinued());
   jdbcTemplate.update(SQL, params);
}

Open our ProductService interface and add one more method declaration to it as follows:

void addProduct(Product product);

And add an implementation for this method in the ProductServiceImpl class as follows:

@Override
public void addProduct(Product product) {
   productService.addProduct(product);
}

Open our ProductController class and add two more request mapping methods as follows:

@RequestMapping(value = "/products/add", method = RequestMethod.GET)
public String getAddNewProductForm(Model model) {
   Product newProduct = new Product();
   model.addAttribute("newProduct", newProduct);
   return "addProduct";
}

@RequestMapping(value = "/products/add", method = RequestMethod.POST)
public String processAddNewProductForm(@ModelAttribute("newProduct") Product newProduct) {
   productService.addProduct(newProduct);
   return "redirect:/market/products";
}

Finally, add one more JSP View file called addProduct.jsp under the src/main/webapp/WEB-INF/views/ directory and add the following tag reference declarations as the very first lines in it:

<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core"%>
<%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %>

Now add the following code snippet under the tag declaration lines and save addProduct.jsp.
Note that I skipped some <form:input> binding tags for some of the fields of the product domain object, but I strongly encourage you to add binding tags for the skipped fields while you are trying out this exercise:

<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.0.0/css/bootstrap.min.css">
<title>Products</title>
</head>
<body>
  <section>
    <div class="jumbotron">
      <div class="container">
        <h1>Products</h1>
        <p>Add products</p>
      </div>
    </div>
  </section>
  <section class="container">
    <form:form method="POST" modelAttribute="newProduct" class="form-horizontal">
      <fieldset>
        <legend>Add new product</legend>
        <div class="form-group">
          <label class="control-label col-lg-2 col-lg-2" for="productId">Product Id</label>
          <div class="col-lg-10">
            <form:input id="productId" path="productId" type="text" class="form:input-large"/>
          </div>
        </div>
        <!-- Similarly bind <form:input> tag for name,unitPrice,manufacturer,category,unitsInStock and unitsInOrder fields-->
        <div class="form-group">
          <label class="control-label col-lg-2" for="description">Description</label>
          <div class="col-lg-10">
            <form:textarea id="description" path="description" rows = "2"/>
          </div>
        </div>
        <div class="form-group">
          <label class="control-label col-lg-2" for="discontinued">Discontinued</label>
          <div class="col-lg-10">
            <form:checkbox id="discontinued" path="discontinued"/>
          </div>
        </div>
        <div class="form-group">
          <label class="control-label col-lg-2" for="condition">Condition</label>
          <div class="col-lg-10">
            <form:radiobutton path="condition" value="New" />New
            <form:radiobutton path="condition" value="Old" />Old
            <form:radiobutton path="condition" value="Refurbished" />Refurbished
          </div>
        </div>
        <div class="form-group">
          <div class="col-lg-offset-2 col-lg-10">
            <input type="submit" id="btnAdd" class="btn btn-primary" value ="Add"/>
          </div>
        </div>
      </fieldset>
    </form:form>
  </section>
</body>
</html>

Now run our application and enter the URL http://localhost:8080/webstore/market/products/add. You will be able to see a web page showing a web form to add product information, as shown in the following screenshot:

Add a products web form

Now enter all the information related to the new product that you want to add and click on the Add button. You will see the new product added on the product listing page under the URL http://localhost:8080/webstore/market/products.

What just happened?

In the whole sequence, steps 5 and 6 are very important steps that need to be observed carefully. Whatever was mentioned prior to step 5 should be very familiar to you, I guess. Anyhow, I will give you a brief note on what we did in steps 1 to 4. In step 1, we just created an addProduct method declaration in our ProductRepository interface to add new products. And in step 2, we just implemented the addProduct method in our InMemoryProductRepository class. Steps 3 and 4 are just a Service layer extension for ProductRepository. In step 3, we declared a similar method, addProduct, in our ProductService and implemented it in step 4 to add products to the repository via the productRepository reference.
Okay, coming back to the important step; what we did in step 5 was nothing but adding two request mapping methods, namely getAddNewProductForm and processAddNewProductForm:

@RequestMapping(value = "/products/add", method = RequestMethod.GET)
public String getAddNewProductForm(Model model) {
   Product newProduct = new Product();
   model.addAttribute("newProduct", newProduct);
   return "addProduct";
}

@RequestMapping(value = "/products/add", method = RequestMethod.POST)
public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded) {
   productService.addProduct(productToBeAdded);
   return "redirect:/market/products";
}

If you observe those methods carefully, you will notice a peculiar thing: both methods have the same URL mapping value in their @RequestMapping annotations (value = "/products/add"). So if we enter the URL http://localhost:8080/webstore/market/products/add in the browser, which method will Spring MVC map that request to? The answer lies in the second attribute of the @RequestMapping annotation (method = RequestMethod.GET and method = RequestMethod.POST). If you look again, even though both methods have the same URL mapping, they differ in the request method. So what is happening behind the screen is that when we enter the URL http://localhost:8080/webstore/market/products/add in the browser, it is considered a GET request, so Spring MVC will map that request to the getAddNewProductForm method. Within that method, we simply attach a new empty Product domain object to the model, under the attribute name newProduct. So in the addProduct.jsp View, we can access that newProduct Model object:

Product newProduct = new Product();
model.addAttribute("newProduct", newProduct);

Before jumping into the processAddNewProductForm method, let's review the addProduct.jsp View file for some time, so that you understand the form processing flow without confusion. In addProduct.jsp, we just added a <form:form> tag from Spring's tag library:

<form:form modelAttribute="newProduct" class="form-horizontal">

Since this special <form:form> tag comes from a Spring tag library, we need to add a reference to that tag library in our JSP file; that's why we added the following line at the top of the addProduct.jsp file in step 6:

<%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %>

In the Spring <form:form> tag, one of the important attributes is modelAttribute. In our case, we assigned the value newProduct as the value of the modelAttribute in the <form:form> tag. If you remember correctly, this value of the modelAttribute and the attribute name we used to store the newProduct object in the Model from our getAddNewProductForm method are the same. So the newProduct object that we attached to the model from the Controller method (getAddNewProductForm) is now bound to the form. This object is called the form backing bean in Spring MVC. Okay, now you should look at every <form:input> tag inside the <form:form> tag. You can observe a common attribute in every tag. That attribute is path:

<form:input id="productId" path="productId" type="text" class="form:input-large"/>

The path attribute just indicates the field name, relative to the form backing bean. So the value that is entered in this input box at runtime will be bound to the corresponding field of the form bean. Okay, now it's time to come back and review our processAddNewProductForm method. When will this method be invoked?
This method will be invoked once we press the submit button on our form. Since every form submission is considered a POST request, this time the browser will send a POST request to the same URL, http://localhost:8080/webstore/products/add. So this time the processAddNewProductForm method will get invoked, since it is a POST request. Inside the processAddNewProductForm method, we are simply calling the addProduct service method to add the new product to the repository:

productService.addProduct(productToBeAdded);

But the interesting question here is how the productToBeAdded object is populated with the data that we entered in the form. The answer lies in the @ModelAttribute (org.springframework.web.bind.annotation.ModelAttribute) annotation. Notice the method signature of the processAddNewProductForm method:

public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded)

Here, if you look at the value attribute of the @ModelAttribute annotation, you can observe a pattern. The @ModelAttribute annotation's value and the value of the modelAttribute from the <form:form> tag are the same. So Spring MVC knows that it should assign the form-bound newProduct object to the processAddNewProductForm method's parameter productToBeAdded. The @ModelAttribute annotation is not only used to retrieve an object from the Model; if we want, we can even use the @ModelAttribute annotation to add objects to the Model. For instance, we can rewrite our getAddNewProductForm method to something like the following using the @ModelAttribute annotation:

@RequestMapping(value = "/products/add", method = RequestMethod.GET)
public String getAddNewProductForm(@ModelAttribute("newProduct") Product newProduct) {
   return "addProduct";
}

You can see that we haven't created a new empty Product domain object and attached it to the model. All we did was add a parameter of the type Product and annotate it with the @ModelAttribute annotation, so Spring MVC will know that it should create an object of Product and attach it to the model under the name newProduct. One more thing that needs to be observed in the processAddNewProductForm method is the logical View name it is returning: redirect:/market/products. So what are we trying to tell Spring MVC by returning the string redirect:/market/products? To get the answer, observe the logical View name string carefully; if we split this string with the ":" (colon) symbol, we will get two parts. The first part is the prefix redirect, and the second part is something that looks like a request path: /market/products. So, instead of returning a View name, we are simply instructing Spring to issue a redirect request to the request path /market/products, which is the request path for the list method of our ProductController. So after submitting the form, we list the products using the list method of ProductController. As a matter of fact, when we return any request path with the redirect: prefix from a request mapping method, Spring will use a special View object called RedirectView (org.springframework.web.servlet.view.RedirectView) to issue the redirect command behind the screen. Instead of landing on a web page after the successful submission of a web form, we are spawning a new request to the request path /market/products with the help of RedirectView. This pattern is called redirect-after-post, which is a common pattern to use with web-based forms. We are using this pattern to avoid double submission of the same form.
Sometimes, after submitting the form, if we press the browser's refresh button or back button, there is a chance of resubmitting the same form. This behavior is called double submission.

Have a go hero – customer registration form

It is great that we created a web form to add new products to our web application under the URL http://localhost:8080/webstore/market/products/add. Why don't you create a customer registration form to register a new customer in our application? Try to create a customer registration form under the URL http://localhost:8080/webstore/customers/add.

Customizing data binding

In the last section, you saw how to bind data submitted by an HTML form to a form backing bean. In order to do the binding, Spring MVC internally uses a special binding object called WebDataBinder (org.springframework.web.bind.WebDataBinder). WebDataBinder extracts the data out of the HttpServletRequest object, converts it to a proper data format, loads it into a form backing bean, and validates it. To customize the behavior of data binding, we can initialize and configure the WebDataBinder object in our Controller. The @InitBinder (org.springframework.web.bind.annotation.InitBinder) annotation helps us to do that. The @InitBinder annotation designates a method to initialize WebDataBinder. Let's look at a practical use of customizing WebDataBinder. Since we are using the actual domain object itself as the form backing bean, during form submission there is a chance of security vulnerabilities. Because Spring automatically binds HTTP parameters to form bean properties, an attacker could bind a suitably named HTTP parameter to form properties that weren't intended for binding. To address this problem, we can explicitly tell Spring which fields are allowed for form binding. Technically speaking, the process of explicitly telling which fields are allowed for binding is called whitelisting binding in Spring MVC; we can do whitelisting binding using WebDataBinder.

Time for action – whitelisting form fields for binding

In the previous exercise, while adding a new product, we bound every field of the Product domain in the form, but it is meaningless to specify unitsInOrder and discontinued values during the addition of a new product, because nobody can make an order before the product is added to the store, and similarly, discontinued products need not be added to our product list. So we should not allow these fields to be bound with the form bean while adding a new product to our store. However, we want all the other fields of the Product domain object to be bound.
Let's see how to do this with the following steps:

Open our ProductController class and add a method as follows:

@InitBinder
public void initialiseBinder(WebDataBinder binder) {
   binder.setAllowedFields("productId", "name", "unitPrice", "description", "manufacturer", "category", "unitsInStock", "condition");
}

Add an extra parameter of the type BindingResult (org.springframework.validation.BindingResult) to the processAddNewProductForm method as follows:

public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded, BindingResult result)

In the same processAddNewProductForm method, add the following condition just before the line saving the productToBeAdded object:

String[] suppressedFields = result.getSuppressedFields();
if (suppressedFields.length > 0) {
   throw new RuntimeException("Attempting to bind disallowed fields: " + StringUtils.arrayToCommaDelimitedString(suppressedFields));
}

Now run our application and enter the URL http://localhost:8080/webstore/market/products/add. You will be able to see a web page showing a web form to add new product information. Fill out all the fields, particularly Units in order and Discontinued. Now press the Add button and you will see an HTTP status 500 error on the web page, as shown in the following image:

The add product page showing an error for disallowed fields

Now open addProduct.jsp from /Webshop/src/main/webapp/WEB-INF/views/ in your project and remove the input tags that are related to the Units in order and Discontinued fields. Basically, you need to remove the following block of code:

<div class="form-group">
   <label class="control-label col-lg-2" for="unitsInOrder">Units In Order</label>
   <div class="col-lg-10">
      <form:input id="unitsInOrder" path="unitsInOrder" type="text" class="form:input-large"/>
   </div>
</div>
<div class="form-group">
   <label class="control-label col-lg-2" for="discontinued">Discontinued</label>
   <div class="col-lg-10">
      <form:checkbox id="discontinued" path="discontinued"/>
   </div>
</div>

Now run our application again and enter the URL http://localhost:8080/webstore/market/products/add. You will be able to see a web page showing a web form to add a new product, but this time without the Units in order and Discontinued fields. Now enter all the information related to the new product and click on the Add button. You will see the new product added on the product listing page under the URL http://localhost:8080/webstore/market/products.

What just happened?

Our intention was to put some restrictions on binding HTTP parameters with the form backing bean. As we already discussed, the automatic binding feature of Spring could lead to a potential security vulnerability if we used a domain object itself as the form bean. So we have to explicitly tell Spring MVC which fields are allowed. That's what we are doing in step 1. The @InitBinder annotation designates a Controller method as a hook method to do some custom configuration regarding data binding on the WebDataBinder. And WebDataBinder is the thing that is doing the data binding at runtime, so we need to tell WebDataBinder which fields are allowed for binding. If you observe our initialiseBinder method from ProductController, it has a parameter called binder, which is of the type WebDataBinder. We are simply calling the setAllowedFields method on the binder object and passing the field names that are allowed for binding. Spring MVC will call this method to initialize WebDataBinder before doing the binding, since it has the @InitBinder annotation.
WebDataBinder also has a method called setDisallowedFields to strictly specify which fields are disallowed for binding. If you use this method, Spring MVC allows any HTTP request parameters to be bound except those field names specified in the setDisallowedFields method. This is called blacklisting binding. Okay, we configured which fields are allowed for binding, but we need to verify whether any fields other than those allowed are bound with the form backing bean. That's what we are doing in steps 2 and 3. We changed processAddNewProductForm by adding one extra parameter called result, which is of the type BindingResult. Spring MVC will fill this object with the result of the binding. If any attempt is made to bind any fields other than the allowed fields, the BindingResult object will have a getSuppressedFields count greater than zero. That's why we were checking the suppressed field count and throwing a RuntimeException:

if (suppressedFields.length > 0) {
   throw new RuntimeException("Attempting to bind disallowed fields: " + StringUtils.arrayToCommaDelimitedString(suppressedFields));
}

Here, the static class StringUtils comes from org.springframework.util.StringUtils. We wanted to ensure that our binding configuration is working; that's why we ran our application without changing the View file addProduct.jsp in step 4. And as expected, we got the HTTP status 500 error saying Attempting to bind disallowed fields when we submitted the Add products form with the unitsInOrder and discontinued fields filled out. Now that we know our binder configuration is working, we can change our View file so as not to bind the disallowed fields; that's what we were doing in step 6: just removing the input field elements that are related to the disallowed fields from the addProduct.jsp file. After that, our add new products page works just fine, as expected. If an outside attacker tries to tamper with the POST request and attach an HTTP parameter with the same field name as the form backing bean, he will get a RuntimeException. The whitelisting is just one example of how we can customize binding with the help of WebDataBinder. By using WebDataBinder, we can perform many more types of binding customization as well. For example, WebDataBinder internally uses many PropertyEditor (java.beans.PropertyEditor) implementations to convert the HTTP request parameters to the target field of the form backing bean. We can even register custom PropertyEditor objects with WebDataBinder to convert more complex data types. For instance, look at the following code snippet, which shows how to register a custom PropertyEditor to convert a Date class:

@InitBinder
public void initialiseBinder (WebDataBinder binder) {
   DateFormat dateFormat = new SimpleDateFormat("MMM d, yyyy");
   CustomDateEditor orderDateEditor = new CustomDateEditor(dateFormat, true);
   binder.registerCustomEditor(Date.class, orderDateEditor);
}

There are many advanced configurations we can make with WebDataBinder in terms of data binding, but at a beginner level, we don't need to go that deep.
Pop quiz – data binding

Consider the following data binding customization and identify the possible matching field bindings:

@InitBinder
public void initialiseBinder(WebDataBinder binder) {
   binder.setAllowedFields("unit*");
}

NoOfUnit
unitPrice
priceUnit
united

Externalizing text messages

So far, in all our View files, we have hardcoded text values for all the labels; for instance, take our addProduct.jsp file: for the productId input tag, we have a label tag with the hardcoded text value Product Id:

<label class="control-label col-lg-2 col-lg-2" for="productId">Product Id</label>

Externalizing these texts from a View file into a properties file will help us to have single, centralized control over all label messages. Moreover, it will help us to make our web pages ready for internationalization. But in order to perform internationalization, we need to externalize the label messages first. So now you are going to see how to externalize locale-sensitive text messages from a web page to a property file.

Time for action – externalizing messages

Let's externalize the label texts in our addProduct.jsp:

Open our addProduct.jsp file and add the following taglib reference at the top:

<%@ taglib prefix="spring" uri="http://www.springframework.org/tags" %>

Change the product ID <label> tag's value to <spring:message code="addProduct.form.productId.label"/>. After changing it, your product ID <label> tag should look as follows:

<label class="control-label col-lg-2 col-lg-2" for="productId">
   <spring:message code="addProduct.form.productId.label"/>
</label>

Create a file called messages.properties under /src/main/resources in your project and add the following line to it:

addProduct.form.productId.label = New Product ID

Now open our web application context configuration file, WebApplicationContextConfig.java, and add the following bean definition to it:

@Bean
public MessageSource messageSource() {
   ResourceBundleMessageSource resource = new ResourceBundleMessageSource();
   resource.setBasename("messages");
   return resource;
}

Now run our application again and enter the URL http://localhost:8080/webstore/market/products/add. You will be able to see the add product page with the product ID label showing as New Product ID.

What just happened?

Spring MVC has a special tag called <spring:message> to externalize texts from JSP files. In order to use this tag, we need to add a reference to the Spring tag library; that's what we did in step 1. We just added a reference to the Spring tag library in our addProduct.jsp file:

<%@ taglib prefix="spring" uri="http://www.springframework.org/tags" %>

In step 2, we just used that tag to externalize the label text of the product ID input tag:

<label class="control-label col-lg-2 col-lg-2" for="productId">
   <spring:message code="addProduct.form.productId.label"/>
</label>

Here, an important thing you need to remember is the code attribute of the <spring:message> tag; we assigned the value addProduct.form.productId.label as the code for this <spring:message> tag. This code attribute is a kind of key; at runtime, Spring will try to read the corresponding value for the given key (code) from a message source property file. We said that Spring will read the message's value from a message source property file, so we need to create that property file. That's what we did in step 3. We just created a property file with the name messages.properties under the resource directory.
Inside that file, we just assigned the label text value to the message tag code:

addProduct.form.productId.label = New Product ID

Remember, for demonstration purposes I just externalized a single label, but a typical web application will have externalized messages for almost all tags; in that case, the messages.properties file will have many code-value pair entries. Okay, we created a message source property file and added the <spring:message> tag in our JSP file, but to connect these two, we need to create one more Spring bean in our web application context for the org.springframework.context.support.ResourceBundleMessageSource class, with the name messageSource; we did that in step 4:

@Bean
public MessageSource messageSource() {
   ResourceBundleMessageSource resource = new ResourceBundleMessageSource();
   resource.setBasename("messages");
   return resource;
}

One important property you need to notice here is the basename property; we assigned the value messages for that property. If you remember, this is the name of the property file that we created in step 3. That is all we did to enable the externalizing of messages in a JSP file. Now, if we run the application and open up the add products page, you can see that the product ID label will have the same text as we assigned to the addProduct.form.productId.label code in the messages.properties file.

Have a go hero – externalize all the labels from all the pages

I just showed you how to externalize the message for a single label; you can now do that for every single label available in all the pages.

Summary

At the start of this article, you saw how to serve and process forms, and you learned how to bind form data with a form backing bean. You also learned how to read a bean in the Controller. After that, we went a little deeper into form bean binding and configured the binder in our Controller to whitelist some of the POST parameters from being bound to the form bean. Finally, you saw how to use one more special Spring tag, <spring:message>, to externalize the messages in a JSP file.

Resources for Article: Further resources on this subject: Designing your very own ASP.NET MVC Application [article] Mixing ASP.NET Webforms and ASP.NET MVC [article] ASP.NET MVC Framework [article]
Working with Forms using REST API

Packt
11 Jul 2016
21 min read
WordPress, being an ever-improving content management system, is now moving toward becoming a full-fledged application framework, which brings up the necessity for new APIs. The WordPress REST API has been created to provide the necessary, reliable APIs. The plugin provides an easy-to-use REST API, available via HTTP, that grabs your site's data in the JSON format and further retrieves it. The WordPress REST API is now at its second version and has brought a few core differences compared to its previous one, including route registration via functions, endpoints that take a single parameter, and built-in endpoints that all use a common controller. In this article by Sufyan bin Uzayr, author of the book Learning WordPress REST API, you'll learn how to write a functional plugin to create and edit posts using the latest version of the WordPress REST API. This article will also cover how to work efficiently with data to update your page dynamically based on results. This tutorial serves as a basis and introduction to processing form data using the REST API and AJAX, not as a redo of the WordPress post editor or a frontend editing plugin. The REST API's first task is to make your WordPress-powered websites more dynamic, and for this precise reason, I have created a thorough tutorial that will take you through this process step by step. After you understand how the framework works, you will be able to implement it on your own sites. (For more resources related to this topic, see here.)

Fundamentals

In this article, you will be doing something similar, but instead of using the WordPress HTTP API and PHP, you'll use jQuery's AJAX methods. All of the code for this project should go in its plugin file. Another important tip before starting is to have the JavaScript client that uses the WordPress REST API installed. We will be using the JavaScript client to make it possible to authorize via the current user's cookies. Note that you can substitute another authorization method, such as OAuth, if you find it suitable.

Setup the plugin

During the course of this tutorial, you'll only need one PHP and one JavaScript file. Nothing else is necessary for the creation of our plugin. We will be starting off by writing a simple PHP file that will do the following three key things for us:

Enqueue the JavaScript file
Localize a dynamically created JavaScript object into the DOM when you use the said file
Create the HTML markup for our future form

All that is required of us is to have two functions and two hooks. To get this done, we will be creating a new folder in our plugin directory with one of the PHP files inside it. This will serve as the foundation for our future plugin. We will give the file a conventional name, such as my-rest-post-editor.php. Following is our starting PHP file with the necessary empty functions that we will be expanding in the next steps:

<?php
/*
Plugin Name: My REST API Post Editor
*/

add_shortcode( 'MY-POST-EDITOR', 'my_rest_post_editor_form');
function my_rest_post_editor_form( ) {
}

add_action( 'wp_enqueue_scripts', 'my_rest_api_scripts' );
function my_rest_api_scripts() {
}

For this demonstration, notice that you're working only with the post title and post content. This means that in the form editor function, you only need the HTML for a simple form for those two fields.
Creating the form with HTML markup

As you can notice, we are only working with the post title and post content. This makes it necessary to have only the HTML for a simple form for those two fields in the editor form function. The necessary code excerpt is as follows:

function my_rest_post_editor_form( ) {
  $form = '
    <form id="editor">
      <input type="text" name="title" id="title" value="My title">
      <textarea id="content"></textarea>
      <input type="submit" value="Submit" id="submit">
    </form>
    <div id="results">
    </div>';
  return $form;
}

Our aim is to show this only to those users who are logged in on the site and have the ability to edit posts. We will be wrapping the variable containing the form in some conditional checks that will allow us to fulfill the said aim. These checks will test whether the user is logged in to the system, and if they are not, they will be provided with a link to the default WordPress login page. The code excerpt with the required function is as follows:

function my_rest_post_editor_form( ) {
  $form = '
    <form id="editor">
      <input type="text" name="title" id="title" value="My title">
      <textarea id="content"></textarea>
      <input type="submit" value="Submit" id="submit">
    </form>
    <div id="results">
    </div>
  ';
  if ( is_user_logged_in() ) {
    if ( user_can( get_current_user_id(), 'edit_posts' ) ) {
      return $form;
    } else {
      return __( 'You do not have permissions to do this.', 'my-rest-post-editor' );
    }
  } else {
    return sprintf( '<a href="%1$s" title="Login">%2$s</a>', wp_login_url( get_permalink( get_queried_object_id() ) ), __( 'You must be logged in to do this, please click here to log in.', 'my-rest-post-editor') );
  }
}

To avoid confusion, we do not want our page to be processed automatically or to cause a page reload upon submitting it, which is why our form has neither a method nor an action set. This is an important thing to notice, because that is how we avoid the unnecessary automatic processing.
It would also be possible to add custom styles to the editor, which would be achieved by using the wp_enqueue_style() function. While we have assessed the importance and functionality of wp_enqueue_script(), let's take a close look at the other functions as well. The wp_localize_script() function allows you to localize a registered script with data for a JavaScript variable. By this, we will be offered a properly localized translation for any string used within our script. As WordPress currently offers its localization API only in PHP, this comes as a necessary measure. Though localization is the main use of the function, it can also be used to make any data available to your script that you can usually only get from the server side of WordPress. The wp_enqueue_style() function is the best solution for adding stylesheets within your WordPress plugins, as it handles all of the stylesheets that need to be added to the page in one place. If you have two plugins using the same stylesheet and both of them use the same handle, then WordPress will only add the stylesheet to the page once. When you call wp_enqueue_style(), it adds your styles to a list of stylesheets that need to be added to the page when it is loaded. If a handle already exists, it will not add a new stylesheet to the list. The function is as follows:

function my_rest_api_scripts() {
  wp_enqueue_script( 'my-api-post-editor', plugins_url( 'my-api-post-editor.js', __FILE__ ), array( 'jquery' ), false, true );
  wp_localize_script( 'my-api-post-editor', 'MY_POST_EDITOR', array(
    'root'           => esc_url_raw( rest_url() ),
    'nonce'          => wp_create_nonce( 'wp_rest' ),
    'successMessage' => __( 'Post Creation Successful.', 'my-rest-post-editor' ),
    'failureMessage' => __( 'An error has occurred.', 'my-rest-post-editor' ),
    'userID'         => get_current_user_id(),
  ) );
}

That is all the PHP you need, as everything else is handled via JavaScript. Next, create a new page with the editor shortcode (MY-POST-EDITOR) and then proceed to that new page. If you've followed the instructions precisely, you should see the post editor form on that page. It will obviously not be functional just yet, not before we write some JavaScript that will add functionality to it.

Issuing requests for creating posts

To create posts from our form, we will need to use a POST request, which we can make by using jQuery's AJAX method. This should be a familiar and very simple process for you; if you're not acquainted with it, you may want to take a look through the documentation and guides offered by jQuery themselves (http://api.jquery.com/jquery.ajax/). You will also need to create two things that may be new to you: the JSON array and the authorization header. In the following, we will walk through each of them in detail. To create the JSON object for your AJAX request, you must first create a JavaScript array from the input and then use JSON.stringify() to convert it into JSON. The JSON.stringify() method converts a JavaScript value to a JSON string, replacing values if a replacer function is specified, or optionally including only the specified properties if a replacer array is specified.
The following code excerpt is the beginning of the JavaScript file and shows how to build the JSON payload:

(function($){
  $( '#editor' ).on( 'submit', function(e) {
    e.preventDefault();
    var title = $( '#title' ).val();
    var content = $( '#content' ).val();
    var JSONObj = {
      "title" : title,
      "content_raw" : content,
      "status" : 'publish'
    };
    var data = JSON.stringify(JSONObj);
  });
})(jQuery);

Before passing the variable data to the AJAX request, you first have to set the URL for the request. This step is as simple as appending wp/v2/posts to the root URL for the API, which is accessible via MY_POST_EDITOR.root:

var url = MY_POST_EDITOR.root;
url = url + 'wp/v2/posts';

The AJAX request will look a lot like any other AJAX request you would make, with the sole exception of the authorization headers. Thanks to the REST API's JavaScript client, the only thing you are required to do is add a header to the request containing the nonce set in the MY_POST_EDITOR object. Another method that could work as an alternative is the OAuth authorization method.

A nonce is an authorization token generated for one specific use, such as session authentication; in this context, nonce stands for "number used once" or "number once".

OAuth authorization method

The OAuth authorization method provides users with secure access to server resources on behalf of a resource owner. It specifies a process by which resource owners can authorize third-party access to their server resources without sharing any user credentials. It is important to note that it has been designed to work with the HTTP protocol, allowing an authorization server to issue access tokens to third-party clients. The third party then uses the access token to access the protected resources hosted on the server.

Using the nonce method to verify cookie authentication involves setting a request header with the name X-WP-Nonce, which contains the nonce value. You can use the beforeSend function of the request to send the nonce. Here is what that looks like in the AJAX request:

$.ajax({
  type: "POST",
  url: url,
  dataType : 'json',
  data: data,
  beforeSend : function( xhr ) {
    xhr.setRequestHeader( 'X-WP-Nonce', MY_POST_EDITOR.nonce );
  },
});

As you might have noticed, the only missing pieces are the functions that display success and failure. These alerts can easily be created using the messages that we localized into the script earlier. For now, we will output the result of the request as raw JSON so that we can see what it looks like.
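Before looking at the complete script, it may help to see roughly what the stringified payload from the code above looks like (the content value here is a stand-in for whatever was typed into the textarea):

var JSONObj = { "title": "My title", "content_raw": "Some post body...", "status": "publish" };
console.log( JSON.stringify( JSONObj ) );
// -> {"title":"My title","content_raw":"Some post body...","status":"publish"}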
Following is the complete JavaScript for a post editor that can now create new posts:

(function($){
  $( '#editor' ).on( 'submit', function(e) {
    e.preventDefault();
    var title = $( '#title' ).val();
    var content = $( '#content' ).val();
    var JSONObj = {
      "title" : title,
      "content_raw" : content,
      "status" : 'publish'
    };
    var data = JSON.stringify(JSONObj);
    var url = MY_POST_EDITOR.root;
    url += 'wp/v2/posts';
    $.ajax({
      type: "POST",
      url: url,
      dataType : 'json',
      data: data,
      beforeSend : function( xhr ) {
        xhr.setRequestHeader( 'X-WP-Nonce', MY_POST_EDITOR.nonce );
      },
      success: function(response) {
        alert( MY_POST_EDITOR.successMessage );
        $( "#results").append( JSON.stringify( response ) );
      },
      error: function( response ) { // jQuery's $.ajax uses 'error', not 'failure'
        alert( MY_POST_EDITOR.failureMessage );
      }
    });
  });
})(jQuery);

This is how we can create a basic editor with the WP REST API. If you are logged in and the API is active, submitting the form should create a new post and then show an alert telling you that the post has been created. The returned JSON object is then placed into the #results container.

If you followed each and every step precisely, you should now have a basic editor ready. You may want to give it a try and see how it works for you. So far, we have created and set up a basic editor that allows you to create posts. In our next steps, we will add functionality to our plugin that enables us to edit existing posts.

Issuing requests for editing posts

In this section, we will walk through the process of adding functionality to our editor so that we can edit existing posts. This part may be a little more detailed, mainly because the first part of the tutorial covered the basics and setup of the editor. To edit posts, we need two things:

A list of posts by author, with all of the post titles and post content
A new form field to hold the ID of the post you're editing

As you can see, the list of posts by author and the form field lay the foundation for the post-editing functionality. To add the hidden field to your form, use the following HTML code:

<input type="hidden" name="post-id" id="post-id" value="">

Next, we need to read the value of this field when creating posts. This is achieved with a few lines of JavaScript that change the request URL automatically, making it possible to edit the post with the given ID rather than creating a new one each time. A simple code piece like the following does the job:

var postID = $( '#post-id').val();
if ( postID ) { // an empty string means no post was selected, so a new post will be created
  url += '/';
  url += postID;
}

This code is placed before the AJAX section of the editor form processor. It is important to understand that the url variable in the AJAX function will contain the ID of the post you are editing only if the field has a value; when no value is present, the request creates a new post, identical to the process you were taken through previously. To populate this field, along with the post title and post content fields, you will need to add a second form.
This second form retrieves all posts by the current user with a GET request. Based on the selection made in that form, you can set the editor form to update. In the PHP, you add the second form, which looks like the following:

<form id="select-post">
  <select id="posts" name="posts">
  </select>
  <input type="submit" value="Select a Post to edit" id="choose-post">
</form>

The REST API is now used to populate the options within the #posts select. To achieve this, we create a request for posts by the current user and use the results it returns. We can form the URL for requesting posts by the current user because we set the current user's ID as part of the MY_POST_EDITOR object during the script setup.

A function needs to be created that gets the posts by the current author and populates the select field. This is very similar to what we did when updating posts, yet much simpler: this function does not require any authentication, and given that you have already been taken through creating a similar function, creating this one shouldn't be any hassle. The success function loops through the results and adds them to the post-selector form as options for its one field, generating code similar to the following:

function getPostsByUser( defaultID ) {
  // Build the request URL locally so repeated calls don't keep appending
  // filters to the shared url variable used elsewhere in the script
  var requestURL = MY_POST_EDITOR.root;
  requestURL += 'wp/v2/posts';
  requestURL += '?filter[author]=';
  requestURL += MY_POST_EDITOR.userID;
  requestURL += '&filter[per_page]=20';
  $.ajax({
    type: "GET",
    url: requestURL,
    dataType : 'json',
    success: function(response) {
      $.each(response, function(i, val) {
        $( "#posts" ).append( new Option( val.title, val.ID ) );
      });
      if ( undefined != defaultID ) {
        $('[name=posts]').val( defaultID );
      }
    }
  });
}

You will notice that the function has one parameter, defaultID, but this shouldn't concern you just yet. The parameter, if defined, is used to set the default value of the select field; for now, we will ignore it. We use this same function, without the default value, and set it to run on document ready. This is achieved with a small piece of code like the following:

$( document ).ready( function() {
  getPostsByUser();
});

Having a list of posts by the current user isn't enough; you also have to get the title and content of the selected post and push them into the form for editing. This ensures the post can actually be edited and makes the projected result achievable. Moving on, we need another GET request to run on submission of the post-selector form, something like this:

$( '#select-post' ).on( 'submit', function(e) {
  e.preventDefault();
  var ID = $( '#posts' ).val();
  var postURL = MY_POST_EDITOR.root;
  postURL += 'wp/v2/posts/';
  postURL += ID;
  $.ajax({
    type: "GET",
    url: postURL,
    dataType : 'json',
    success: function(post) {
      var title = post.title;
      var content = post.content;
      $( '#editor #title').val( title );
      $( '#editor #content').val( content );
      $( '#editor #post-id').val( ID ); // store the selected post's ID in the hidden field
    }
  });
});

In the form of <json-url>wp/v2/posts/<post-id>, we build a new URL that is used to fetch the post data for any selected post.
This results in an actual request whose returned data we take and set as the values of the three fields in the editor form. Upon refreshing the page, you will see all posts by the current user in the selector. Submitting a selection will yield the following: the content and title of the post you selected become visible in the editor, provided you have followed the preceding steps correctly, and the hidden post-ID field you added should now be set.

Even though the content and title of the post are visible, we are still unable to edit actual posts, as the editor form's function was not yet set up for this purpose. To achieve that, we need a small modification to the function to make the content editable. Besides, at the moment we would only get our content and title displayed as raw JSON data; the method described previously improves the success function for that request so that the post's title and content display in the proper container, #results. For this, you need a function that updates the container with the appropriate data. The code for this function looks something like the following:

function results( val ) {
  $( "#results").empty();
  $( "#results" ).append( '<div class="post-title">' + val.title + '</div>' );
  $( "#results" ).append( '<div class="post-content">' + val.content + '</div>' );
}

The preceding code uses some very simple jQuery techniques, but that doesn't make it any less of a proper introduction to updating page content with data from the REST API. There are countless ways to get more detailed or creative with this if you dive into the markup or start adding additional fields. That is always an option if you're a more savvy developer, but as this is an introductory tutorial, we're trying to keep it from getting overly technical, so we'll stick with the provided example for now.

Moving forward, you can use it in your modified form-processing function, which will look something like the following:

$( '#editor' ).on( 'submit', function(e) {
  e.preventDefault();
  var title = $( '#title' ).val();
  var content = $( '#content' ).val();
  console.log( content );
  var JSONObj = {
    "title" : title,
    "content_raw" : content,
    "status" : 'publish'
  };
  var data = JSON.stringify(JSONObj);
  var url = MY_POST_EDITOR.root; // rebuild the base URL on each submit
  url += 'wp/v2/posts';
  var postID = $( '#post-id').val();
  if ( postID ) {
    url += '/';
    url += postID;
  }
  $.ajax({
    type: "POST",
    url: url,
    dataType : 'json',
    data: data,
    beforeSend : function( xhr ) {
      xhr.setRequestHeader( 'X-WP-Nonce', MY_POST_EDITOR.nonce );
    },
    success: function(response) {
      alert( MY_POST_EDITOR.successMessage );
      getPostsByUser( response.ID );
      results( response );
    },
    error: function( response ) { // 'error' is jQuery's callback name; 'failure' would never fire
      alert( MY_POST_EDITOR.failureMessage );
    }
  });
});

As you will have noticed, a few changes have been applied, and we will go through each of them: the first is that the ID of the post being edited is now conditionally added. When no ID is present, the form still serves to create new posts by POSTing to the endpoint.
Another change involving the post ID is that the request now updates posts via posts/<post-id>. The second change concerns the success function: the new results() function is used to output the post title and content during editing. We also rerun the getPostsByUser() function, set up so that posts automatically offer editing functionality right after you create them.

Summary

With this, we have finished off this article; if you have followed each step with precision, you should now have a simple yet functional plugin that can create and edit posts using the WordPress REST API. This article also covered techniques for working with data in order to update your page dynamically based on the available results. We will now progress toward more complicated actions with the REST API.

Resources for Article:

Further resources on this subject:
Implementing a Log-in screen using Ext JS [article]
Cluster Computing Using Scala [article]
Understanding PHP basics [article]

Stacked Denoising Autoencoders

Packt
11 Jul 2016
13 min read
In this article by John Hearty, author of the book Advanced Machine Learning with Python, we discuss autoencoders. While autoencoders are valuable tools in themselves, significant accuracy can be obtained by stacking autoencoders to form a deep network. This is achieved by feeding the representation created by the encoder on one layer into the next layer's encoder as input to that layer.

(For more resources related to this topic, see here.)

Stacked Denoising Autoencoders (SdA) are currently in use in many leading data science teams for sophisticated natural language analyses as well as a broad range of signal, image, and text analyses.

The implementation of SdA will be very familiar after the previous chapter's discussion of deep belief networks. The SdA is used in much the same way as the RBMs in our deep belief networks were used. Each layer of the deep architecture will have a dA and sigmoid component, with the autoencoder component being used to pretrain the sigmoid network. The performance measure used by an SdA is the training set error, with an intensive period of layer-to-layer (layer-wise) pretraining used to gradually align network parameters before a final period of fine-tuning. During fine-tuning, the network is trained using validation and test data, over fewer epochs but with larger update steps. The goal is to have the network converge at the end of the fine-tuning to deliver an accurate result.

In addition to delivering on the typical advantages of deep networks (the ability to learn feature representations for complex or high-dimensional datasets and train a model without extensive feature engineering), stacked autoencoders have an additional, very interesting property.

Correctly configured stacked autoencoders can capture a hierarchical grouping of their input data. Successive layers of an SdA may learn increasingly high-level features. While the first layer might learn some first-order features from input data (such as learning edges in a photo image), a second layer may learn some grouping of first-order features (for instance, by learning given configurations of edges that correspond to contours or structural elements in the input image).

There's no golden rule to determine how many layers, or how large the layers, should be for a given problem. The best solution is usually to experiment with these model parameters until you find an optimal point. This experimentation is best done with a hyperparameter optimization technique or genetic algorithm (subjects we'll discuss in later chapters of this book).

Higher layers may learn increasingly high-order configurations, enabling an SdA to learn to recognise facial features, alphanumerical characters, or the generalised forms of objects (such as a bird). This is what gives SdAs their unique capability to learn very sophisticated, high-level abstractions of their input data.

Autoencoders can be stacked indefinitely, and it has been demonstrated that continuing to stack autoencoders can improve the effectiveness of the deep architecture (with the main constraint becoming computing cost in time). In this chapter, we'll look at stacking three autoencoders to solve a natural language processing challenge.

Applying SdA

Now that we've had a chance to understand the advantages and power of the SdA as a deep learning architecture, let's test our skills on a real-world dataset.
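Before turning to the Theano implementation below, the core stacking idea can be illustrated in a few lines of plain NumPy. This is only a toy sketch with random, untrained weights, not the book's code (the layer sizes merely mirror those used later in the chapter); it shows how each encoder's output becomes the next encoder's input:

import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(10, 280)  # a toy minibatch: 10 examples, 280 input features

def encode(inputs, W, b):
    # one sigmoid encoder layer
    return 1.0 / (1.0 + np.exp(-(inputs.dot(W) + b)))

# untrained, illustrative weights for a 280 -> 240 -> 170 stack
W1, b1 = rng.randn(280, 240) * 0.01, np.zeros(240)
W2, b2 = rng.randn(240, 170) * 0.01, np.zeros(170)

h1 = encode(X, W1, b1)   # the first layer's representation of the input
h2 = encode(h1, W2, b2)  # fed into the second encoder, and so on up the stack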
For this chapter, let's step away from image datasets and work with the OpinRank Review Dataset, a text dataset of around 259,000 hotel reviews from TripAdvisor, which is accessible via the UCI Machine Learning dataset Repository. This freely-available dataset provides review scores (as floating point numbers from 1 to 5) and review text for a broad range of hotels; we'll be applying our SdA to attempt to identify the scoring of each hotel from its review text.

We'll be applying our autoencoder to analyze a preprocessed version of this data, which is accessible from the GitHub share accompanying this chapter. We'll be discussing the techniques by which we prepare text data in an upcoming chapter. The source data is available at https://archive.ics.uci.edu/ml/datasets/OpinRank+Review+Dataset.

In order to get started, we're going to need an SdA class:

class SdA(object):
    def __init__(
        self,
        numpy_rng,
        theano_rng=None,
        n_ins=280,
        hidden_layers_sizes=[500, 500],
        n_outs=5,
        corruption_levels=[0.1, 0.1]
    ):

As we previously discussed, the SdA is created by feeding the encoding from one layer's autoencoder as the input to the subsequent layer. This class supports the configuration of the layer count (reflected in, but not set by, the length of the hidden_layers_sizes and corruption_levels vectors). It also supports differentiated layer sizes (in nodes) at each layer, which can be set using hidden_layers_sizes. As we discussed, the ability to configure successive layers of the autoencoder is critical to developing successful representations.

Next, we need parameters to store the MLP (self.sigmoid_layers) and dA (self.dA_layers) elements of the SdA. In order to specify the depth of our architecture, we use the self.n_layers parameter to specify the number of sigmoid and dA layers required:

self.sigmoid_layers = []
self.dA_layers = []
self.params = []
self.n_layers = len(hidden_layers_sizes)

assert self.n_layers > 0

Next, we need to construct our sigmoid and dA layers. We begin by setting the hidden layer size either from the input vector size or from the activation of the preceding layer. Following this, the sigmoid_layers and dA_layers components are created, with the dA layer drawing from the dA class we discussed earlier in this article:

for i in xrange(self.n_layers):
    if i == 0:
        input_size = n_ins
    else:
        input_size = hidden_layers_sizes[i - 1]

    if i == 0:
        layer_input = self.x
    else:
        layer_input = self.sigmoid_layers[-1].output

    sigmoid_layer = HiddenLayer(rng=numpy_rng,
                                input=layer_input,
                                n_in=input_size,
                                n_out=hidden_layers_sizes[i],
                                activation=T.nnet.sigmoid)
    self.sigmoid_layers.append(sigmoid_layer)
    self.params.extend(sigmoid_layer.params)

    dA_layer = dA(numpy_rng=numpy_rng,
                  theano_rng=theano_rng,
                  input=layer_input,
                  n_visible=input_size,
                  n_hidden=hidden_layers_sizes[i],
                  W=sigmoid_layer.W,
                  bhid=sigmoid_layer.b)
    self.dA_layers.append(dA_layer)

Having implemented the layers of our SdA, we'll need a final, logistic regression layer to complete the MLP component of the network:

self.logLayer = LogisticRegression(
    input=self.sigmoid_layers[-1].output,
    n_in=hidden_layers_sizes[-1],
    n_out=n_outs
)

self.params.extend(self.logLayer.params)
self.finetune_cost = self.logLayer.negative_log_likelihood(self.y)
self.errors = self.logLayer.errors(self.y)

This completes the architecture of our SdA. Next up, we need to generate the training functions used by the SdA class.
Each function will have the minibatch index (index) as an argument, together with several other elements; corruption_level and learning_rate are enabled here so that we can adjust them (for example, gradually increase or decrease them) during training. Additionally, we identify variables that mark where the batch starts and ends: batch_begin and batch_end, respectively.

def pretraining_functions(self, train_set_x, batch_size):
    index = T.lscalar('index')
    corruption_level = T.scalar('corruption')
    learning_rate = T.scalar('lr')
    batch_begin = index * batch_size
    batch_end = batch_begin + batch_size

    pretrain_fns = []
    for dA in self.dA_layers:
        cost, updates = dA.get_cost_updates(corruption_level, learning_rate)
        fn = theano.function(
            inputs=[
                index,
                theano.Param(corruption_level, default=0.2),
                theano.Param(learning_rate, default=0.1)
            ],
            outputs=cost,
            updates=updates,
            givens={
                self.x: train_set_x[batch_begin: batch_end]
            }
        )
        pretrain_fns.append(fn)

    return pretrain_fns

The ability to dynamically adjust the learning rate is particularly helpful and may be applied in one of two ways. Once a technique has begun to converge on an appropriate solution, it is very helpful to be able to reduce the learning rate. If you do not do this, you risk creating a situation in which the network oscillates between values located around the optimum, without ever hitting it. In some contexts, it can be helpful to tie the learning rate to the network's performance measure. If the error rate is high, it can make sense to make larger adjustments until the error rate begins to decrease!

The pretraining function we've created takes the minibatch index and can optionally take the corruption level or learning rate. It performs one step of pretraining and outputs the cost value and vector of weight updates.

In addition to pretraining, we need to build functions to support the fine-tuning stage, where the network is run iteratively over the validation and test data to optimize network parameters. The train_fn implements a single step of fine-tuning. The valid_score is a Python function that computes a validation score using the error measure produced by the SdA over validation data. Similarly, test_score computes the error score over test data. To get this process off the ground, we first need to set up training, validation, and test datasets. Each stage requires two datasets (set x and set y), containing the features and class labels, respectively. The required number of minibatches for validation and test is determined, and an index is created to track batch size (and provide a means of identifying at which entries a batch starts and ends).
Training, validation, and testing occur for each batch, and afterward both valid_score and test_score are calculated across all batches:

def build_finetune_functions(self, datasets, batch_size, learning_rate):

    (train_set_x, train_set_y) = datasets[0]
    (valid_set_x, valid_set_y) = datasets[1]
    (test_set_x, test_set_y) = datasets[2]

    n_valid_batches = valid_set_x.get_value(borrow=True).shape[0]
    n_valid_batches /= batch_size
    n_test_batches = test_set_x.get_value(borrow=True).shape[0]
    n_test_batches /= batch_size

    index = T.lscalar('index')

    gparams = T.grad(self.finetune_cost, self.params)

    updates = [
        (param, param - gparam * learning_rate)
        for param, gparam in zip(self.params, gparams)
    ]

    train_fn = theano.function(
        inputs=[index],
        outputs=self.finetune_cost,
        updates=updates,
        givens={
            self.x: train_set_x[index * batch_size: (index + 1) * batch_size],
            self.y: train_set_y[index * batch_size: (index + 1) * batch_size]
        },
        name='train'
    )

    test_score_i = theano.function(
        [index],
        self.errors,
        givens={
            self.x: test_set_x[index * batch_size: (index + 1) * batch_size],
            self.y: test_set_y[index * batch_size: (index + 1) * batch_size]
        },
        name='test'
    )

    valid_score_i = theano.function(
        [index],
        self.errors,
        givens={
            self.x: valid_set_x[index * batch_size: (index + 1) * batch_size],
            self.y: valid_set_y[index * batch_size: (index + 1) * batch_size]
        },
        name='valid'
    )

    def valid_score():
        return [valid_score_i(i) for i in xrange(n_valid_batches)]

    def test_score():
        return [test_score_i(i) for i in xrange(n_test_batches)]

    return train_fn, valid_score, test_score

With the training functionality in place, the following code initiates our SdA:

numpy_rng = numpy.random.RandomState(89677)
print '... building the model'
sda = SdA(
    numpy_rng=numpy_rng,
    n_ins=280,
    hidden_layers_sizes=[240, 170, 100],
    n_outs=5
)

It should be noted that, at this point, we should be trying an initial configuration of layer sizes to see how we do. In this case, the layer sizes used here are the product of some initial testing. As we discussed, training the SdA occurs in two stages. The first is a layer-wise pretraining process that loops over all of the SdA's layers. The second is a process of fine-tuning over validation and test data.

To pretrain the SdA, we provide the required corruption levels to train each layer and iterate over the layers using our previously defined pretraining_fns:

print '... getting the pretraining functions'
pretraining_fns = sda.pretraining_functions(train_set_x=train_set_x,
                                            batch_size=batch_size)

print '... pre-training the model'
start_time = time.clock()
corruption_levels = [.1, .2, .2]
for i in xrange(sda.n_layers):
    for epoch in xrange(pretraining_epochs):
        c = []
        for batch_index in xrange(n_train_batches):
            c.append(pretraining_fns[i](index=batch_index,
                                        corruption=corruption_levels[i],
                                        lr=pretrain_lr))
        print 'Pre-training layer %i, epoch %d, cost ' % (i, epoch),
        print numpy.mean(c)

end_time = time.clock()

print >> sys.stderr, ('The pretraining code for file ' +
                      os.path.split(__file__)[1] +
                      ' ran for %.2fm' % ((end_time - start_time) / 60.))

At this point, we're able to initialize our SdA class by calling the preceding code, stored in this book's GitHub repository: MasteringMLWithPython/Chapter3/SdA.py.
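The preceding excerpt references a few variables (batch_size, n_train_batches, pretraining_epochs, pretrain_lr) that are defined elsewhere in the chapter's full script. As a hedged sketch only, since these particular values are assumptions rather than taken from the book, a plausible setup might look like this:

# Hypothetical values for the variables assumed by the excerpt above
batch_size = 20
pretraining_epochs = 15   # matches the 15 epochs per layer quoted in the next section
pretrain_lr = 0.001       # a small layer-wise pretraining step size
n_train_batches = train_set_x.get_value(borrow=True).shape[0] / batch_size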
Assessing SdA performance

The SdA will take a significant length of time to run. With 15 epochs per layer, and each layer typically taking an average of 11 minutes, the network will run for around 500 minutes on a modern desktop system with GPU acceleration and a single-threaded GotoBLAS. On a system without GPU acceleration, the network will take substantially longer to train, and it is recommended that you use the alternative, which runs over a significantly smaller input dataset: MasteringMLWithPython/Chapter3/SdA_no_blas.py.

The results are of a high quality, with a validation error score of 3.22% and a test error score of 3.14%. These results are particularly impressive given the ambiguous and sometimes challenging nature of natural language processing applications.

It was noticeable that the network classified the 1-star and 5-star rating cases more correctly than the intermediate levels. This is largely due to the ambiguous nature of unpolarized or unemotional language. Part of the reason this input data could be classified so well is the significant feature engineering behind it. While time-consuming and sometimes problematic, we've seen that well-executed feature engineering combined with an optimized model can deliver an excellent level of accuracy.

Summary

In this article, we introduced the autoencoder, an effective dimensionality reduction technique with some unique applications. We focused on the theory behind the SdA, an extension of autoencoders whereby any number of autoencoders are stacked in a deep architecture.

Resources for Article:

Further resources on this subject:
Exception Handling in MySQL for Python [article]
Clustering Methods [article]
Machine Learning Using Spark MLlib [article]

Creating Classes

Packt
11 Jul 2016
17 min read
In this article by William Sherif and Stephen Whittle, authors of the book Unreal Engine 4 Scripting with C++ Cookbook, we will discuss how to create C++ classes and structs that integrate well with the UE4 Blueprints Editor. These classes are graduated versions of regular C++ classes, and are called UCLASSes.

(For more resources related to this topic, see here.)

A UCLASS is just a C++ class with a whole lot of UE4 macro decoration on top. The macros generate additional C++ header code that enables integration with the UE4 Editor itself. Using UCLASS is a great practice. The UCLASS macro, if configured correctly, can make your UCLASS Blueprintable. The advantage of making your UCLASS Blueprintable is that it can enable your custom C++ objects to have Blueprints visually-editable properties (UPROPERTY) with handy UI widgets such as text fields, sliders, and model selection boxes. You can also have functions (UFUNCTION) that are callable from within a Blueprints diagram. Both of these are shown in the following images: on the left, two UPROPERTY-decorated class members (a UTexture reference and an FColor) show up for editing in a C++ class's Blueprint; on the right, a C++ function GetName marked as a BlueprintCallable UFUNCTION shows up as callable from a Blueprints diagram.

Code generated by the UCLASS macro will be located in a ClassName.generated.h file, which will be the last #include required in your UCLASS header file, ClassName.h.

The following are the topics that we will cover in this article:

Making a UCLASS – Deriving from UObject
Creating a user-editable UPROPERTY
Accessing a UPROPERTY from Blueprints
Specifying a UCLASS as the type of a UPROPERTY
Creating a Blueprint from your custom UCLASS

Making a UCLASS – Deriving from UObject

When coding with C++, you can have your own code that compiles and runs as native C++ code, with appropriate calls to new and delete to create and destroy your custom objects. Native C++ code is perfectly acceptable in your UE4 project as long as your new and delete calls are appropriately paired so that no leaks are present in your C++ code.

You can, however, also declare custom C++ classes that behave like UE4 classes, by declaring your custom C++ objects as UCLASSes. UCLASSes use UE4's Smart Pointers and memory management routines for allocation and deallocation according to Smart Pointer rules, can be loaded and read by the UE4 Editor, and can optionally be accessed from Blueprints.

Note that when you use the UCLASS macro, your UCLASS object's creation and destruction must be completely managed by UE4: you must use ConstructObject to create an instance of your object (not the C++ native keyword new), and call UObject::ConditionalBeginDestroy() to destroy the object (not the C++ native keyword delete).

Getting ready

In this recipe, we will outline how to write a C++ class that uses the UCLASS macro to enable managed memory allocation and deallocation as well as to permit access from the UE4 Editor and Blueprints. You need a UE4 project into which you can add new code to use this recipe.

How to do it...

From your running project, select File | Add C++ Class inside the UE4 Editor. In the Add C++ Class dialog that appears, go to the upper-right side of the window, and tick the Show All Classes checkbox.

Creating a UCLASS by choosing to derive from the Object parent class. UObject is the root of the UE4 hierarchy.
You must tick the Show All Classes checkbox in the upper-right corner of this dialog for the Object class to appear in the list view. Select Object (top of the hierarchy) as the parent class to inherit from, and then click on Next.

Note that although Object will be written in the dialog box, in your C++ code, the C++ class you will be deriving from is actually UObject, with a leading uppercase U. This is the naming convention of UE4:

UCLASSes deriving from UObject (on a branch other than Actor) must be named with a leading U.
UCLASSes deriving from Actor must be named with a leading A.
C++ classes (that are not UCLASSes) deriving from nothing do not have a naming convention, but can be named with a leading F (for example, FAssetData), if preferred.

Direct derivatives of UObject will not be level-placeable, even if they contain visual representation elements such as UStaticMeshes. If you want to place your object inside a UE4 level, you must at least derive from the Actor class or beneath it in the inheritance hierarchy. This article's example code will not be placeable in the level, but you can create and use Blueprints based on the C++ classes that we write in this article in the UE4 Editor.

Name your new Object derivative something appropriate for the object type that you are creating. I call mine UserProfile. This comes out as UUserProfile in the naming of the class in the C++ file that UE4 generates, ensuring that the UE4 conventions are followed (C++ UCLASS names preceded with a leading U). We will use the C++ object that we've created to store the Name and Email of a user that plays our game.

Go to Visual Studio, and ensure your class file has the following form:

#pragma once

#include "Object.h" // For deriving from UObject
#include "UserProfile.generated.h" // Generated code

// UCLASS macro options set this C++ class to be
// Blueprintable within the UE4 Editor
UCLASS( Blueprintable )
class CHAPTER2_API UUserProfile : public UObject
{
  GENERATED_BODY()
};

Compile and run your project. You can now use your custom UCLASS object inside Visual Studio and inside the UE4 Editor. See the following recipes for more details on what you can do with it.

How it works…

UE4 generates and manages a significant amount of code for your custom UCLASS. This code is generated as a result of the use of the UE4 macros such as UPROPERTY, UFUNCTION, and the UCLASS macro itself. The generated code is put into UserProfile.generated.h. You must #include the UCLASSNAME.generated.h file with the UCLASSNAME.h file for compilation to succeed. Without including the UCLASSNAME.generated.h file, compilation would fail. The UCLASSNAME.generated.h file must be included as the last #include in the list of #includes in UCLASSNAME.h:

Right:

#pragma once

#include "Object.h"
#include "Texture.h"
// CORRECT: .generated.h is the last #include
#include "UserProfile.generated.h"

Wrong:

#pragma once

#include "Object.h"
#include "UserProfile.generated.h"
// WRONG: no #includes after the .generated.h file
#include "Texture.h"

The error that occurs when a UCLASSNAME.generated.h file is not included last in a list of includes is as follows:

>> #include found after .generated.h file - the .generated.h file should always be the last #include in a header

There's more…

There are a bunch of keywords that we want to discuss here, which modify the way a UCLASS behaves.
A UCLASS can be marked as follows: Blueprintable: This means that you want to be able to construct a Blueprint from the Class Viewer inside the UE4 Editor (when you right-click, Create Blueprint Class… becomes available). Without the Blueprintable keyword, the Create Blueprint Class… option will not be available for your UCLASS, even if you can find it from within the Class Viewer and right-click on it. The Create Blueprint Class… option is only available if you specify Blueprintable in your UCLASS macro definition. If you do not specify Blueprintable, then the resultant UCLASS will not be Blueprintable. BlueprintType:  Using this keyword implies that the UCLASS is usable as a variable from another Blueprint. You can create Blueprint variables from the Variables group in the left-hand panel of any Blueprint's EventGraph. If NotBlueprintType is specified, then you cannot use this Blueprint variable type as a variable in a Blueprints diagram. Right-clicking the UCLASS name in the Class Viewer will not show Create Blueprint Class… in its context menu. Any UCLASS that have BlueprintType specified can be added as variables to your Blueprint class diagram's list of variables. You may be unsure whether to declare your C++ class as a UCLASS or not. It is really up to you. If you like smart pointers, you may find that UCLASS not only make for safer code, but also make the entire code base more coherent and more consistent. See also To add additional programmable UPROPERTY to the Blueprints diagrams, see the section on Creating a user-editable UPROPERTY, further in the article. Creating a user-editable UPROPERTY Each UCLASS that you declare can have any number of UPROPERTY declared for it within it. Each UPROPERTY can be a visually editable field, or some Blueprints accessible data member of the UCLASS. There are a number of qualifiers that we can add to each UPROPERTY, which change the way it behaves from within the UE4 Editor, such as EditAnywhere (screens from which the UPROPERTY can be changed), and BlueprintReadWrite (specifying that Blueprints can both read and write the variable at any time in addition to the C++ code being allowed to do so). Getting ready To use this recipe, you should have a C++ project into which you can add C++ code. In addition, you should have completed the preceding recipe, Making a UCLASS – Deriving from UObject. How to do it... Add members to your UCLASS declaration as follows: UCLASS( Blueprintable ) class CHAPTER2_API UUserProfile : public UObject { GENERATED_BODY() public: UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = Stats) float Armor; UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = Stats) float HpMax; }; Create a Blueprint of your UObject class derivative, and open the Blueprint in the UE4 editor by double-clicking it from the object browser. You can now specify values in Blueprints for the default values of these new UPROPERTY fields. Specify per-instance values by dragging and dropping a few instances of the Blueprint class into your level, and editing the values on the object placed (by double-clicking on them). How it works… The parameters passed to the UPROPERTY() macro specify a couple of important pieces of information regarding the variable. In the preceding example, we specified the following: EditAnywhere: This means that the UPROPERTY() macro can be edited either directly from the Blueprint, or on each instance of the UClass object as placed in the game level. 
Contrast this with the following: EditDefaultsOnly: The Blueprint's value is editable, but it is not editable on a per-instance basis EditInstanceOnly: This would allow editing of the UPROPERTY() macro in the game-level instances of the UClass object, and not on the base blueprint itself BlueprintReadWrite: This indicates that the property is both readable and writeable from Blueprints diagrams. UPROPERTY() with BlueprintReadWrite must be public members, otherwise compilation will fail. Contrast this with the following: BlueprintReadOnly: The property must be set from C++ and cannot be changed from Blueprints Category: You should always specify a Category for your UPROPERTY(). The Category determines which submenu the UPROPERTY() will appear under in the property editor. All UPROPERTY() specified under Category=Stats will appear in the same Stats area in the Blueprints editor. See also A complete UPROPERTY listing is located at https://docs.unrealengine.com/latest/INT/Programming/UnrealArchitecture/Reference/Properties/Specifiers/index.html. Accessing a UPROPERTY from Blueprints Accessing a UPROPERTY from Blueprints is fairly simple. The member must be exposed as a UPROPERTY on the member variable that you want to access from your Blueprints diagram. You must qualify the UPROPERTY in your macro declaration as being either BlueprintReadOnly or BlueprintReadWrite to specify whether you want the variable to be either readable (only) from Blueprints, or even writeable from Blueprints. You can also use the special value BlueprintDefaultsOnly to indicate that you only want the default value (before the game starts) to be editable from the Blueprints editor. BlueprintDefaultsOnly indicates the data member cannot be edited from Blueprints at runtime. How to do it... Create some UObject-derivative class, specifying both Blueprintable and BlueprintType, such as the following: UCLASS( Blueprintable, BlueprintType ) class CHAPTER2_API UUserProfile : public UObject { GENERATED_BODY() public: UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = Stats) FString Name; }; The BlueprintType declaration in the UCLASS macro is required to use the UCLASS as a type within a Blueprints diagram. Within the UE4 Editor, derive a Blueprint class from the C++ class, as shown in Creating a Blueprint from your custom UCLASS. Create an instance of your Blueprint-derived class in the UE4 Editor by dragging an instance from the Content Browser into the main game world area. It should appear as a round white sphere in the game world unless you've specified a model mesh for it. In a Blueprints diagram which allows function calls (such as the Level Blueprint, accessible via Blueprints | Open Level Blueprint), try printing the Name property of your Warrior instance, as seen in the following screenshot: Navigating Blueprints diagrams is easy. Right-Click + Drag to pan a Blueprints diagram; Alt + Right-Click + Drag to zoom. How it works… UPROPERTY are automatically written Get/Set methods for UE4 classes. They must not be declared as private variables within the UCLASS, however. If they are not declared as public or protected members, you will get a compiler error of the form: >> BlueprintReadWrite should not be used on private members Specifying a UCLASS as the type of a UPROPERTY So, you've constructed some custom UCLASS intended for use inside of UE4. But how do you instantiate them? Objects in UE4 are reference-counted and memory-managed, so you should not allocate them directly using the C++ keyword new. 
Instead, you'll have to use a function called ConstructObject to instantiate your UObject derivative. ConstructObject doesn't just take the C++ class of the object you are creating; it also requires a Blueprint class derivative of the C++ class (a UClass* reference). A UClass* reference is just a pointer to a Blueprint.

How do we instantiate an instance of a particular Blueprint from C++ code? C++ code does not, and should not, know concrete UCLASS names, since these names are created and edited in the UE4 Editor, which you can only access after compilation. We need a way to somehow hand the name of the Blueprint class to instantiate back to the C++ code. The way we do this is by having the UE4 programmer select the UClass that the C++ code is to use from a simple dropdown menu listing all the Blueprints available (derived from a particular C++ class) inside the UE4 editor. To do this, we simply have to provide a user-editable UPROPERTY with a TSubclassOf<C++ClassName>-typed variable. Alternatively, you can use FStringClassReference to achieve the same objective. This makes selecting the UCLASS in the C++ code exactly like selecting a Texture to use. UCLASSes should be considered as resources to the C++ code, and their names should never be hard-coded into the code base.

Getting ready

In your UE4 code, you're often going to need to refer to different UCLASSes in the project. For example, say you need to know the UCLASS of the player object so that you can use SpawnObject in your code on it. Specifying a UCLASS from C++ code is extremely awkward, because the C++ code is not supposed to know about the concrete instances of the derived UCLASSes that were created in the Blueprints editor at all. Just as we don't want to bake specific asset names into the C++ code, we don't want to hard-code derived Blueprints class names into the C++ code. So, we use a C++ variable (for example, UClassOfPlayer), and select it from a Blueprints dialog in the UE4 editor. You can do so using a TSubclassOf member or an FStringClassReference member, as shown in the following screenshot:

How to do it...

Navigate to the C++ class that you'd like to add the UCLASS reference member to. For example, decking out a class derivative with the UCLASS of the player is fairly easy. From inside a UCLASS, use code of the following form to declare a UPROPERTY that allows selection of a UClass (Blueprint class) that derives from UObject in the hierarchy:

UCLASS()
class CHAPTER2_API UUserProfile : public UObject
{
  UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = Unit)
  TSubclassOf<UObject> UClassOfPlayer; // Displays any UClasses
  // deriving from UObject in a dropdown menu in Blueprints

  // Displays string names of UCLASSes that derive from
  // the GameMode C++ base class
  UPROPERTY( EditAnywhere, meta=(MetaClass="GameMode"), Category = Unit )
  FStringClassReference UClassGameMode;
};

Blueprint the C++ class, and then open that Blueprint. Click on the drop-down menu beside your UClassOfPlayer member. Select the appropriate UClassOfPlayer value from the drop-down menu of the listed UClasses.

How it works…

TSubclassOf

The TSubclassOf< > member will allow you to specify a UClass name using a drop-down menu inside of the UE4 editor when editing any Blueprints that have TSubclassOf< > members.

FStringClassReference

The MetaClass tag refers to the base C++ class from which you expect the UClassName to derive. This limits the drop-down menu's contents to only the Blueprints derived from that C++ class.
You can leave the MetaClass tag out if you wish to display all the Blueprints in the project. Creating a Blueprint from your custom UCLASS Blueprinting is just the process of deriving a Blueprint class for your C++ object. Creating Blueprint-derived classes from your UE4 objects allows you to edit the custom UPROPERTY visually inside the editor. This avoids hardcoding any resources into your C++ code. In addition, in order for your C++ class to be placeable within the level, it must be Blueprinted first. But this is only possible if the C++ class underlying the Blueprint is an Actor class-derivative. There is a way to load resources (like textures) using FStringAssetReferences and StaticLoadObject. These pathways to loading resources (by hardcoding path strings into your C++ code) are generally discouraged, however. Providing an editable value in a UPROPERTY(), and loading from a proper concretely typed asset reference is a much better practice. Getting ready You need to have a constructed UCLASS that you'd like to derive a Blueprint class from (see the section on Making a UCLASS – Deriving from UObject given earlier in this article) in order to follow this recipe. You must have also marked your UCLASS as Blueprintable in the UCLASS macro for Blueprinting to be possible inside the engine. Any UObject-derived class with the meta keyword Blueprintable in the UCLASS macro declaration will be Blueprintable. How to do it… To Blueprint your UserProfile class, first ensure that UCLASS has the Blueprintable tag in the UCLASS macro. This should look as follows: UCLASS( Blueprintable ) class CHAPTER2_API UUserProfile : public UObject Compile and run your code. Find the UserProfile C++ class in the Class Viewer (Window | Developer Tools | Class Viewer). Since the previously created UCLASS does not derive from Actor, to find your custom UCLASS, you must turn off Filters | Actors Only in the Class Viewer (which is checked by default). Turn off the Actors Only check mark to display all the classes in the Class Viewer. If you don't do this, then your custom C++ class may not show! Keep in mind that you can use the small search box inside the Class Viewer to easily find the UserProfile class by starting to type it in. Find your UserProfile class in the Class Viewer, right-click on it, and create a Blueprint from it by selecting Create Blueprint… Name your Blueprint. Some prefer to prefix the Blueprint class name with BP_. You may choose to follow this convention or not, just be sure to be consistent. Double-click on your new Blueprint as it appears in the Content Browser, and take a look at it. You will be able to edit the Name and Email fields for each UserProfile Blueprint instance you create. How it works… Any C++ class you create that has the Blueprintable tag in its UCLASS macro can be Blueprinted within the UE4 editor. A blueprint allows you to customize properties on the C++ class in the visual GUI interface of UE4. Summary The UE4 code is, typically, very easy to write and manage once you know the patterns. The code we write to derive from another UCLASS, or to create a UPROPERTY or UFUNCTION is very consistent. This article provides recipes for common UE4 coding tasks revolving around basic UCLASS derivation, property and reference declaration, construction, destruction, and general functionality. Resources for Article: Further resources on this subject: Development Tricks with Unreal Engine 4[article] An overview of Unreal Engine[article] Overview of Unreal Engine 4[article]

Mining Twitter with Python – Influence and Engagement

Packt
11 Jul 2016
10 min read
In this article by Marco Bonzanini, author of the book Mastering Social Media Mining with Python, we will discuss mining Twitter data. Here, we will analyze users, their connections, and their interactions. In particular, we will discuss how to measure influence and engagement on Twitter.

(For more resources related to this topic, see here.)

Measuring influence and engagement

One of the most commonly mentioned characters in the social media arena is the mythical influencer. This figure is responsible for a paradigm shift in recent marketing strategies (https://en.wikipedia.org/wiki/Influencer_marketing), which focus on targeting key individuals rather than the market as a whole. Influencers are typically active users within their community. In the case of Twitter, an influencer tweets a lot about topics they care about. Influencers are well connected, as they follow and are followed by many other users who are also involved in the community. In general, an influencer is also regarded as an expert in their area, and is typically trusted by other users.

This description should explain why influencers are an important part of recent trends in marketing: an influencer can increase awareness or even become an advocate of a specific product or brand, and can reach a vast number of supporters. Whether your main interest is Python programming or wine tasting, and regardless of how huge (or tiny) your social network is, you probably already have an idea who the influencers in your social circles are: a friend, acquaintance, or random stranger on the Internet whose opinion you trust and value because of their expertise on the given subject.

A different, but somewhat related, concept is engagement. User engagement, or customer engagement, is the assessment of the response to a particular offer, such as a product or service. In the context of social media, pieces of content are often created with the purpose of driving traffic towards the company website or e-commerce. Measuring engagement is important, as it helps in defining and understanding strategies to maximize the interactions with your network, and ultimately bring business. On Twitter, users engage by means of retweeting or liking a particular tweet, which, in return, provides more visibility to the tweet itself.

In this section, we'll discuss some interesting aspects of social media analysis regarding the possibility of measuring influence and engagement. On Twitter, a natural thought would be to associate influence with the number of users in a particular network. Intuitively, a high number of followers means that a user can reach more people, but it doesn't tell us how a tweet is perceived.

The following script compares some statistics for two user profiles:
The following script compares some statistics for two user profiles: import sys import json   def usage():   print("Usage:")   print("python {} <username1><username2>".format(sys.argv[0]))   if __name__ == '__main__':   if len(sys.argv) != 3:     usage()     sys.exit(1)   screen_name1 = sys.argv[1]   screen_name2 = sys.argv[2] After reading the two screen names from the command line, we will build up a list of followersfor each of them, including their number of followers to calculate the number of reachable users: followers_file1 = 'users/{}/followers.jsonl'.format(screen_name1)   followers_file2 = 'users/{}/followers.jsonl'.format(screen_name2)   with open(followers_file1) as f1, open(followers_file2) as f2:     reach1 = []     reach2 = []     for line in f1:       profile = json.loads(line)       reach1.append((profile['screen_name'], profile['followers_count']))     for line in f2:       profile = json.loads(line)       reach2.append((profile['screen_name'],profile['followers_count'])) We will then load some basic statistics (followers and statuses count) from the two user profiles: profile_file1 = 'users/{}/user_profile.json'.format(screen_name1)   profile_file2 = 'users/{}/user_profile.json'.format(screen_name2)   with open(profile_file1) as f1, open(profile_file2) as f2:     profile1 = json.load(f1)     profile2 = json.load(f2)     followers1 = profile1['followers_count']     followers2 = profile2['followers_count']     tweets1 = profile1['statuses_count']     tweets2 = profile2['statuses_count']     sum_reach1 = sum([x[1] for x in reach1])   sum_reach2 = sum([x[1] for x in reach2])   avg_followers1 = round(sum_reach1 / followers1, 2)   avg_followers2 = round(sum_reach2 / followers2, 2) We will also load the timelines for the two users, in particular, to observe the number of times their tweets have been favorited or retweeted: timeline_file1 = 'user_timeline_{}.jsonl'.format(screen_name1)   timeline_file2 = 'user_timeline_{}.jsonl'.format(screen_name2)   with open(timeline_file1) as f1, open(timeline_file2) as f2:     favorite_count1, retweet_count1 = [], []     favorite_count2, retweet_count2 = [], []     for line in f1:       tweet = json.loads(line)       favorite_count1.append(tweet['favorite_count'])       retweet_count1.append(tweet['retweet_count'])     for line in f2:       tweet = json.loads(line)       favorite_count2.append(tweet['favorite_count'])       retweet_count2.append(tweet['retweet_count']) The preceding numbers are then aggregated into average number of favorites and average number of retweets, both in absolute terms and per number of followers: avg_favorite1 = round(sum(favorite_count1) / tweets1, 2)   avg_favorite2 = round(sum(favorite_count2) / tweets2, 2)   avg_retweet1 = round(sum(retweet_count1) / tweets1, 2)   avg_retweet2 = round(sum(retweet_count2) / tweets2, 2)   favorite_per_user1 = round(sum(favorite_count1) / followers1, 2)   favorite_per_user2 = round(sum(favorite_count2) / followers2, 2)   retweet_per_user1 = round(sum(retweet_count1) / followers1, 2)   retweet_per_user2 = round(sum(retweet_count2) / followers2, 2)   print("----- Stats {} -----".format(screen_name1))   print("{} followers".format(followers1))   print("{} users reached by 1-degree connections".format(sum_reach1))   print("Average number of followers for {}'s followers: {}".format(screen_name1, avg_followers1))   print("Favorited {} times ({} per tweet, {} per user)".format(sum(favorite_count1), avg_favorite1, favorite_per_user1))   print("Retweeted {} times ({} per tweet, {} per 
user)".format(sum(retweet_count1), avg_retweet1, retweet_per_user1))   print("----- Stats {} -----".format(screen_name2))   print("{} followers".format(followers2))   print("{} users reached by 1-degree connections".format(sum_reach2))   print("Average number of followers for {}'s followers: {}".format(screen_name2, avg_followers2))   print("Favorited {} times ({} per tweet, {} per user)".format(sum(favorite_count2), avg_favorite2, favorite_per_user2))   print("Retweeted {} times ({} per tweet, {} per user)".format(sum(retweet_count2), avg_retweet2, retweet_per_user2)) This script takes two arguments from the command line and assumes that the data has already been downloaded. In particular, for both users, we need the data about followers and the respective user timelines. The script is somehow verbose, because it computes the same operations for two profiles and prints everything on the terminal. We can break it down into different parts. Firstly, we will look into the followers' followers. This will provide some information related to the part of the network immediately connected to the given user. In other words, it should answer the question how many users can I reach if all my followers retweet me? We can achieve this by reading the users/<user>/followers.jsonl file and keeping a list of tuples, where each tuple represents one of the followers and is in the (screen_name, followers_count)form. Keeping the screen name at this stage is useful in case we want to observe who the users with the highest number of followers are (not computed in the script, but easy to produce using sorted()). In the second step, we will read the user profile from the users/<user>/user_profile.jsonfile so that we can get information about the total number of followers and the total number of tweets. With the data collected so far, we can compute the total number of users who are reachable within a degree of separation (follower of a follower) and the average number of followers of a follower. This is achieved via the following lines: sum_reach1 = sum([x[1] for x in reach1]) avg_followers1 = round(sum_reach1 / followers1, 2) The first one uses a list comprehension to iterate through the list of tuples mentioned previously, while the second one is a simple arithmetic average, rounded to two decimal points. The third part of the script reads the user timeline from the user_timeline_<user>.jsonlfile and collects information about the number of retweets and favorite for each tweet. Putting everything together allows us to calculate how many times a user has been retweeted or favorited and what is the average number of retweet/favorite per tweet and follower. 
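The text above notes that ranking the best-connected followers is easy with sorted(), even though the script does not compute it. A minimal sketch, reusing the reach1 list of (screen_name, followers_count) tuples built earlier in the script, might be:

# Top ten followers ranked by their own follower counts
# (assumes reach1 from the script above)
top_followers = sorted(reach1, key=lambda user: user[1], reverse=True)[:10]
for screen_name, followers in top_followers:
    print("{}: {} followers".format(screen_name, followers))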
To provide an example, I'll perform some vanity analysis and compare my account, @marcobonzanini, with Packt Publishing:

$ python twitter_influence.py marcobonzanini PacktPub

The script produces the following output:

----- Stats marcobonzanini -----
282 followers
1411136 users reached by 1-degree connections
Average number of followers for marcobonzanini's followers: 5004.03
Favorited 268 times (1.47 per tweet, 0.95 per user)
Retweeted 912 times (5.01 per tweet, 3.23 per user)
----- Stats PacktPub -----
10209 followers
29961760 users reached by 1-degree connections
Average number of followers for PacktPub's followers: 2934.84
Favorited 3554 times (0.33 per tweet, 0.35 per user)
Retweeted 6434 times (0.6 per tweet, 0.63 per user)

As you can see, the raw number of followers shows no contest, with Packt Publishing having approximately 35 times more followers than me. The interesting part of this analysis comes up when we compare the average number of retweets and favorites: apparently, my followers are much more engaged with my content than PacktPub's. Is this enough to declare that I'm an influencer while PacktPub is not? Clearly not. What we observe here is a natural consequence of the fact that my tweets are probably more focused on specific topics (Python and data science), hence my followers are already more interested in what I'm publishing. On the other hand, the content produced by Packt Publishing is highly diverse, as it ranges across many different technologies. This diversity is also reflected in PacktPub's followers, who include developers, designers, scientists, system administrators, and so on. For this reason, each of PacktPub's tweets is found interesting (that is, worth retweeting) by a smaller proportion of their followers.

Summary

In this article, we discussed mining data from Twitter by focusing on the analysis of user connections and interactions. In particular, we discussed how to compare influence and engagement between users.

For more information on social media mining, refer to the following books by Packt Publishing:

Social Media Mining with R: https://www.packtpub.com/big-data-and-business-intelligence/social-media-mining-r
Mastering Social Media Mining with R: https://www.packtpub.com/big-data-and-business-intelligence/mastering-social-media-mining-r

Further resources on this subject:

Probabilistic Graphical Models in R [article]
Machine Learning Tasks [article]
Support Vector Machines as a Classification Engine [article]

Clustered Container Deployments with Docker Swarm, Part 4

Darwin Corn
11 Jul 2016
6 min read
Welcome to Part 4 of a series that started with creating your very first Docker container and has gradually climbed the learning curve, from deploying that container in the cloud to dealing with multiple container applications. If you haven't started from the beginning of this series, I highly suggest you at least read the posts—if not follow along with the tutorial—to give context to this fourth installment in the series.

Let's start where we left off in Part 3, with a containerized Taiga instance. We want to deploy it in a more robust setup than a few containers running on a single host, and Docker has a couple of tools that will let us do this in concert with Compose, which was covered in Part 3. I've updated the docker-taiga repo with a swarm branch, so go ahead and run a git pull if you've come here from Part 3. Running git checkout swarm will switch you to that branch, and that'll get you ready to follow along with the rest of the post, which will really just break down the deploy.sh shell script in the root of the application. If you're impatient, have Virtualbox and Docker Machine installed, and have at least 4 GB of available RAM, go ahead and kick off the script to create a highly available cluster hosting the Taiga application on your very own machine. Of course, if you're deploying this on a server or in the cloud, you'll need to modify the script to tell Docker Machine to use the driver for your virtualization platform of choice.

The two tools mentioned at the beginning of this series are Machine and Swarm, and we'll look at them independently before diving into how they can be used in concert with Compose to automate a clustered application deployment.

Docker Machine

Despite the fact that I wrote the previous two posts with the unwritten assumption that you were following along on a Linux box, market share and target audience dictate that you should be familiar with Docker Machine, since you've used it to install the Docker daemon on your Mac or PC running Windows. If you're running Linux, installing it is as easy as installing Compose was in Part 3:

# curl -L https://github.com/docker/machine/releases/download/v0.6.0/docker-machine-`uname -s`-`uname -m` > /usr/local/bin/docker-machine && chmod +x /usr/local/bin/docker-machine

Docker Machine is far more useful than just giving you a local Virtualbox VM to run the Docker daemon on! It's a full-fledged provisioning tool. In fact, you can provision a full Swarm cluster with Machine, fully automating your application deployments. Let's first take a quick look at Swarm before we get back to harnessing the power of Machine to automate our Swarm deployments.

Docker Swarm

Docker Swarm takes a group of 'nodes' (VMs) and clusters them so that they behave like a single Docker host. There's a lot of manual setup involved, including installing the Docker Engine on each node, opening a port on each node, and installing TLS certs to secure communication between the said nodes. The Swarm application itself is built to run in its own container, and creating a 'cluster token' for a new swarm cluster is as easy as running docker run swarm create.

At this point, if you're smart, you've probably read this and said to yourself, "Now I have to learn some sort of configuration management software? I was just reading through this series to figure out how I could make my development life easier with Docker! I leave the Puppet/Chef/Ansible stuff to the sysadmins." Don't worry; Docker has your back.
You can provision a Swarm with Machine!

Docker Machine + Swarm

To be fair, this isn't as feature-rich or configurable as a true configuration management solution. The gist of using Machine is that you can use the --swarm flag in concert with --swarm-master, --swarm-discovery token://SWARM_CLUSTER_TOKEN, and a unique HOST_NODE_NAME to automatically provision a Swarm master with docker-machine create. You can do the same thing, minus the --swarm-master flag, to provision the cluster nodes. Then you can use docker-machine env with --swarm HOST_NODE_NAME and a few environment variables to tell the node where to get the TLS cert from and where to look for the Swarm master. A rough sketch of these commands appears at the end of this post.

Holistic Docker

This is largely experimental at this point. If you're looking to do this in production with more than the most basic of tiered applications, stick around for the post on CoreOS and Kubernetes. If you absolutely love Docker, then you shouldn't have to wait long for that first sentence to be wrong. The basic workflow for a wholly-Docker deployment looks like this (and is demonstrated in deploy.sh):

1. Containerize the layers of your application with Dockerfiles.
2. Use those to build the images and push them to a registry.
3. Provision a Swarm cluster using Docker Machine.
4. Use Compose to deploy the containerized application to the cluster.

Important considerations (as of this writing):

Don't use the Swarm cluster to build the container images from the Dockerfiles. Have it pull pre-built images from a registry. This means your compose file shouldn't have a single 'build' entry. The updated docker-compose.yml does this by pulling the Taiga images from Docker Hub, but your own private application containers will need a private registry, either on Docker Hub or Google Cloud Platform (as demonstrated in Part 2) or elsewhere.

Manually schedule services with multiple dependencies to ensure that such a service will have them all living on the same node. And explicitly map your volumes to ensure that you don't get 'port clash'.

Once again, these are a number of caveats for considerations that were beyond the scope of this post, but are necessary for a production Taiga deployment—the three caveats mentioned in Part 3 apply here as well. As mentioned at the beginning of this post, if you want to use that shell script for anything beyond testing, you'll need to configure the Machine driver to use something other than Virtualbox. If you've stuck around thus far, stay tuned for the final part of this speed-climb up the containerization learning curve, where I discuss the non-Docker deployment options.

About the Author

Darwin Corn is a systems analyst for the Consumer Direct Care Network. He is a mid-level professional with diverse experience in the information technology world.
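As promised above, here is a rough sketch of the Machine + Swarm provisioning commands described in this post. The node names (swarm-master, swarm-node-01) are placeholders, and the virtualbox driver should be swapped for the driver matching your own platform:

# Generate a cluster token (runs the swarm image in a throwaway container)
TOKEN=$(docker run --rm swarm create)

# Provision the Swarm master
docker-machine create -d virtualbox \
    --swarm --swarm-master \
    --swarm-discovery token://$TOKEN \
    swarm-master

# Provision a cluster node: the same command, minus --swarm-master
docker-machine create -d virtualbox \
    --swarm \
    --swarm-discovery token://$TOKEN \
    swarm-node-01

# Point the Docker client at the cluster before running Compose
eval $(docker-machine env --swarm swarm-master)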

Auditing Mobile Applications

Packt
08 Jul 2016
48 min read
In this article by Prashant Verma and Akshay Dikshit, authors of the book Mobile Device Exploitation Cookbook, we will cover the following topics:

Auditing Android apps using static analysis
Auditing Android apps using a dynamic analyzer
Using Drozer to find vulnerabilities in Android applications
Auditing iOS application using static analysis
Auditing iOS application using a dynamic analyzer
Examining iOS App Data storage and Keychain security vulnerabilities
Finding vulnerabilities in WAP-based mobile apps
Finding client-side injection
Insecure encryption in mobile apps
Discovering data leakage sources
Other application-based attacks in mobile devices
Launching intent injection in Android

(For more resources related to this topic, see here.)

Mobile applications, like web applications, may have vulnerabilities. These vulnerabilities are in most cases the result of bad programming practices or insecure coding techniques, or may be because of purposefully injected bad code. For users and organizations, it is important to know how vulnerable their applications are. Should they fix the vulnerabilities or keep/stop using the applications?

To address this dilemma, mobile applications need to be audited with the goal of uncovering vulnerabilities. Mobile applications (Android, iOS, or other platforms) can be analyzed using static or dynamic techniques. Static analysis is conducted by employing certain text- or string-based searches across decompiled source code. Dynamic analysis is conducted at runtime and vulnerabilities are uncovered in simulated fashion. Dynamic analysis is difficult as compared to static analysis.

In this article, we will employ both static and dynamic analysis to audit Android and iOS applications. We will also learn various other techniques to audit findings, including Drozer framework usage, WAP-based application audits, and typical mobile-specific vulnerability discovery.

Auditing Android apps using static analysis

Static analysis is the most commonly and easily applied analysis method in source code audits. Static by definition means something that is constant. Static analysis is conducted on the static code, that is, raw or decompiled source code or on the compiled (object) code, but the analysis is conducted without the runtime. In most cases, static analysis becomes code analysis via static string searches. A very common scenario is to figure out vulnerable or insecure code patterns and find the same in the entire application code.

Getting ready

For conducting static analysis of Android applications, we need at least one Android application and a static code scanner. Pick up any Android application of your choice and use any static analyzer tool of your choice. In this recipe, we use Insecure Bank, which is a vulnerable Android application for Android security enthusiasts. We will also use ScriptDroid, which is a static analysis script. Both Insecure Bank and ScriptDroid are coded by Android security researcher, Dinesh Shetty.

How to do it...

Perform the following steps:

1. Download the latest version of the Insecure Bank application from GitHub.
2. Decompress or unzip the .apk file and note the path of the unzipped application.
3. Create a ScriptDroid.bat file by using the following code:

@ECHO OFF
SET /P Filelocation=Please Enter Location:
mkdir %Filelocation%OUTPUT

:: Code to check for presence of Comments
grep -H -i -n -e "//" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_comment.txt"
type "%Filelocation%*.java" | gawk "/\/\*/,/\*\//" >> "%Filelocation%OUTPUT\MultilineComments.txt"
grep -H -i -n -v "TODO" "%Filelocation%OUTPUT\Temp_comment.txt" >> "%Filelocation%OUTPUT\SinglelineComments.txt"
del %Filelocation%OUTPUT\Temp_comment.txt

:: Code to check for insecure usage of SharedPreferences
grep -H -i -n -C2 -e "putString" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\verify_sharedpreferences.txt"
grep -H -i -n -C2 -e "MODE_PRIVATE" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Modeprivate.txt"
grep -H -i -n -C2 -e "MODE_WORLD_READABLE" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Worldreadable.txt"
grep -H -i -n -C2 -e "MODE_WORLD_WRITEABLE" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Worldwritable.txt"
grep -H -i -n -C2 -e "addPreferencesFromResource" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\verify_sharedpreferences.txt"

:: Code to check for possible TapJacking attack
grep -H -i -n -e "<Button" "%Filelocation%..\..\..\..\res\layout\*.xml" >> "%Filelocation%OUTPUT\Temp_tapjacking.txt"
grep -H -i -n -v filterTouchesWhenObscured="true" "%Filelocation%OUTPUT\Temp_tapjacking.txt" >> "%Filelocation%OUTPUT\tapjackings.txt"
del %Filelocation%OUTPUT\Temp_tapjacking.txt

:: Code to check usage of external storage card for storing information
grep -H -i -n -e "WRITE_EXTERNAL_STORAGE" "%Filelocation%..\..\..\..\AndroidManifest.xml" >> "%Filelocation%OUTPUT\SdcardStorage.txt"
grep -H -i -n -e "getExternalStorageDirectory()" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\SdcardStorage.txt"
grep -H -i -n -e "sdcard" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\SdcardStorage.txt"

:: Code to check for possible javascript injection
grep -H -i -n -e "addJavascriptInterface(" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_probableXss.txt"
grep -H -i -n -e "setJavaScriptEnabled(true)" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_probableXss.txt"
grep -H -i -n -v "import" "%Filelocation%OUTPUT\Temp_probableXss.txt" >> "%Filelocation%OUTPUT\probableXss.txt"
del %Filelocation%OUTPUT\Temp_probableXss.txt

:: Code to check for presence of possible weak algorithms
grep -H -i -n -e "MD5" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_weakencryption.txt"
grep -H -i -n -e "base64" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_weakencryption.txt"
grep -H -i -n -e "des" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_weakencryption.txt"
grep -H -i -n -v "import" "%Filelocation%OUTPUT\Temp_weakencryption.txt" >> "%Filelocation%OUTPUT\Weakencryption.txt"
del %Filelocation%OUTPUT\Temp_weakencryption.txt

:: Code to check for weak transportation medium
grep -H -i -n -C3 "http://" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_overhttp.txt"
grep -H -i -n -C3 -e "HttpURLConnection" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_overhttp.txt"
grep -H -i -n -C3 -e "URLConnection" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_OtherUrlConnection.txt"
grep -H -i -n -C3 -e "URL" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_OtherUrlConnection.txt"
grep -H -i -n -e "TrustAllSSLSocket-Factory" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\BypassSSLvalidations.txt"
grep -H -i -n -e "AllTrustSSLSocketFactory" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\BypassSSLvalidations.txt"
grep -H -i -n -e "NonValidatingSSLSocketFactory" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\BypassSSLvalidations.txt"
grep -H -i -n -v "import" "%Filelocation%OUTPUT\Temp_OtherUrlConnection.txt" >> "%Filelocation%OUTPUT\OtherUrlConnections.txt"
del %Filelocation%OUTPUT\Temp_OtherUrlConnection.txt
grep -H -i -n -v "import" "%Filelocation%OUTPUT\Temp_overhttp.txt" >> "%Filelocation%OUTPUT\UnencryptedTransport.txt"
del %Filelocation%OUTPUT\Temp_overhttp.txt

:: Code to check for Autocomplete ON
grep -H -i -n -e "<Input" "%Filelocation%..\..\..\..\res\layout\*.xml" >> "%Filelocation%OUTPUT\Temp_autocomp.txt"
grep -H -i -n -v "textNoSuggestions" "%Filelocation%OUTPUT\Temp_autocomp.txt" >> "%Filelocation%OUTPUT\AutocompleteOn.txt"
del %Filelocation%OUTPUT\Temp_autocomp.txt

:: Code to check for presence of possible SQL Content
grep -H -i -n -e "rawQuery" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "compileStatement" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "db" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "sqlite" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "database" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "insert" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "delete" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "select" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "table" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "cursor" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -v "import" "%Filelocation%OUTPUT\Temp_sqlcontent.txt" >> "%Filelocation%OUTPUT\Sqlcontents.txt"
del %Filelocation%OUTPUT\Temp_sqlcontent.txt

:: Code to check for Logging mechanism
grep -H -i -n -F "Log." "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Logging.txt"

:: Code to check for Information in Toast messages
grep -H -i -n -e "Toast.makeText" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_Toast.txt"
grep -H -i -n -v "//" "%Filelocation%OUTPUT\Temp_Toast.txt" >> "%Filelocation%OUTPUT\Toast_content.txt"
del %Filelocation%OUTPUT\Temp_Toast.txt

:: Code to check for Debugging status
grep -H -i -n -e "android:debuggable" "%Filelocation%..\..\..\..\AndroidManifest.xml" >> "%Filelocation%OUTPUT\DebuggingAllowed.txt"

:: Code to check for presence of Device Identifiers
grep -E -H -i -n -e "uid|user-id|imei|deviceId|deviceSerialNumber|devicePrint|X-DSN|phone|mdn|did|IMSI|uuid" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_Identifiers.txt"
grep -H -i -n -v "//" "%Filelocation%OUTPUT\Temp_Identifiers.txt" >> "%Filelocation%OUTPUT\Device_Identifier.txt"
del %Filelocation%OUTPUT\Temp_Identifiers.txt

:: Code to check for presence of Location Info
grep -E -H -i -n -e "getLastKnownLocation\(\)|requestLocationUpdates\(\)|getLatitude\(\)|getLongitude\(\)|LOCATION" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\LocationInfo.txt"

:: Code to check for possible Intent Injection
grep -H -i -n -C3 -e "Action.getIntent(" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\IntentValidation.txt"

How it works...

Go to the command prompt and navigate to the path where ScriptDroid is placed. Run the .bat file and it prompts you to input the path of the application for which you wish to perform static analysis.
In our case, we provide it with the path of the Insecure Bank application, precisely the path where the Java files are stored. If everything worked correctly, the screen should look like the following:

The script generates a folder by the name OUTPUT in the path where the Java files of the application are present. The OUTPUT folder contains multiple text files, each one corresponding to a particular vulnerability. The individual text files pinpoint the location of vulnerable code pertaining to the vulnerability under discussion.

The combination of ScriptDroid and Insecure Bank gives a very nice view of various Android vulnerabilities; usually the same is not possible with live apps. Consider the following points, for instance:

Weakencryption.txt lists the instances of Base64 encoding used for passwords in the Insecure Bank application
Logging.txt contains the list of insecure log functions used in the application
SdcardStorage.txt contains the code snippets pertaining to the definitions related to data storage in SD cards

Details like these from static analysis are eye-openers in letting us know of the vulnerabilities in our application, without even running the application.

There's more...

The current recipe used just ScriptDroid, but there are many other options available. You can either choose to write your own script or you may use one of the free or commercial tools. A few commercial tools have pioneered the static analysis approach over the years via their dedicated focus. A minimal Python sketch of such a custom scanner follows.
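The following is a rough illustration of how the same string-search idea could be scripted in Python instead of a batch file. The patterns here are only a starter subset of the ScriptDroid checks, and the src_dir path is a placeholder for wherever your decompiled .java files live:

import os
import re

# A starter subset of the ScriptDroid checks; extend as needed
PATTERNS = {
    'weak_encryption': re.compile(r'MD5|base64|\bdes\b', re.IGNORECASE),
    'logging': re.compile(r'Log\.'),
    'external_storage': re.compile(r'getExternalStorageDirectory|sdcard', re.IGNORECASE),
    'webview_js': re.compile(r'addJavascriptInterface|setJavaScriptEnabled\(true\)'),
}

def scan(src_dir):
    # Walk the decompiled sources and flag every line matching a pattern
    for root, _, files in os.walk(src_dir):
        for name in files:
            if not name.endswith('.java'):
                continue
            path = os.path.join(root, name)
            with open(path, errors='ignore') as f:
                for lineno, line in enumerate(f, 1):
                    for label, pattern in PATTERNS.items():
                        if pattern.search(line):
                            print('{}:{}: {}: {}'.format(path, lineno, label, line.strip()))

if __name__ == '__main__':
    scan('InsecureBankv2/src')  # hypothetical path to the decompiled sources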
See also

https://github.com/dineshshetty/Android-InsecureBankv2
Auditing iOS application using static analysis

Auditing Android apps using a dynamic analyzer

Dynamic analysis is another technique applied in source code audits. Dynamic analysis is conducted in runtime. The application is run or simulated and the flaws or vulnerabilities are discovered while the application is running. Dynamic analysis can be tricky, especially in the case of mobile platforms. As opposed to static analysis, there are certain requirements in dynamic analysis, such as the analyzer environment needing to be a runtime or a simulation of the real runtime.

Dynamic analysis can be employed to find vulnerabilities in Android applications which are difficult to find via static analysis. A static analysis may let you know that a password is going to be stored, but dynamic analysis reads the memory and reveals the password stored in runtime. Dynamic analysis can be helpful in tampering with data in transmission during runtime, that is, tampering with the amount in a transaction request being sent to the payment gateway. Some Android applications employ obfuscation to prevent attackers reading the code; dynamic analysis changes the whole game in such cases, by revealing the hardcoded data being sent out in requests, which is otherwise not readable in static analysis.

Getting ready

For conducting dynamic analysis of Android applications, we need at least one Android application and a dynamic code analyzer tool. Pick up any Android application of your choice and use any dynamic analyzer tool of your choice.

The dynamic analyzer tools can be classified under two categories:

The tools which run from computers and connect to an Android device or emulator (to conduct dynamic analysis)
The tools that can run on the Android device itself

For this recipe, we choose a tool belonging to the latter category.

How to do it...

Perform the following steps for conducting dynamic analysis:

1. Have an Android device with applications (to be analyzed dynamically) installed.
2. Go to the Play Store and download Andrubis. Andrubis is a tool from iSecLabs which runs on Android devices and conducts static, dynamic, and URL analysis on the installed applications. We will use it for dynamic analysis only in this recipe.
3. Open the Andrubis application on your Android device. It displays the applications installed on the Android device and analyzes these applications.

How it works...

Open the analysis of the application of your interest. Andrubis computes an overall malice score (out of 10) for the applications and displays a color icon on its main screen to flag potentially vulnerable applications. We selected an orange-colored application to make more sense with this recipe. This is how the application summary and score are shown in Andrubis:

Let us navigate to the Dynamic Analysis tab and check the results:

The results are interesting for this application. Notice that all the files going to be written by the application under dynamic analysis are listed. In our case, one preferences.xml is located. Though the fact that the application is going to create a preferences file could have been found in static analysis as well, dynamic analysis additionally confirms that such a file is indeed created. It also confirms that the code snippet found in static analysis about the creation of a preferences file is not dormant code, but a file that is actually going to be created. Further, go ahead and read the created file and find any sensitive data present there. Who knows, luck may strike and give you a key to hidden treasure.

Notice that the first screen has a hyperlink, View full report in browser. Tap on it and notice that the detailed dynamic analysis is presented for your further analysis. This also lets you understand what the tool tried and what response it got. This is shown in the following screenshot:

There's more...

The current recipe used a dynamic analyzer belonging to the latter category. There are many other tools available in the former category. Since this is the Android platform, many of them are open source tools. DroidBox can be tried for dynamic analysis. It looks for file operations (read/write), network data traffic, SMS, permissions, broadcast receivers, and so on, among other checks. Hooker is another tool that can intercept and modify API calls initiated from the application. This is very useful in dynamic analysis. Try hooking and tampering with data in API calls.

See also

https://play.google.com/store/apps/details?id=org.iseclab.andrubis
https://code.google.com/p/droidbox/
https://github.com/AndroidHooker/hooker

Using Drozer to find vulnerabilities in Android applications

Drozer is a mobile security audit and attack framework, maintained by MWR InfoSecurity. It is a must-have tool in the tester's armory. Drozer (the Android-installed agent) interacts with other Android applications via IPC (Inter Process Communication). It allows fingerprinting of application package-related information and its attack surface, and attempts to exploit those. Drozer is an attack framework and advanced-level exploits can be conducted from it. We use Drozer to find vulnerabilities in our applications.

Getting ready

Install Drozer by downloading it from https://www.mwrinfosecurity.com/products/drozer/ and follow the installation instructions mentioned in the user guide. Install the Drozer console agent and start a session as mentioned in the user guide. If your installation is correct, you should get the Drozer command prompt (dz>).
You should also have a few vulnerable applications to analyze. Here we chose the OWASP GoatDroid application.

How to do it...

Every pentest starts with fingerprinting. Let us use Drozer for the same. The Drozer User Guide is very helpful for referring to the commands. The following command can be used to obtain information about an Android application package:

run app.package.info -a <package name>

We used the same to extract the information from the GoatDroid application and found the following results:

Notice that apart from the general information about the application, User Permissions are also listed by Drozer.

Further, let us analyze the attack surface. Drozer's attack surface lists the exposed activities, broadcast receivers, content providers, and services. Unintentionally exposed components may be a critical security risk and may provide you access to privileged content. Drozer has the following command to analyze the attack surface:

run app.package.attacksurface <package name>

We used the same to obtain the attack surface of the Herd Financial application of GoatDroid, and the results can be seen in the following screenshot. Notice that one Activity and one Content Provider are exposed.

We chose to attack the content provider to obtain the data stored locally. We used the following Drozer command to analyze the content provider of the same application:

run app.provider.info -a <package name>

This gave us the details of the exposed content provider, which we used in another Drozer command:

run scanner.provider.finduris -a <package name>

We could successfully query the content providers. Lastly, we would be interested in stealing the data stored by this content provider. This is possible via another Drozer command:

run app.provider.query content://<content provider details>/

The entire sequence of events is shown in the following screenshot:

How it works...

ADB is used to establish a connection between the Drozer Python server (present on the computer) and the Drozer agent (.apk file installed in the emulator or Android device). The Drozer console is initialized to run the various commands we saw. The Drozer agent utilizes the Android OS feature of IPC to take over the role of the target application and run the various commands as the original application.

There's more...

Drozer not only allows users to obtain the attack surface and steal data via content providers or launch intent injection attacks; it goes way beyond that. It can be used to fuzz the application and cause local injection attacks by providing a way to inject payloads. Drozer can also be used to run various in-built exploits and can be utilized to attack Android applications via custom-developed exploits. Further, it can also run in Infrastructure mode, allowing remote connections and remote attacks.
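For quick reference, the full sequence of Drozer commands used in this recipe is consolidated below. Replace <package name> and the content:// URI with the values discovered on your own target:

dz> run app.package.info -a <package name>
dz> run app.package.attacksurface <package name>
dz> run app.provider.info -a <package name>
dz> run scanner.provider.finduris -a <package name>
dz> run app.provider.query content://<content provider details>/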
See also

Launching intent injection in Android
https://www.mwrinfosecurity.com/system/assets/937/original/mwri_drozer-user-guide_2015-03-23.pdf

Auditing iOS application using static analysis

Static analysis in source code reviews is an easier technique, and employing static string searches makes it convenient to use. Static analysis is conducted on the raw or decompiled source code or on the compiled (object) code, but the analysis is conducted outside of runtime. Usually, static analysis figures out vulnerable or insecure code patterns.

Getting ready

For conducting static analysis of iOS applications, we need at least one iOS application and a static code scanner. Pick up any iOS application of your choice and use any static analyzer tool of your choice. We will use iOS-ScriptDroid, which is a static analysis script, developed by Android security researcher, Dinesh Shetty.

How to do it...

1. Keep the decompressed iOS application files handy and note the path of the folder containing the .m files.
2. Create an iOS-ScriptDroid.bat file by using the following code:

ECHO Running ScriptDroid ...
@ECHO OFF
SET /P Filelocation=Please Enter Location:
:: SET Filelocation=Location of the folder containing all the .m files eg: C:\sourcecode\project iOS\xyz\
mkdir %Filelocation%OUTPUT

:: Code to check for Sensitive Information storage in Phone memory
grep -H -i -n -C2 -e "NSFile" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\phonememory.txt"
grep -H -i -n -e "writeToFile" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\phonememory.txt"

:: Code to check for possible Buffer overflow
grep -E -H -i -n -e "strcat\(|strcpy\(|strncat\(|strncpy\(|sprintf\(|vsprintf\(|gets\(" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\BufferOverflow.txt"

:: Code to check for usage of URL Schemes
grep -E -H -i -n -C2 "openUrl|handleOpenURL" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\URLSchemes.txt"

:: Code to check for possible javascript injection
grep -H -i -n -e "webview" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\probableXss.txt"

:: Code to check for presence of possible weak algorithms
grep -H -i -n -e "MD5" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\tweakencryption.txt"
grep -H -i -n -e "base64" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\tweakencryption.txt"
grep -H -i -n -e "des" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\tweakencryption.txt"
grep -H -i -n -v "//" "%Filelocation%OUTPUT\tweakencryption.txt" >> "%Filelocation%OUTPUT\weakencryption.txt"
del %Filelocation%OUTPUT\tweakencryption.txt

:: Code to check for weak transportation medium
grep -H -i -n -e "http://" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\overhttp.txt"
grep -H -i -n -e "NSURL" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -e "URL" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -e "writeToUrl" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -e "NSURLConnection" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -C2 "CFStream" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -C2 "NSStreamin" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\OtherUrlConnection.txt"
grep -E -H -i -n -e "setAllowsAnyHTTPSCertificate|kCFStreamSSLAllowsExpiredRoots|kCFStreamSSLAllowsExpiredCertificates" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\BypassSSLvalidations.txt"
grep -E -H -i -n -e "kCFStreamSSLAllowsAnyRoot|continueWithoutCredentialForAuthenticationChallenge" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\BypassSSLvalidations.txt"
:: to add check for "didFailWithError"

:: Code to check for presence of possible SQL Content
grep -H -i -F -e "db" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "sqlite" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "database" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "insert" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "delete" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "select" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "table" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "cursor" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "sqlite3_prepare" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "sqlite3_compile" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"

:: Code to check for presence of keychain usage source code
grep -E -H -i -n -e "kSecAttr|SFHFKeychainUtils" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\KeychainUsage.txt"

:: Code to check for Logging mechanism
grep -H -i -n -F "NSLog" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\Logging.txt"
grep -H -i -n -F "XLog" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\Logging.txt"
grep -H -i -n -F "ZNLog" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\Logging.txt"

:: Code to check for presence of password in source code
grep -E -H -i -n -e "password|pwd" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\password.txt"

:: Code to check for Debugging status
grep -H -i -n -e "#ifdef DEBUG" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\DebuggingAllowed.txt"

:: Code to check for presence of Device Identifiers
grep -E -H -i -n -e "uid|user-id|imei|deviceId|deviceSerialNumber|devicePrint|X-DSN|phone|mdn|did|IMSI|uuid" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\Temp_Identifiers.txt"
grep -H -i -n -v "//" "%Filelocation%OUTPUT\Temp_Identifiers.txt" >> "%Filelocation%OUTPUT\Device_Identifier.txt"
del %Filelocation%OUTPUT\Temp_Identifiers.txt

:: Code to check for presence of Location Info
grep -E -H -i -n -e "CLLocationManager|startUpdatingLocation|locationManager|didUpdateToLocation|CLLocationDegrees|CLLocation|CLLocationDistance|startMonitoringSignificantLocationChanges" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\LocationInfo.txt"

:: Code to check for presence of Comments
grep -H -i -n -e "//" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\Temp_comment.txt"
type "%Filelocation%*.m" | gawk "/\/\*/,/\*\//" >> "%Filelocation%OUTPUT\MultilineComments.txt"
grep -H -i -n -v "TODO" "%Filelocation%OUTPUT\Temp_comment.txt" >> "%Filelocation%OUTPUT\SinglelineComments.txt"
del %Filelocation%OUTPUT\Temp_comment.txt

How it works...

Go to the command prompt and navigate to the path where iOS-ScriptDroid is placed. Run the batch file and it prompts you to input the path of the application for which you wish to perform static analysis. In our case, we arbitrarily chose an application and inputted the path of the implementation (.m) files.

The script generates a folder by the name OUTPUT in the path where the .m files of the application are present. The OUTPUT folder contains multiple text files, each one corresponding to a particular vulnerability. The individual text files pinpoint the location of vulnerable code pertaining to the vulnerability under discussion.

iOS-ScriptDroid gives first-hand information about various iOS vulnerabilities present in the current application. For instance, here are a few which are specific to the iOS platform:

BufferOverflow.txt contains the usage of harmful functions with missing buffer limit checks, such as strcat, strcpy, and so on, found in the application.
URL Schemes, if implemented in an insecure manner, may result in access-related vulnerabilities. Usage of URL schemes is listed in URLSchemes.txt.

These are useful vulnerability details to know in iOS applications via static analysis.

There's more...

The current recipe used just iOS-ScriptDroid, but there are many other options available.
You can either choose to write your own script or you may use one of the free or commercial tools available. A few commercial tools have pioneered the static analysis approach over the years via their dedicated focus.

See also

Auditing Android apps using static analysis

Auditing iOS application using a dynamic analyzer

Dynamic analysis is the runtime analysis of the application. The application is run or simulated to discover the flaws during runtime. Dynamic analysis can be tricky, especially in the case of mobile platforms. Dynamic analysis is helpful in tampering with data in transmission during runtime, for example, tampering with the amount in a transaction request being sent to a payment gateway. In applications that use custom encryption to prevent attackers reading the data, dynamic analysis is useful in revealing the encrypted data, which can be reverse-engineered. Note that since iOS applications cannot be decompiled to the full extent, dynamic analysis becomes even more important in finding the sensitive data which could have been hardcoded.

Getting ready

For conducting dynamic analysis of iOS applications, we need at least one iOS application and a dynamic code analyzer tool. Pick up any iOS application of your choice and use any dynamic analyzer tool of your choice.

In this recipe, let us use the open source tool Snoop-it. We will use an iOS app that locks files, which can only be opened using a PIN, pattern, or a secret question and answer to unlock and view the file. Let us see if we can analyze this app and find a security flaw in it using Snoop-it. Please note that Snoop-it only works on jailbroken devices.

To install Snoop-it on your iDevice, visit https://code.google.com/p/snoop-it/wiki/GettingStarted?tm=6. We have downloaded Locker Lite from the App Store onto our device, for analysis.

How to do it...

Perform the following steps to conduct dynamic analysis of iOS applications:

1. Open the Snoop-it app by tapping on its icon.
2. Navigate to Settings. Here you will see the URL through which the interface can be accessed from your machine. Please note the URL, for we will be using it soon. We have disabled authentication for our ease.
3. Now, on the iDevice, tap on Applications | Select App Store Apps and select the Locker app.
4. Press the home button, and open the Locker app. Note that on entering the wrong PIN, we do not get further access.
5. Making sure the workstation and iDevice are on the same network, open the previously noted URL in any browser. This is how the interface will look:
6. Click on the Objective-C Classes link under Analysis in the left-hand panel.
7. Now, click on SM_LoginManagerController. Class information gets loaded in the panel to the right of it.
8. Navigate down until you see -(void) unlockWasSuccessful and click on the radio button preceding it. This method has now been selected.
9. Next, click on the Setup and invoke button on the top-right of the panel. In the window that appears, click on the Invoke Method button at the bottom.
10. As soon as we click on the button, we notice that the authentication has been bypassed, and we can view our locked file successfully.

How it works...

Snoop-it loads all classes that are in the app, and indicates the ones that are currently operational with a green color. Since we want to bypass the current login screen, and load directly into the main page, we look for UIViewController. Inside UIViewController, we see SM_LoginManagerController, which could contain methods relevant to authentication.
On observing the class, we see various methods, such as numberLoginSucceed, patternLoginSucceed, and many others. The app calls the unlockWasSuccessful method when a PIN code is entered successfully. So, when we invoke this method from our machine and the function is called directly, the app loads the main page successfully.

There's more...

The current recipe used just one dynamic analyzer, but other options and tools can also be employed. There are many challenges in doing dynamic analysis of iOS applications. You may like to use multiple tools, and not just rely on one, to overcome the challenges.

See also

https://code.google.com/p/snoop-it/
Auditing Android apps using a dynamic analyzer

Examining iOS App Data storage and Keychain security vulnerabilities

Keychain in iOS is an encrypted SQLite database that uses a 128-bit AES algorithm to hold identities and passwords. On any iOS device, the Keychain SQLite database is used to store user credentials such as usernames, passwords, encryption keys, certificates, and so on. Developers use this service API to instruct the operating system to store sensitive data securely, rather than using a less secure alternative storage mechanism such as a property list file or a configuration file. In this recipe we will be analyzing a Keychain dump to discover stored credentials.

Getting ready

Please follow the given steps to prepare for Keychain dump analysis:

1. Jailbreak the iPhone or iPad.
2. Ensure the SSH server is running on the device (default after jailbreak).
3. Download the keychain_dumper binary from https://github.com/ptoomey3/Keychain-Dumper
4. Connect the iPhone and the computer to the same Wi-Fi network.
5. On the computer, SSH into the iPhone by typing the iPhone IP address, username as root, and password as alpine.

How to do it...

Follow these steps to examine security vulnerabilities in iOS:

1. Copy keychain_dumper onto the iPhone or iPad by issuing the following command:

scp keychain_dumper root@<device ip>:/private/var/tmp

Alternatively, WinSCP can be used on Windows to do the same.

2. Once the binary has been copied, ensure keychain-2.db has read access:

chmod +r /private/var/Keychains/keychain-2.db

This is shown in the following screenshot:

3. Give executable rights to the binary:

chmod 777 /private/var/tmp/keychain_dumper

4. Now, we simply run keychain_dumper:

/private/var/tmp/keychain_dumper

This command will dump all keychain information, which will contain all the generic and Internet passwords stored in the keychain:

How it works...

Keychain in an iOS device is used to securely store sensitive information such as usernames, passwords, and authentication tokens for different applications, along with connectivity (Wi-Fi/VPN) credentials, and so on. It is located on iOS devices as an encrypted SQLite database file at /private/var/Keychains/keychain-2.db.

Insecurity arises when application developers use this feature of the operating system to store credentials, rather than storing them themselves in NSUserDefaults, .plist files, and so on. To provide users the ease of not having to log in every time, and hence saving the credentials in the device itself, the keychain information for every app is stored outside of its sandbox.

There's more...

This analysis can also be performed for specific apps dynamically, using tools such as Snoop-it. Follow the steps to hook Snoop-it to the target app, click on Keychain Values, and analyze the attributes to see their values revealed in the Keychain. More will be discussed in further recipes.
Finding vulnerabilities in WAP-based mobile apps

WAP-based mobile applications are mobile applications or websites that run on mobile browsers. Most organizations create a lightweight version of their complex websites to be able to run easily and appropriately in mobile browsers. For example, a hypothetical company called ABCXYZ may have their main website at www.abcxyz.com, while their mobile website takes the form m.abcxyz.com. Note that the mobile website (or WAP app) is separate from its installable application form, such as an .apk on Android. Since mobile websites run on browsers, it is very logical to say that most of the vulnerabilities applicable to web applications are applicable to WAP apps as well. However, there are caveats to this. Exploitability and risk ratings may not be the same. Moreover, not all attacks may be directly applied or conducted.

Getting ready

For this recipe, make sure to be ready with the following set of tools (in the case of Android):

ADB
WinSCP
Putty
Rooted Android mobile
SSH proxy application installed on the Android phone

Let us see the common WAP application vulnerabilities. While discussing these, we will limit ourselves to mobile browsers only:

Browser cache: Android browsers store cache in two different parts—content cache and component cache. Content cache may contain basic frontend components such as HTML, CSS, or JavaScript. Component cache contains sensitive data like the details to be populated once the content cache is loaded. You have to locate the browser cache folder and find sensitive data in it.

Browser memory: Browser memory refers to the location used by browsers to store data. Memory is usually long-term storage, while cache is short-term. Browse through the browser memory space for various files such as .db, .xml, .txt, and so on. Check all these files for the presence of sensitive data.

Browser history: Browser history contains the list of the URLs browsed by the user. These URLs in GET request format contain parameters. Again, our goal is to locate a URL with sensitive data for our WAP application.

Cookies: Cookies are mechanisms for websites to keep track of user sessions. Cookies are stored locally in devices. The following are the security concerns with respect to cookie usage:

Sometimes a cookie contains sensitive information
Cookie attributes, if weak, may make the application security weak
Cookie stealing may lead to a session hijack

How to do it...

Browser cache: Let's look at the steps that need to be followed with browser cache:

1. Android browser cache can be found at this location: /data/data/com.android.browser/cache/webviewcache/. You can either use ADB to pull the data from webviewcache, or use WinSCP/Putty and connect to the SSH application on rooted Android phones. Either way, you will land up at the webviewcache folder and find arbitrarily named files. Refer to the highlighted section in the following screenshot:
2. Rename the extension of the arbitrarily named files to .jpg and you will be able to view the cache in screenshot format. Search through all files for sensitive data pertaining to the WAP app you are testing.

Browser memory: Like an Android application, the browser also has a memory space under the /data/data folder by the name com.android.browser (default browser). Here is how a typical browser memory space looks:

Make sure you traverse through all the folders to get the useful sensitive data in the context of the WAP application you are looking for.

Browser history: Go to the browser, locate options, navigate to History, and find the URLs present there.

Cookies: The files containing cookie values can be found at /data/data/com.android.browser/databases/webview.db. These DB files can be opened with the SQLite Browser tool and cookies can be obtained.
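Instead of the SQLite Browser GUI, the same database can be inspected from Python's built-in sqlite3 module. The following is a minimal sketch, assuming webview.db has been pulled off the device with adb; note that the cookies table name and column layout vary across Android versions, so treat them as assumptions to verify:

import sqlite3

# webview.db pulled from /data/data/com.android.browser/databases/ via adb
conn = sqlite3.connect('webview.db')

# List the tables first, since schemas differ across Android versions
for (name,) in conn.execute("SELECT name FROM sqlite_master WHERE type='table'"):
    print(name)

# Assumed table/column names; adjust to what the listing above reveals
for row in conn.execute('SELECT name, value, domain FROM cookies'):
    print(row)

conn.close()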
There's more...

Apart from the primary vulnerabilities described here, mainly concerned with browser usage, all other web application vulnerabilities which are related to, or exploited from or within, a browser are applicable and need to be tested:

Cross-site scripting, a result of a browser executing unsanitized harmful scripts reflected by the servers, is very valid for WAP applications.
The autocomplete attribute not turned to off may result in sensitive data being remembered by the browser for returning users. This again is a source of data leakage.
Browser thumbnails and the image buffer are other sources to look for data.

Above all, all the vulnerabilities in web applications which may not relate to browser usage apply. These include OWASP Top 10 vulnerabilities such as SQL injection attacks, broken authentication and session management, and so on. Business logic validation is another important check to bypass. All these are possible by setting a proxy for the browser and playing around with the mobile traffic.

The discussion in this recipe has been around Android, but all of it is fully applicable to the iOS platform when testing WAP applications. The approach, steps to test, and locations would vary, but all the vulnerabilities still apply. You may want to try out the iExplorer and plist editor tools when working with an iPhone or iPad.

See also

http://resources.infosecinstitute.com/browser-based-vulnerabilities-in-web-applications/

Finding client-side injection

Client-side injection is a new dimension to the mobile threat landscape. Client-side injection (also known as local injection) results from injecting malicious payloads into local storage so as to reveal data outside the usual workflow of the mobile application. If ' or '1'='1 is injected into a mobile application's search parameter, where the search functionality is built to search in a local SQLite DB file, this reveals all the data stored in the corresponding table of the SQLite DB; client-side SQL injection is successful. Notice that the payload did not go to the server-side database (which could be Oracle or MSSQL) but to the local database (SQLite) in the mobile. Since the injection point and injectable target are local (that is, the mobile), the attack is called a client-side injection.

Getting ready

To get ready to find client-side injection, have a few mobile applications ready to be audited and have a bunch of tools used in many other recipes throughout this book. Note that client-side injection is not easy to find on account of the complexities involved; many a time you will have to fine-tune your approach as per the first signs of success.

How to do it...

The prerequisite for the existence of a client-side injection vulnerability in mobile apps is the presence of local storage and an application feature which queries the local storage. For the convenience of the first discussion, let us learn client-side SQL injection, which is fairly easy to learn since users already know SQL injection in web apps very well.

Let us take the case of a mobile banking application which stores the branch details in a local SQLite database. The application provides a search feature for users wishing to search for a branch. Now, if a person types in the city as Mumbai, the city parameter is populated with the value Mumbai and the same is dynamically added to the SQLite query. The query builds and retrieves the branch list for Mumbai city. (Usually, purely local features are provided for a faster user experience and network bandwidth conservation.) Now, if a user is able to inject harmful payloads into the city parameter, such as a wildcard character or a SQLite payload to drop a table, and the payload executes, revealing all the details (in the case of a wildcard) or dropping the table from the DB (in the case of a drop table payload), then you have successfully exploited client-side SQL injection.
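The flaw boils down to string concatenation in the local query. The following is a minimal sketch of the vulnerable pattern and its fix, using Python's sqlite3 as a stand-in for the on-device SQLite database (the branches table and city column are illustrative names, not taken from any real application):

import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE branches (city TEXT, address TEXT)')
conn.execute("INSERT INTO branches VALUES ('Mumbai', 'Fort Branch')")
conn.execute("INSERT INTO branches VALUES ('Delhi', 'CP Branch')")

city = "Mumbai' OR '1'='1"  # attacker-controlled input

# Vulnerable: the payload becomes part of the SQL and returns every row
query = "SELECT * FROM branches WHERE city = '" + city + "'"
print(conn.execute(query).fetchall())

# Safe: parameter binding treats the payload as a literal value
print(conn.execute('SELECT * FROM branches WHERE city = ?', (city,)).fetchall())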
Another type of client-side injection, presented in the OWASP Mobile Top 10 release, is local cross-site scripting (XSS). Refer to slide number 22 of the original OWASP PowerPoint presentation here: http://www.slideshare.net/JackMannino/owasp-top-10-mobile-risks. They referred to it as Garden Variety XSS and presented a code snippet wherein SMS text was accepted locally and printed at the UI. If a script was inputted in the SMS text, it would result in local XSS (JavaScript injection).

There's more...

In a similar fashion, HTML injection is also possible. If an HTML file contained in the application's local storage can be compromised to contain malicious code, and the application has a feature which loads or executes this HTML file, HTML injection is possible locally. A variant of the same may result in Local File Inclusion (LFI) attacks. If data is stored in the form of XML files in the mobile, local XML injection can also be attempted. There could be more variants of these attacks possible.

Finding client-side injection is quite difficult and time consuming. It may need both static and dynamic analysis approaches. Most scanners also do not support the discovery of client-side injection.

Another dimension of client-side injection is the impact, which is judged to be low in most cases. There is a strong counter-argument to this vulnerability: if the entire local storage can be obtained easily in Android, then why do we need to conduct client-side injection? I agree with this argument in most cases; since the entire SQLite or XML file can be stolen from the phone, why spend time searching for a variable that accepts a wildcard to reveal the data from the SQLite or XML file? However, you should still look out for this vulnerability, as HTML injection or LFI-style attacks allow malware-corrupted file insertion and hence an impactful attack. Also, there are platforms such as iOS where stealing the local storage is sometimes very difficult. In such cases, client-side injection may come in handy.

See also

https://www.owasp.org/index.php/Mobile_Top_10_2014-M7
http://www.slideshare.net/JackMannino/owasp-top-10-mobile-risks

Insecure encryption in mobile apps

Encryption is one of the misused terms in information security. Some people confuse it with hashing, while others may implement encoding and call it encryption. Symmetric key and asymmetric key are the two types of encryption schemes. Mobile applications implement encryption to protect sensitive data in storage and in transit. While doing audits, your goal should be to uncover weak encryption implementations, the so-called encoding, or other weaker forms, which are implemented in places where proper encryption should have been implemented. Try to circumvent the encryption implemented in the mobile application under audit.
Getting ready

Be ready with a few mobile applications and tools such as ADB and other file and memory readers, decompilers and decoding tools, and so on.

How to do it...

There are multiple types of faulty encryption implementations in mobile applications, and there are different ways to discover each of them:

Encoding (instead of encryption): Many a time, mobile app developers simply implement Base64 or URL encoding in applications (an example of security by obscurity). Such encoding can be discovered by simply doing static analysis. You can use the script discussed in the first recipe of this article for finding such encoding algorithms. Dynamic analysis will help you obtain the locally stored data in encoded format. Decoders for these known encoding algorithms are available freely, and using any of them, you will be able to uncover the original value. Thus, such an implementation is not a substitute for encryption.

Serialization (instead of encryption): Another variation of faulty implementation is serialization. Serialization is the process of conversion of data objects to a byte stream. The reverse process, deserialization, is also very simple and the original data can be obtained easily. Static analysis may help reveal implementations using serialization.

Obfuscation (instead of encryption): Obfuscation also suffers from similar problems; obfuscated values can be deobfuscated.

Hashing (instead of encryption): Hashing is a one-way process using a standard complex algorithm. These one-way hashes suffer from a major problem in that they can be replayed (without needing to recover the original data). Also, rainbow tables can be used to crack the hashes. Like the other techniques described previously, hashing usage in mobile applications can also be discovered via static analysis. Dynamic analysis may additionally be employed to reveal the one-way hashes stored locally.

How it works...

To understand insecure encryption in mobile applications, let us take a live case which we observed.

An example of weak custom implementation:

While testing a live mobile banking application, my colleagues and I came across a scenario where a userid and mpin combination was sent using custom encoding logic. The encoding logic here was based on a predefined character-by-character replacement by another character, as per an in-built mapping. For example:

2 is replaced by 4
0 is replaced by 3
3 is replaced by 2
7 is replaced by =
a is replaced by R
A is replaced by N

As you can notice, there is no logic to the replacement. Until you uncover or decipher the whole in-built mapping, you won't succeed. A simple technique is to supply all possible characters one by one and watch the responses. Let's input userid and PIN as 222222 and 2222 and notice that the converted userid and PIN are 444444 and 4444 respectively, as per the mapping above. Go ahead and keep changing the inputs, and you will recreate the full mapping as used in the application. Now steal the user's encoded data and apply the created mapping, thereby uncovering the original data. This whole approach is nicely described in the article mentioned under the See also section of this recipe.

This is a custom example of faulty implementation pertaining to encryption. Such kinds of faults are often difficult to find in static analysis, especially in the case of difficult-to-reverse apps such as iOS applications. The possibility of automated dynamic analysis discovering this is also low. Manual testing and analysis, along with dynamic or automated analysis, stands a better chance of uncovering such custom implementations.
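To make the replay concrete, here is a tiny sketch of how the recovered character map can be inverted to decode stolen values. The mapping shown is just the handful of pairs listed above; a real engagement would fill in the full table:

# Partial character map recovered by feeding known inputs (illustrative subset)
ENCODE_MAP = {'2': '4', '0': '3', '3': '2', '7': '=', 'a': 'R', 'A': 'N'}
DECODE_MAP = {v: k for k, v in ENCODE_MAP.items()}

def decode(ciphertext):
    # Characters outside the recovered map are left as-is
    return ''.join(DECODE_MAP.get(c, c) for c in ciphertext)

print(decode('4444'))  # -> '2222', the original PIN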
Manual testing and analysis, along with dynamic or automated analysis, stands a better chance of uncovering such custom implementations.

There's more...

Finally, I will share another application we came across. This one used proper encryption: the algorithm was a well-known secure algorithm and the key was strong. Still, the whole encryption process could be reversed. The application made two mistakes, and we combined both of them to break the encryption:

The application code had the standard encryption algorithm in the APK bundle. Not even obfuscation was used to protect, at the very least, the names. We used the simple process of APK to DEX to JAR conversion to uncover the algorithm details.

The application had stored the strong encryption key in a local XML file under the /data/data folder of the Android device. We used ADB to read this XML file and hence obtained the encryption key.

According to Kerckhoffs' principle, the security of a cryptosystem should depend solely on the secrecy of the key. This is how all encryption algorithms are implemented: the key is the secret, not the algorithm. In our scenario, we could obtain the key and knew the name of the encryption algorithm, which was enough to break the strong encryption implementation.

See also

http://www.paladion.net/index.php/mobile-phone-data-encryption-why-is-it-necessary/

Discovering data leakage sources

Data leakage risk worries organizations across the globe, and people have been implementing solutions to prevent data leakage. In the case of mobile applications, we first have to think about what the possible sources or channels of data leakage could be. Once this is clear, devise or adopt a technique to uncover each of them.

Getting ready

As in the other recipes, here you also need a bunch of applications (to be analyzed), an Android device or emulator, ADB, a DEX to JAR converter, Java decompilers, and WinRAR or WinZip.

How to do it...

To identify the data leakage sources, list all the possible sources you can think of for the mobile application under audit. In general, all mobile applications have the following channels of potential data leakage:

Files stored locally
Client-side source code
Mobile device logs
Web caches
Console messages
Keystrokes
Sensitive data sent over HTTP

How it works...

The next step is to uncover data leakage vulnerabilities in these potential channels. Let us look at the seven previously identified common channels:

Files stored locally: By this time, readers are very familiar with this. The data is stored locally in files such as shared preferences, XML files, SQLite DBs, and other files. In Android, these are located inside the application folder under the /data/data directory and can be read using tools such as ADB. In iOS, tools such as iExplorer or SSH can be used to read the application folder.

Client-side source code: The mobile application source code is present locally on the mobile device itself. Developers hardcode data in application source code, and a common mistake is hardcoding sensitive data (either knowingly or unknowingly). From the field, we came across an application which had hardcoded the connection key to the connected PoS terminal. Hardcoded formulas to calculate a certain figure, which should ideally have been present in the server-side code, were found in a mobile app. Database instance names and credentials are also a possibility where the mobile app directly connects to a server datastore.
In Android, the source code is quite easy to decompile via a two-step process: APK to DEX and DEX to JAR conversion. In iOS, the source code of header files can be decompiled up to a certain level using tools such as classdump-z or otool. Once the raw source code is available, a static string search can be employed to discover sensitive data in the code.

Mobile device logs: All devices create local logs to store crash and other information, which can be used to debug or analyze a security violation. Poor coding may put sensitive data in local logs, and hence data can be leaked from here as well. The Android ADB command adb logcat can be used to read the logs on Android devices. If you use the same ADB command for the Vulnerable Bank application, you will notice the user credentials in the logs, as shown in the following screenshot:

Web caches: Web caches may also contain sensitive data related to the web components used in mobile apps. We discussed how to discover this in the WAP recipe earlier in this article.

Console messages: Console messages are used by developers to print messages to the console while application development and debugging is in progress. Console messages, if not turned off when launching the application (go live), may be another source of data leakage. Console messages can be checked by running the application in debug mode.

Keystrokes: Certain mobile platforms have been known to cache keystrokes. Malware or a keystroke logger may take advantage of this and steal a user's keystrokes, making this another data leakage source. Malware analysis needs to be performed to uncover embedded or pre-shipped malware or keystroke loggers within the application. Dynamic analysis also helps.

Sensitive data sent over HTTP: Applications either send sensitive data over HTTP or use a weak implementation of SSL. In either case, sensitive data leakage is possible. Usage of HTTP can be found via static analysis by searching for HTTP strings. Dynamic analysis, capturing the packets at runtime, also reveals whether traffic goes over HTTP or HTTPS. There are various weak SSL implementations and downgrade attacks, which make data vulnerable to sniffing and hence to leakage.

There's more...

The range of data leakage sources is vast, and listing all of them does not seem possible. Sometimes there are application- or platform-specific data leakage sources, which may call for a different kind of analysis:

Intent injection can be used to fire intents to access privileged contents. Such intents may steal protected data, such as the personal information of all the patients in a hospital (under HIPAA compliance).

iOS screenshot backgrounding issues, where iOS applications store screenshots with populated user input data on the iPhone or iPad when the application enters the background. Imagine such screenshots containing a user's credit card details, CVV, expiry date, and so on, found in an application under PCI-DSS compliance.

Malware gives a totally different angle to data leakage.

Note that data leakage is a very big risk that organizations are tackling today. It is not just about financial loss; losses may be intangible, such as reputation damage, or compliance or regulatory violations. Hence, it is very important to identify the maximum possible data leakage sources in the application and rectify the potential leakages.
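Before closing this recipe, here is a small Python sketch illustrating an audit of the first channel (files stored locally). It assumes adb is on the PATH and that the application's data directory is readable (for example, on an emulator or a rooted device); the package name is hypothetical.

import pathlib
import re
import subprocess

PACKAGE = "com.example.vulnerablebank"   # hypothetical package name
DEST = pathlib.Path("appdata")

# Pull the app's local storage (shared preferences, SQLite DBs, XML files).
subprocess.run(["adb", "pull", f"/data/data/{PACKAGE}", str(DEST)], check=True)

# Grep the pulled files for sensitive-looking strings.
pattern = re.compile(rb"password|pin|token|secret", re.IGNORECASE)
for path in DEST.rglob("*"):
    if path.is_file():
        for line in path.read_bytes().splitlines():
            if pattern.search(line):
                print(path, line[:120])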
See also

https://www.owasp.org/index.php/Mobile_Top_10_2014-M4
Launching intent injection in Android

Other application-based attacks in mobile devices

When we talk about application-based attacks, the OWASP Top 10 risks are the first thing that comes to mind. OWASP (www.owasp.org) has a project dedicated to mobile security, which releases the Mobile Top 10. OWASP gathers data from industry experts and ranks the top 10 risks every three years. It is a very good knowledge base for mobile application security. Here is the latest Mobile Top 10, released in the year 2014:

M1: Weak Server Side Controls
M2: Insecure Data Storage
M3: Insufficient Transport Layer Protection
M4: Unintended Data Leakage
M5: Poor Authorization and Authentication
M6: Broken Cryptography
M7: Client Side Injection
M8: Security Decisions via Untrusted Inputs
M9: Improper Session Handling
M10: Lack of Binary Protections

Getting ready

Have a few applications ready to be analyzed, and use the same set of tools we have been discussing till now.

How to do it...

In this recipe, we restrict ourselves to other application attacks. The attacks we have not covered till now in this book are:

M1: Weak Server Side Controls
M5: Poor Authorization and Authentication
M8: Security Decisions via Untrusted Inputs
M9: Improper Session Handling

How it works...

For now, let us discuss the client-side or mobile-side issues for M5, M8, and M9.

M5: Poor Authorization and Authentication. A few common scenarios which can be attacked are:

Authentication implemented at device level (for example, a PIN stored locally)
Authentication bound to poor parameters (such as UDID or IMEI numbers)
An authorization parameter, responsible for access to protected application menus, stored locally

These can be attacked by reading data using ADB, by decompiling the applications and conducting static analysis on them, or by doing dynamic analysis on the outgoing traffic.

M8: Security Decisions via Untrusted Inputs. This one is about IPC. IPC entry points that applications use to communicate with one another, such as Intents in Android or URL schemes in iOS, are vulnerable. If the origination source is not validated, the application can be attacked. Malicious intents can be fired to bypass authorization or steal data; we discuss this in further detail in the next recipe. URL schemes are a way for applications to specify the launch of certain components. For example, the mailto scheme in iOS is used to create a new e-mail. If an application fails to specify the acceptable sources, any malicious application will be able to send a mailto scheme to the victim application and create new e-mails.

M9: Improper Session Handling. From a purely mobile device perspective, session tokens stored in .db files, or OAuth tokens and access-granting strings stored in weakly protected files, are vulnerable. These can be obtained by reading the local data folder using ADB.

See also

https://www.owasp.org/index.php/Projects/OWASP_Mobile_Security_Project_-_Top_Ten_Mobile_Risks

Launching intent injection in Android

Android uses intents to request an action from another application component. A common form of communication is passing an Intent to start a service. We will exploit this fact via an intent injection attack. An intent injection attack works by injecting an intent into an application component to perform a task that is usually not allowed by the application workflow.
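To make the idea concrete before the Drozer-based walkthrough that follows, here is a minimal Python sketch of firing an intent at an activity from the command line through adb (assuming the target activity is exported or otherwise unprotected; the component name is hypothetical and adb is assumed to be on the PATH):

import subprocess

# If an activity is exported (or lacks permission checks), an intent can be
# fired at it directly with the activity manager, bypassing the normal flow.
component = "com.example.app/.ProtectedActivity"   # hypothetical component
subprocess.run(["adb", "shell", "am", "start", "-n", component], check=True)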
For example, suppose an Android application has a login activity which, post successful authentication, gives you access to protected data via another activity. If an attacker can invoke that internal activity directly and access the protected data by passing an Intent, that is an intent injection attack.

Getting ready

Install Drozer by downloading it from https://www.mwrinfosecurity.com/products/drozer/ and following the installation instructions mentioned in the User Guide. Install the Drozer Console Agent and start a session as mentioned in the User Guide. If your installation is correct, you should get a Drozer command prompt (dz>).

How to do it...

You should also have a few vulnerable applications to analyze. Here we chose the OWASP GoatDroid application:

Start the OWASP GoatDroid FourGoats application in the emulator. Browse the application to build an understanding of it. Note that you are required to authenticate by providing a username and password, and post-authentication you can access the profile and other pages. Here is the pre-login screen you get:

Let us now use Drozer to analyze the activities of the FourGoats application. The following Drozer command is helpful:

run app.activity.info -a <package name>

Drozer detects four activities with null permission. Out of these four, ViewCheckin and ViewProfile are post-login activities. Use Drozer to access these two activities directly, via the following command:

run app.activity.start --component <package name> <activity name>

We chose to access the ViewProfile activity; the entire sequence is shown in the following screenshot:

Drozer performs some actions and the protected user profile opens up in the emulator, as shown here:

How it works...

Drozer passed an Intent in the background to invoke the post-login activity ViewProfile. This resulted in the ViewProfile activity performing its action and displaying the profile screen. In this way, an intent injection attack can be performed using the Drozer framework.

There's more...

Android also uses intents for starting a service or delivering a broadcast, so intent injection attacks can likewise be performed on services and broadcast receivers. The Drozer framework can also be used to launch attacks on these app components. Attackers may write their own attack scripts or use different frameworks to launch this attack.

See also

Using Drozer to find vulnerabilities in Android applications
https://www.mwrinfosecurity.com/system/assets/937/original/mwri_drozer-user-guide_2015-03-23.pdf
https://www.eecs.berkeley.edu/~daw/papers/intents-mobisys11.pdf
Implementing Artificial Neural Networks with TensorFlow

Packt
08 Jul 2016
12 min read
In this article by Giancarlo Zaccone, the author of Getting Started with TensorFlow, we will learn about artificial neural networks (ANNs), information processing systems whose operating mechanism is inspired by biological neural circuits. Thanks to their characteristics, neural networks are the protagonists of a real revolution in machine learning systems and, more generally, in the context of Artificial Intelligence.

An artificial neural network possesses many simple processing units variously connected to each other, according to various architectures. If we look at the schema of an ANN, it can be seen that the hidden units communicate with the external layer, both in input and output, while the input and output units communicate only with the hidden layer of the network. Each unit or node simulates the role of the neuron in biological neural networks. A node, called an artificial neuron, performs a very simple operation: it becomes active if the total quantity of signal it receives exceeds its activation threshold, defined by the so-called activation function. If a node becomes active, it emits a signal that is transmitted along the transmission channels up to the other units to which it is connected. A connection point acts as a filter that converts the message into an inhibitory or excitatory signal, increasing or decreasing its intensity according to the connection's individual characteristics. The connection points simulate the biological synapses and have the fundamental function of weighing the intensity of the transmitted signals, multiplying them by weights whose values depend on the connection itself.

ANN schematic diagram

Neural network architectures

The way the nodes are connected and the total number of layers, that is, the levels of nodes between input and output, define the architecture of a neural network. For example, in a multilayer network one can identify the artificial neurons of layers such that:

Each neuron is connected with all those of the next layer
There are no connections between neurons belonging to the same layer
The number of layers and of neurons per layer depends on the problem to be solved

Now we start our exploration of neural network models, introducing the simplest one: the Single Layer Perceptron, also known as Rosenblatt's Perceptron.

Single Layer Perceptron

The Single Layer Perceptron was the first neural network model, proposed in 1958 by Frank Rosenblatt. In this model, the content of the local memory of the neuron consists of a vector of weights, W = (w1, w2, ..., wn). The computation is performed as a sum over the input vector X = (x1, x2, ..., xn), each element of which is multiplied by the corresponding element of the vector of weights; the value provided in output (that is, a weighted sum) is then the input of an activation function. This function returns 1 if the result is greater than a certain threshold, otherwise it returns -1. In the following figure, the activation function is the so-called sign function:

sign(x) = +1 if x > 0, −1 otherwise

It is possible to use other activation functions, preferably non-linear (such as the sigmoid function that we will see in the next section). The learning procedure of the net is iterative: for each learning cycle (called an epoch), it slightly modifies the synaptic weights by using a selected set of examples called the training set. At each cycle, the weights must be modified so as to minimize a cost function, which is specific to the problem under consideration.
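Before turning to TensorFlow, here is a minimal NumPy sketch (not from the original article) of Rosenblatt's perceptron: a weighted sum followed by the sign activation, trained with the classic perceptron learning rule on a toy linearly separable problem (logical AND):

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([-1, -1, -1, 1])          # targets in {-1, +1}

w = np.zeros(2)
b = 0.0
lr = 0.1

for epoch in range(10):                # each pass over the data is an epoch
    for x_i, t_i in zip(X, t):
        y = 1 if np.dot(w, x_i) + b > 0 else -1   # sign activation
        if y != t_i:                   # update weights only on mistakes
            w += lr * t_i * x_i
            b += lr * t_i

print(w, b)   # learned parameters separating the AND classes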
Finally, once the perceptron has been trained on the training set, it can be tested on other inputs (the test set) in order to verify its capacity for generalization.

Schema of Rosenblatt's Perceptron

Let's now see how to implement a single layer neural network for an image classification problem using TensorFlow.

The logistic regression

This algorithm has nothing to do with canonical linear regression; it is an algorithm that allows us to solve supervised classification problems. To estimate the dependent variable, we make use of the so-called logistic function or sigmoid, and it is precisely because of this feature that we call this algorithm logistic regression. The sigmoid function has this pattern: σ(x) = 1 / (1 + e^(-x)). As we can see, the dependent variable takes values strictly between 0 and 1, which is precisely what serves us. In the case of logistic regression, we want our function to tell us the probability that an element belongs to a particular class.

We recall again that supervised learning by the neural network is configured as an iterative process of optimization of the weights; these are modified on the basis of the network's performance on the training set. Indeed, the aim is to minimize the loss function, which indicates the degree to which the behavior of the network deviates from the desired one. The performance of the network is then verified on a test set, consisting of images other than those it was trained on.

The basic steps of training that we're going to implement are as follows:

The weights are initialized with random values at the beginning of the training.
For each element of the training set, the error is calculated, that is, the difference between the desired output and the actual output. This error is used to adjust the weights.
The process is repeated, resubmitting to the network, in a random order, all the examples of the training set, until the error made on the entire training set is less than a certain threshold or until the number of iterations is exhausted.

Let's now see in detail how to implement logistic regression with TensorFlow. The problem we want to solve is, again, to classify images from the MNIST dataset.

The TensorFlow implementation

First of all, we have to import all the necessary libraries:

import input_data
import tensorflow as tf
import matplotlib.pyplot as plt

We use the input_data.read_data_sets function to load the images for our problem:

mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

Then we set the total number of epochs for the training phase:

training_epochs = 25

We must also define the other parameters necessary for model building:

learning_rate = 0.01
batch_size = 100
display_step = 1

Now we move on to the construction of the model.

Building the model

Define x as the input tensor; it represents an MNIST data image of shape 28 x 28 = 784 pixels:

x = tf.placeholder("float", [None, 784])

We recall that our problem consists in assigning a probability value to each of the possible classes of membership (the digits from 0 to 9). At the end of this calculation, we will use a probability distribution, which tells us how confident we are in our prediction. So the output we're going to get will be a tensor with 10 probabilities, each one corresponding to a digit (and of course the sum of the probabilities must be one):

y = tf.placeholder("float", [None, 10])

To assign probabilities to each image, we will use the so-called softmax activation function.
The softmax function is specified in two main steps:

Calculate the evidence that a certain image belongs to a particular class.
Convert the evidence into probabilities of belonging to each of the 10 possible classes.

To evaluate the evidence, we first define the weights input tensor as W:

W = tf.Variable(tf.zeros([784, 10]))

For a given image, we can evaluate the evidence for each class i by simply multiplying the tensor W with the input tensor x. Using TensorFlow, we should have something like this:

evidence = tf.matmul(x, W)

In general, models include an extra parameter representing the bias, which indicates a certain degree of uncertainty; in our case, the final formula for the evidence is:

evidence = tf.matmul(x, W) + b

It means that for every i (from 0 to 9) we have a matrix Wi of 784 elements (28 × 28), where each element j of the matrix is multiplied by the corresponding component j of the input image (784 parts), summed, and added to the corresponding bias element bi. So to define the evidence, we must define the following tensor of biases:

b = tf.Variable(tf.zeros([10]))

The second step is finally to use the softmax function to obtain the output vector of probabilities, namely activation:

activation = tf.nn.softmax(tf.matmul(x, W) + b)

TensorFlow's tf.nn.softmax function provides a probability-based output from the input evidence tensor. Once we have implemented the model, we can proceed to specify the code needed to find the weights W and biases b of the network through the iterative training algorithm. In each iteration, the training algorithm takes the training data, applies the neural network, and compares the result with the expected one.

In order to train our model and to know when we have a good one, we must know how to define the accuracy of our model. Our goal is to try to get values of the parameters W and b that minimize the value of the metric that indicates how bad the model is. Different metrics calculate the degree of error between the desired output and the output on the training data. A common measure of error is the mean squared error, or the squared Euclidean distance. However, there are some research findings that suggest using other metrics for a neural network like this one. In this example, we use the so-called cross-entropy error function. It is defined as follows:

cross_entropy = y * tf.log(activation)

In order to minimize cross_entropy, we can use the following combination of tf.reduce_mean and tf.reduce_sum to build the cost function:

cost = tf.reduce_mean(-tf.reduce_sum(cross_entropy, reduction_indices=1))

Then we must minimize it using the gradient descent optimization algorithm:

optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

Just a few lines of code to build a neural network model!

Launching the session

It's the moment to build the session and launch our neural net model. We define these lists to visualize the training session:

avg_set = []
epoch_set = []

Then we initialize the TensorFlow variables:

init = tf.initialize_all_variables()

Start the session:

with tf.Session() as sess:
    sess.run(init)

As explained, each epoch is a training cycle:

    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples/batch_size)

Then we loop over all batches:

        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)

Fit the training using the batch data:

            sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys})

Compute the average loss, running the cost operation with the given image values (x) and the real output (y):

            avg_cost += sess.run(cost, feed_dict={x: batch_xs, y: batch_ys})/total_batch

During the computation, we display a log per epoch step:

        if epoch % display_step == 0:
            print "Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(avg_cost)
    print "Training phase finished"

Let's get the accuracy of our model. A prediction is correct if the index with the highest y value is the same as in the real digit vector; the mean of correct_prediction gives us the accuracy. We need to run the accuracy function on our test set (mnist.test), using the keys images and labels for x and y:

    correct_prediction = tf.equal(tf.argmax(activation, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print "MODEL accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels})

Test evaluation

We saw the training phase in the preceding sections; for each epoch we printed the relative cost function:

Python 2.7.10 (default, Oct 14 2015, 16:09:02) [GCC 5.2.1 20151010] on linux2
Type "copyright", "credits" or "license()" for more information.
>>> ======================= RESTART ============================
Extracting /tmp/data/train-images-idx3-ubyte.gz
Extracting /tmp/data/train-labels-idx1-ubyte.gz
Extracting /tmp/data/t10k-images-idx3-ubyte.gz
Extracting /tmp/data/t10k-labels-idx1-ubyte.gz
Epoch: 0001 cost= 1.174406662
Epoch: 0002 cost= 0.661956009
Epoch: 0003 cost= 0.550468774
Epoch: 0004 cost= 0.496588717
Epoch: 0005 cost= 0.463674555
Epoch: 0006 cost= 0.440907706
Epoch: 0007 cost= 0.423837747
Epoch: 0008 cost= 0.410590841
Epoch: 0009 cost= 0.399881751
Epoch: 0010 cost= 0.390916621
Epoch: 0011 cost= 0.383320325
Epoch: 0012 cost= 0.376767031
Epoch: 0013 cost= 0.371007620
Epoch: 0014 cost= 0.365922904
Epoch: 0015 cost= 0.361327561
Epoch: 0016 cost= 0.357258660
Epoch: 0017 cost= 0.353508228
Epoch: 0018 cost= 0.350164634
Epoch: 0019 cost= 0.347015593
Epoch: 0020 cost= 0.344140861
Epoch: 0021 cost= 0.341420144
Epoch: 0022 cost= 0.338980592
Epoch: 0023 cost= 0.336655581
Epoch: 0024 cost= 0.334488012
Epoch: 0025 cost= 0.332488823
Training phase finished

As we saw, during the training phase the cost function is minimized. At the end of the test, we show how accurate the model is:

Model Accuracy: 0.9475

Finally, using these lines of code, we can visualize the training phase of the net:

plt.plot(epoch_set, avg_set, 'o', label='Logistic Regression Training phase')
plt.ylabel('cost')
plt.xlabel('epoch')
plt.legend()
plt.show()

Training phase in logistic regression

Summary

In this article, we learned about the implementation of artificial neural networks and the Single Layer Perceptron with TensorFlow. We also learned how to build the model and launch the session.
Delphi Cookbook

Packt
07 Jul 2016
6 min read
In this article by Daniele Teti, author of the book Delphi Cookbook - Second Edition, we will study multithreading. Multithreading can be your biggest problem if you do not handle it with care. One of the fathers of the Delphi compiler used to say:

"New programmers are drawn to multithreading like moths to flame, with similar results." – Danny Thorpe

(For more resources related to this topic, see here.)

In this chapter, we will discuss some of the main techniques for handling single or multiple background threads. We'll talk about shared resource synchronization and thread-safe queues and events. The last three recipes will talk about the Parallel Programming Library introduced in Delphi XE7, and I hope that you will love it as much as I do. Multithreaded programming is a huge topic. So, after reading this chapter, although you will not have become a master of it, you will surely be able to approach the concept of multithreaded programming with confidence and will have the basics to jump on to more specific stuff when (and if) you require it.

Talking with the main thread using a thread-safe queue

Using a background thread and working with its private data is not difficult, but safely bringing information retrieved or elaborated by the thread back to the main thread to show it to the user (as you know, only the main thread can handle the GUI, in VCL as well as in FireMonkey) can be a daunting task. An even more complex task would be establishing generic communication between two or more background threads. In this recipe, you'll see how a background thread can talk to the main thread in a safe manner using the TThreadedQueue<T> class. The same concepts are valid for communication between two or more background threads.

Getting ready

Let's talk about a scenario. You have to show data generated from some sort of device or subsystem, let's say a serial or USB device, a polling query on the database, or a TCP socket. You cannot simply wait for data using TTimer because this would freeze your GUI during the wait, and the wait can be long. You have tried it, but your interface became sluggish… you need another solution! In the Delphi RTL, there is a very useful class called TThreadedQueue<T> that is, as the name suggests, a parametric queue (a FIFO data structure) that can be safely used from different threads. How to use it? In the programming field, there is rarely a single solution valid for all situations, but the following approach is very popular. Feel free to change it if necessary; however, this is the approach used in the recipe code:

Create the queue within the main form.
Create a thread and inject the form's queue into it.
In the thread's Execute method, append all generated data to the queue.
In the main form, use a timer or some other mechanism to periodically read from the queue and display the data on the form.

How to do it…

Open the recipe project called ThreadingQueueSample.dproj. This project contains the main form with all the GUI-related code and another unit with the thread code. The FormCreate event creates the shared queue with the following parameters that will influence the behavior of the queue:

QueueDepth = 100: This is the maximum queue size. If the queue reaches this limit, all push operations will be blocked for a maximum of PushTimeout, after which the Push call will fail with a timeout.

PushTimeout = 1000: This is the timeout in milliseconds that will affect the thread, which in this recipe is the producer of a producer/consumer pattern.
PopTimeout = 1: This is the timeout in milliseconds that will affect the timer when the queue is empty. This timeout must be very short because the pop call is blocking in nature, and you are in the main thread, which should never be blocked for long.

The button labeled Start Thread creates a TReaderThread instance, passing the already created queue to its constructor (this is a particular type of dependency injection called constructor injection). The thread declaration is really simple and is as follows:

type
  TReaderThread = class(TThread)
  private
    FQueue: TThreadedQueue<Byte>;
  protected
    procedure Execute; override;
  public
    constructor Create(AQueue: TThreadedQueue<Byte>);
  end;

While the Execute method simply appends randomly generated data to the queue, note that the Terminated property must be checked often, so the application can terminate the thread and wait a reasonable time for its actual termination. In the following example, if the queue is not empty, the termination is checked at least every 700 ms or so:

procedure TReaderThread.Execute;
begin
  while not Terminated do
  begin
    TThread.Sleep(200 + Trunc(Random(500))); // e.g. reading from an actual device
    FQueue.PushItem(Random(256));
  end;
end;

So far, you've filled the queue. Now, you have to read from the queue and do something useful with the read data. This is the job of a timer. The following is the code of the timer event on the main form:

procedure TMainForm.Timer1Timer(Sender: TObject);
var
  Value: Byte;
begin
  while FQueue.PopItem(Value) = TWaitResult.wrSignaled do
  begin
    ListBox1.Items.Add(Format('[%3.3d]', [Value]));
  end;
  ListBox1.ItemIndex := ListBox1.Count - 1;
end;

That's it! Run the application and see how the data coming from the background thread is read and shown on the main form. The following is a screenshot:

The main form showing data generated by the background thread

There's more…

TThreadedQueue<T> is very powerful and can be used to communicate between two or more background threads in a producer/consumer scheme as well. You can use multiple producers, multiple consumers, or both. The following screenshot shows a popular scheme, used when data is generated faster than it is handled. In this case, you can usually gain speed on the processing side by using multiple consumers.

Single producer, multiple consumers

Summary

In this article we had a look at how to talk to the main thread using a thread-safe queue.