How-To Tutorials

7019 Articles

Create enterprise-grade Angular forms in TypeScript [Tutorial]

Sugandha Lahoti
04 Jul 2018
11 min read
TypeScript is an open-source programming language that adds optional static typing to JavaScript. To give you a flavor of the benefits of TypeScript, here is a very quick look at some of the things it brings to the table:

- A compilation step
- Strong or static typing
- Type definitions for popular JavaScript libraries
- Encapsulation
- Private and public member variables
- Decorators

In this article, we will learn how to build forms with TypeScript. We will cover as much as it takes to build business applications that collect user information. Here is a breakdown of what you should expect from this article:

- Typed form input and output
- Form controls
- Validation
- Form submission and handling

This article is an excerpt from the book TypeScript 2.x for Angular Developers, written by Chris Nwamba.

Creating types for forms

We want to utilize TypeScript as much as possible, as it simplifies our development process and makes our app's behavior more predictable. For this reason, we will create a simple data class to serve as a type for the form values.

First, create a new Angular project to follow along with the examples. Then, use the following command to create a new class:

```
ng g class flight
```

The class is generated in the app folder; replace its content with the following data class:

```typescript
export class Flight {
  constructor(
    public fullName: string,
    public from: string,
    public to: string,
    public type: string,
    public adults: number,
    public departure: Date,
    public children?: number,
    public infants?: number,
    public arrival?: Date,
  ) {}
}
```

This class represents all the values our form (yet to be created) will have. The properties followed by a question mark (?) are optional: TypeScript will throw no errors when those values are not supplied.
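To see what those optional markers buy us, here is a minimal sketch (the argument values are illustrative, not from the book) of how the compiler treats the two kinds of constructor parameters:

```typescript
// Compiles: children, infants, and arrival are optional and may be omitted
const oneWay = new Flight('Ada Lovelace', 'Lagos', 'London', 'One Way', 1, new Date('2018-08-01'));

// Compile-time error: the required departure argument is missing
// const broken = new Flight('Ada Lovelace', 'Lagos', 'London', 'One Way', 1);
```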
Before jumping into creating forms, let's start with a clean slate. Replace the contents of the app.component.html file with the following:

```html
<div class="container">
  <h3 class="text-center">Book a Flight</h3>
  <div class="col-md-offset-3 col-md-6">
    <!-- TODO: Form here -->
  </div>
</div>
```

Run the app and leave it running; you should see the skeleton page at port 4200 of localhost (remember to include Bootstrap).

The form module

Now that we have a contract that we want the form to follow, let's generate the form's component:

```
ng g component flight-form
```

The command also adds the component as a declaration to our App module:

```typescript
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';

import { AppComponent } from './app.component';
import { FlightFormComponent } from './flight-form/flight-form.component';

@NgModule({
  declarations: [
    AppComponent,
    // Component added after being generated
    FlightFormComponent
  ],
  imports: [
    BrowserModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }
```

What makes Angular forms special and easy to use are the functionalities provided out of the box, such as the NgForm directive. Such functionalities are not available in the core browser module but in the forms module. Hence, we need to import it:

```typescript
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
// Import the forms module
import { FormsModule } from '@angular/forms';

import { AppComponent } from './app.component';
import { FlightFormComponent } from './flight-form/flight-form.component';

@NgModule({
  declarations: [
    AppComponent,
    FlightFormComponent
  ],
  imports: [
    BrowserModule,
    // Add the forms module to the imports array
    FormsModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }
```

Simply importing FormsModule and adding it to the imports array is all we need to do.

Two-way binding

Now is the perfect time to start showing some form controls in the browser using the form component. Keeping the state in sync between the data layer (model) and the view can be very challenging, but with Angular it's just a matter of using one directive exposed by FormsModule:

```html
<!-- ./app/flight-form/flight-form.component.html -->
<form>
  <div class="form-group">
    <label for="fullName">Full Name</label>
    <input
      type="text"
      class="form-control"
      [(ngModel)]="flightModel.fullName"
      name="fullName">
  </div>
</form>
```

Angular relies on the name attribute internally to carry out binding; for this reason, the name attribute is required. Pay attention to [(ngModel)]="flightModel.fullName"; it binds a property on the component class to the form. This model will be of the Flight type, which is the class we created earlier:

```typescript
// ./app/flight-form/flight-form.component.ts
import { Component, OnInit } from '@angular/core';
import { Flight } from '../flight';

@Component({
  selector: 'app-flight-form',
  templateUrl: './flight-form.component.html',
  styleUrls: ['./flight-form.component.css']
})
export class FlightFormComponent implements OnInit {
  flightModel: Flight;

  constructor() {
    // departure (and the optional arrival) must be Date values
    // to satisfy the Flight type
    this.flightModel = new Flight('', '', '', '', 0, new Date(), 0, 0, new Date());
  }

  ngOnInit() {}
}
```

The flightModel property is added to the component as a Flight type and initialized with some default values. Include the component in the app HTML, so it can be displayed in the browser:

```html
<div class="container">
  <h3 class="text-center">Book a Flight</h3>
  <div class="col-md-offset-3 col-md-6">
    <app-flight-form></app-flight-form>
  </div>
</div>
```

To see two-way binding in action, use interpolation to display the value of flightModel.fullName. Then, enter a value and watch the interpolated value update live:

```html
<form>
  <div class="form-group">
    <label for="fullName">Full Name</label>
    <input
      type="text"
      class="form-control"
      [(ngModel)]="flightModel.fullName"
      name="fullName">
    {{flightModel.fullName}}
  </div>
</form>
```
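As an aside, the [(ngModel)] "banana-in-a-box" syntax is shorthand for a property binding combined with an event binding. A rough sketch of the expanded equivalent (not from the book) makes the two directions of the data flow explicit:

```html
<!-- Equivalent to [(ngModel)]="flightModel.fullName" -->
<input
  type="text"
  class="form-control"
  [ngModel]="flightModel.fullName"
  (ngModelChange)="flightModel.fullName = $event"
  name="fullName">
```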
More form fields

Let's get hands-on and add the remaining form fields. After all, we can't book a flight just by supplying our names.

The from and to fields will be select boxes listing cities we can fly out of and into. This list of cities will be stored right in our component class, and we can then iterate over it in the template and render it as a select box:

```typescript
export class FlightFormComponent implements OnInit {
  flightModel: Flight;
  // Array of cities
  cities: Array<string> = [
    'Lagos',
    'Mumbai',
    'New York',
    'London',
    'Nairobi'
  ];

  constructor() {
    this.flightModel = new Flight('', '', '', '', 0, new Date(), 0, 0, new Date());
  }
}
```

The array stores a few cities from around the world as strings. Let's now use the ngFor directive to iterate over the cities and display them on the form using a select box:

```html
<div class="row">
  <div class="col-md-6">
    <label for="from">From</label>
    <select id="from" class="form-control" [(ngModel)]="flightModel.from" name="from">
      <option *ngFor="let city of cities" value="{{city}}">{{city}}</option>
    </select>
  </div>
  <div class="col-md-6">
    <label for="to">To</label>
    <select id="to" class="form-control" [(ngModel)]="flightModel.to" name="to">
      <option *ngFor="let city of cities" value="{{city}}">{{city}}</option>
    </select>
  </div>
</div>
```

Neat and clean! Open the browser and each select drop-down, when clicked, shows the list of cities, as expected.

Next, let's add the trip type field (radio buttons), the departure date field, and the arrival date field (both date controls):

```html
<div class="row" style="margin-top: 15px">
  <div class="col-md-5">
    <label style="display: block">Trip Type</label>
    <label class="radio-inline">
      <input type="radio" name="type" [(ngModel)]="flightModel.type" value="One Way"> One way
    </label>
    <label class="radio-inline">
      <input type="radio" name="type" [(ngModel)]="flightModel.type" value="Return"> Return
    </label>
  </div>
  <div class="col-md-4">
    <label for="departure">Departure</label>
    <input type="date" id="departure" class="form-control" [(ngModel)]="flightModel.departure" name="departure">
  </div>
  <div class="col-md-3">
    <label for="arrival">Arrival</label>
    <input type="date" id="arrival" class="form-control" [(ngModel)]="flightModel.arrival" name="arrival">
  </div>
</div>
```

The way data is bound to these controls is very similar to the text and select fields we created previously; the major difference is the type of control (radio buttons and dates).

Lastly, add the number of passengers (adults, children, and infants):

```html
<div class="row" style="margin-top: 15px">
  <div class="col-md-4">
    <label for="adults">Adults</label>
    <input type="number" id="adults" class="form-control" [(ngModel)]="flightModel.adults" name="adults">
  </div>
  <div class="col-md-4">
    <label for="children">Children</label>
    <input type="number" id="children" class="form-control" [(ngModel)]="flightModel.children" name="children">
  </div>
  <div class="col-md-4">
    <label for="infants">Infants</label>
    <input type="number" id="infants" class="form-control" [(ngModel)]="flightModel.infants" name="infants">
  </div>
</div>
```

The passenger fields are all of the number type because we only expect the number of passengers coming onboard from each category.

Validating the form and form fields

Angular greatly simplifies form validation through its built-in directives and state properties. You can use the state properties to check whether a form field has been touched. If it has been touched but violates a validation rule, you can use the ngIf directive to display the associated errors. Let's see an example of validating the full name field:

```html
<div class="form-group">
  <label for="fullName">Full Name</label>
  <input
    type="text"
    id="fullName"
    class="form-control"
    [(ngModel)]="flightModel.fullName"
    name="fullName"
    #name="ngModel"
    required
    minlength="6">
</div>
```

We just added three significant attributes to our form's full name field: #name, required, and minlength. The #name attribute is completely different from the name attribute: the former is a template variable that holds information about this field via the ngModel value, while the latter is the usual form input name attribute.
In Angular, validation rules are passed as attributes, which is why required and minlength appear there. The field is now validated, but there is no feedback to the user about what went wrong. Let's add some error messages to be shown when validation rules are violated:

```html
<div *ngIf="name.invalid && (name.dirty || name.touched)" class="text-danger">
  <div *ngIf="name.errors.required">
    Name is required.
  </div>
  <div *ngIf="name.errors.minlength">
    Name must be at least 6 characters long.
  </div>
</div>
```

The ngIf directive shows these div elements conditionally:

- If the form field has been touched but there's no value in it, the "Name is required" error is shown.
- "Name must be at least 6 characters long" is shown when the field is touched but the content length is less than 6.

In the browser, the required error appears when the field is touched and left empty; once a value is entered but its length is less than 6 characters, the minlength error is shown instead.

Submitting forms

We need to consider a few factors before submitting a form:

- Is the form valid?
- Is there a handler for the form prior to submission?

To make sure that the form is valid, we can disable the Submit button:

```html
<form #flightForm="ngForm">
  <div class="form-group" style="margin-top: 15px">
    <button class="btn btn-primary btn-block" [disabled]="!flightForm.form.valid">
      Submit
    </button>
  </div>
</form>
```

First, we add a template variable called flightForm to the form and then use the variable to check whether the form is valid. If the form is invalid, we disable the button from being clicked.

To handle the submission, add an ngSubmit event to the form. This event fires when the Submit button is clicked:

```html
<form #flightForm="ngForm" (ngSubmit)="handleSubmit()">
  ...
</form>
```

You can now add a class method, handleSubmit, to handle the form submission. A simple log to the console may be enough for this example:

```typescript
export class FlightFormComponent implements OnInit {
  flightModel: Flight;
  cities: Array<string> = [
    ...
  ];

  constructor() {
    this.flightModel = new Flight('', '', '', '', 0, new Date(), 0, 0, new Date());
  }

  // Handler for submission
  handleSubmit() {
    console.log(this.flightModel);
  }
}
```
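As a small optional extension (not covered in the excerpt), the flightForm template variable can be passed to the handler so the form is cleared after submission; Angular's NgForm exposes a resetForm() method for this:

```typescript
import { NgForm } from '@angular/forms';

// Template: <form #flightForm="ngForm" (ngSubmit)="handleSubmit(flightForm)">
handleSubmit(form: NgForm) {
  console.log(this.flightModel);
  form.resetForm(); // clears the values and resets the validation state
}
```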
We discussed collecting user input via forms and covered their most important features: typed inputs, validation, two-way binding, and submission. All of this prepares you to get started with building business applications. If you liked this article, you may want to read the book TypeScript 2.x for Angular Developers to learn about typed DOM events and event handling, among other interesting things to do with TypeScript.

Read next:

- TypeScript 2.9 release candidate is here
- How to install and configure TypeScript
- How to work with classes in TypeScript


Enhancing Image Search with Vector Similarity

Bahaaldine Azarmi, Jeff Vestal
12 Mar 2024
12 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out: sign up today!

This article is an excerpt from the book Vector Search for Practitioners with Elastic, by Bahaaldine Azarmi and Jeff Vestal. Optimize your search capabilities in Elastic by operationalizing and fine-tuning vector search, and enhance your search relevance while improving overall search performance.

Introduction

Vector similarity search plays a crucial role in image search. After images are transformed into vectors, a search query (also represented as a vector) is compared against the database of image vectors to find the most similar matches. This process is known as k-Nearest Neighbor (kNN) search, where "k" represents the number of similar items to retrieve.

Several algorithms can be used for kNN search, including brute-force search and more efficient methods such as the Hierarchical Navigable Small World (HNSW) algorithm (see Chapter 7, Next Generation of Observability Powered by Vectors, for a more in-depth discussion of HNSW). Brute-force search involves comparing the query vector with every vector in the database, which can be computationally expensive for large databases. HNSW, on the other hand, is an optimized algorithm that can quickly find the nearest neighbors in a large-scale database, making it particularly useful for vector similarity search in image search systems.

The tangible benefits of image search are observed across industries. Its flexibility and adaptability make it a tool of choice for enhancing user experiences, ensuring digital security, or even revolutionizing digital content interactions.

Image search in practice

Applications of image search are varied and far-reaching. In e-commerce, for example, reverse image search allows customers to upload a photo of a product and find similar items for sale. In digital forensics, image search can be used to find visually similar images across a database to detect illicit content. It is also used in social media for face recognition, image tagging, and content recommendation.

As we continue to generate and share more visual content, the need for effective and efficient image search technology will only grow. The combination of artificial intelligence, machine learning, and vector similarity search provides a powerful toolkit to meet this demand, powering a new generation of image search capabilities that can analyze and understand visual content.

Traditionally, image search engines use text-based metadata associated with images, such as the image's filename, alt text, and surrounding text context, to understand the content of an image. This approach, however, is limited by the accuracy and completeness of the metadata, and it fails to analyze the actual visual content of the image itself.

Over time, with advancements in artificial intelligence and machine learning, more sophisticated methods of image search have been developed that analyze the visual content of images directly. This technique, known as content-based image retrieval (CBIR), involves extracting feature vectors from images and using these vectors to find visually similar images.

Feature vectors are a numerical representation of an image's visual content. They are generated by applying a feature extraction algorithm to the image. The specifics of the feature extraction process can vary, but in general it involves analyzing the image's colors, textures, and shapes. In recent years, convolutional neural networks (CNNs) have become a popular tool for feature extraction due to their ability to capture complex patterns in image data.

Once feature vectors have been extracted from a set of images, they can be indexed in a database. When a new query image is submitted, its feature vector is compared to the indexed vectors, and the images with the most similar vectors are returned as the search results. The similarity between vectors is typically measured using distance metrics such as Euclidean distance or cosine similarity.
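As a quick illustration of these metrics, here is a minimal NumPy sketch (the toy vectors are illustrative, not from the book; real image embeddings have hundreds of dimensions) comparing two feature vectors with both measures:

```python
import numpy as np

query = np.array([0.12, 0.87, 0.33])   # toy feature vector for the query image
stored = np.array([0.10, 0.91, 0.28])  # toy feature vector for an indexed image

# Euclidean distance: smaller means more similar
euclidean = float(np.linalg.norm(query - stored))

# Cosine similarity: closer to 1.0 means more similar
cosine = float(np.dot(query, stored) /
               (np.linalg.norm(query) * np.linalg.norm(stored)))

print(f"Euclidean distance: {euclidean:.4f}, cosine similarity: {cosine:.4f}")
```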
Despite the impressive capabilities of CBIR systems, there are several challenges in implementing them. For instance, interpreting and understanding the semantic meaning of images is a complex task due to the subjective nature of visual perception. Furthermore, the high dimensionality of image data can make the search process computationally expensive, particularly for large databases.

To address these challenges, approximate nearest neighbor (ANN) search algorithms, such as the HNSW graph, are often used to optimize the search process. These algorithms sacrifice a small amount of accuracy for a significant increase in search speed, making them a practical choice for large-scale image search applications.

With the advent of Elasticsearch's dense vector field type, it is now possible to index and search high-dimensional vectors directly within an Elasticsearch cluster. This functionality, combined with an appropriate feature extraction model, provides a powerful toolset for building efficient and scalable image search systems.

In the following sections, we will delve into the details of image feature extraction, vector indexing, and search techniques. We will also demonstrate how to implement an image search system using Elasticsearch and a pre-trained CNN model for feature extraction. The overarching goal is to provide a comprehensive guide to building and optimizing image search systems using state-of-the-art technology.

Vector search with images

Vector search is a transformative feature of Elasticsearch and other vector stores that enables searching within complex data types such as images. Through this approach, images are converted into vectors that can be indexed, searched, and compared against each other, revolutionizing the way we retrieve and analyze image data. This inherent characteristic of producing embeddings applies to other media types as well. This section provides an in-depth overview of the vector search process with images, including image vectorization, vector indexing in Elasticsearch, kNN search, vector similarity metrics, and fine-tuning the kNN algorithm.

Image vectorization

The first phase of the vector search process involves transforming the image data into a vector, a process known as image vectorization. Deep learning models, specifically CNNs, are typically employed for this task. CNNs are designed to understand and capture the intricate features of an image, such as color distribution, shapes, textures, and patterns. By processing an image through layers of convolutional, pooling, and fully connected nodes, a CNN can represent an image as a high-dimensional vector. This vector encapsulates the key features of the image, serving as its numerical representation.

The output layer of a pre-trained CNN (often referred to as an embedding or feature vector) is often used for this purpose. Each dimension in this vector represents some learned feature from the image.
For instance, one dimension might correspond to the presence of a particular color or texture pattern. The values in the vector quantify the extent to which these features are present in the image.

Figure 1: Layers of a CNN model

As seen in the preceding diagram, these are the layers of a CNN model:

1. Accepts raw pixel values of the image as input.
2. Each layer extracts specific features, such as edges, corners, textures, and so on.
3. Introduces non-linearity, learns from errors, and approximates more complex functions.
4. Reduces the dimensions of feature maps through down-sampling to decrease the computational complexity.
5. Consists of the weights and biases from the previous layers for the classification process to take place.
6. Outputs a probability distribution over classes.

Indexing image vectors in Elasticsearch

Once the image vectors have been obtained, the next step is to index them in Elasticsearch for future searching. Elasticsearch provides a special field type, dense_vector, to handle the storage of these high-dimensional vectors.

A dense_vector field is defined as an array of numeric values, typically floating-point numbers, with a specified number of dimensions (dims). The maximum number of dimensions allowed for indexed vectors is currently 2,048, though this may be increased in the future. It's essential to note that each dense_vector field is single-valued, meaning that it is not possible to store multiple values in one such field.

In the context of image search, each image (now represented as a vector) is indexed into an Elasticsearch document. This can be one vector per document or multiple vectors per document. The vector representing the image is stored in a dense_vector field within the document. Additionally, other relevant information or metadata about the image can be stored in other fields within the same document.
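To make such documents searchable with kNN, the index mapping has to declare the vector field up front. Here is a minimal sketch using the official Python client; the index name, field names, and similarity setting are assumptions for illustration (the clip-ViT-B-32 family of embeddings used later in this excerpt is 512-dimensional):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

# dense_vector mapping sized to the embedding model's output
es.indices.create(
    index="images",
    mappings={
        "properties": {
            "filename": {"type": "keyword"},
            "image_vector": {
                "type": "dense_vector",
                "dims": 512,             # must match the model's embedding size
                "index": True,           # enable approximate kNN (HNSW) search
                "similarity": "cosine",  # assumed similarity metric
            },
        }
    },
)
```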
The full example code can be found in the Jupyter Notebook available in the chapter 5 folder of this book's GitHub repository at https://github.com/PacktPublishing/VectorSearch-for-Practitioners-with-Elastic/tree/main/chapter5, but we'll discuss the relevant parts here.

First, we will initialize a pre-trained model using the SentenceTransformer library. The clip-ViT-B-32-multilingual-v1 model is discussed in detail later in this chapter:

```python
model = SentenceTransformer('clip-ViT-B-32-multilingual-v1')
```

Next, we will prepare the image transformation function:

```python
transform = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    lambda image: image.convert("RGB"),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
```

transforms.Compose() combines the following transformations:

- transforms.Resize(224): Resizes the shorter side of the image to 224 pixels while maintaining the aspect ratio.
- transforms.CenterCrop(224): Crops the center of the image so that the resultant image has dimensions of 224x224 pixels.
- lambda image: image.convert("RGB"): Converts the image to the RGB format. This is useful for grayscale images or images with an alpha channel, as deep learning models typically expect RGB inputs.
- transforms.ToTensor(): Converts the image (in the PIL image format) into a PyTorch tensor. This changes the data from a range of [0, 255] in the PIL image format to a float in the range [0.0, 1.0].
- transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)): Normalizes the tensor image with a given mean and standard deviation for each channel. In this case, the mean and standard deviation for all three channels (R, G, B) are 0.5, which transforms the data range from [0.0, 1.0] to [-1.0, 1.0].

We can use the following code to apply the transform to an image file and then generate an image vector using the model. See the Python notebook for this chapter to run it against actual image files:

```python
from PIL import Image

img = Image.open("image_file.jpg")
image = transform(img).unsqueeze(0)
image_vector = model.encode(image)
```

The vector and other associated data can then be indexed into Elasticsearch for use with kNN search:

```python
# Create document
document = {
    '_index': index_name,
    '_source': {
        "filename": filename,
        "image_vector": vector
    }
}
```

See the complete code in the chapter 5 folder of this book's GitHub repository. With vectors generated and indexed into Elasticsearch, we can move on to searching for similar images.

k-Nearest Neighbor (kNN) search

With the vectors now indexed in Elasticsearch, the next step is to make use of kNN search. You can refer back to Chapter 2, Getting Started with Vector Search in Elastic, for a full discussion of kNN and HNSW search.

As with text-based vector search, when performing vector search with images, we first need to convert our query image to a vector. The process is the same one we used to convert images to vectors at index time. We convert the image to a vector and include that vector in the query_vector parameter of the knn search function:

```python
knn = {
    "field": "image_vector",
    "query_vector": search_image_vector[0],
    "k": 1,
    "num_candidates": 10
}
```

Here, we specify the following:

- field: The field in the index that contains the vector representations of the images we are searching against
- query_vector: The vector representation of our query image
- k: We want only the single closest image
- num_candidates: The number of approximate nearest neighbor candidates to consider on each shard
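Putting this together, a minimal sketch of running the search with the official Python client might look as follows (the "images" index and the es client are assumptions carried over from the mapping sketch above):

```python
# kNN search for the single most similar image
response = es.search(index="images", knn=knn, source=["filename"])

for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["filename"])
```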
With an understanding of how to convert an image to a vector representation and perform an approximate nearest neighbor search, let's discuss some of the challenges.

Challenges and limitations with image search

While vector search with images offers powerful capabilities for image retrieval, it also comes with certain challenges and limitations. One of the main challenges is the high dimensionality of image vectors, which can lead to computational inefficiencies and difficulties in visualizing and interpreting the data.

Additionally, while pre-trained models for feature extraction can capture a wide range of features, they may not always align with the specific features that are relevant to a particular use case. This can lead to suboptimal search results. One potential solution, not limited to image search, is to use transfer learning to fine-tune the feature extraction model on a specific task, although this requires additional data and computational resources.

Conclusion

In conclusion, vector similarity search revolutionizes image retrieval by harnessing advanced algorithms and machine learning. From e-commerce to digital forensics, its impact is profound, enhancing user experiences and content discovery. Leveraging techniques like k-Nearest Neighbor search and Elasticsearch's dense vector field, image search becomes more efficient and scalable. Despite challenges such as high dimensionality and feature alignment, ongoing advancements promise even greater insights into visual data. As technology evolves, so does our ability to navigate and understand the vast landscape of images, ensuring a future of enhanced digital interactions and insights.

Author Bio

Bahaaldine Azarmi, Global VP Customer Engineering at Elastic, guides companies as they leverage data architecture, distributed systems, machine learning, and generative AI. He leads the customer engineering team, focusing on cloud consumption, and is passionate about sharing knowledge to build and inspire a community skilled in AI.

Jeff Vestal has a rich background spanning over a decade in financial trading firms and extensive experience with Elasticsearch. He offers a unique blend of operational acumen, engineering skills, and machine learning expertise. As a Principal Customer Enterprise Architect, he excels at crafting innovative solutions, leveraging Elasticsearch's advanced search capabilities, machine learning features, and generative AI integrations, adeptly guiding users to transform complex data challenges into actionable insights.


OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?

Aarthi Kumaraswamy
16 Oct 2018
8 min read
Wired has managed to do what Congress couldn't: bring together tech industry leaders in the US and ask the pressing questions of our times, in a safe and welcoming space. Just for this, they deserve applause. Yesterday at the Wired 25 summit, Sundar Pichai, Google's CEO, among other things, opened up to Backchannel's editor-in-chief, Steven Levy, about Project Dragonfly for the first time in public. Project Dragonfly is the secretive search engine that Google is allegedly developing to comply with the Chinese rules of censorship. The following is my analysis of why Google is deeply invested in Project Dragonfly.

Google's mission since its inception has been to organize the world's information and to make it universally accessible, as Steven puts it. When asked if this has changed in 2018, Pichai responded that Google's mission remains the same, and so do its founding values. What has changed, however, is the scale of its operations, its user base, and its product portfolio. In effect, this means the company now views everything it does through a wider lens instead of just thinking about its users.

Watch the interview here: https://www.facebook.com/wired/videos/vb.19440638720/178516206400033/?type=2&theater

For Google, China is an untapped source of information

"We are compelled by our mission [to] provide information to everyone, and [China is] 20 percent of the world's population," said Pichai. He believes China is a highly innovative and underserved market that is too big to be ignored. For this reason, according to Pichai at least, Google is obliged to take a long-term view on the subject.

But there are a number of specific reasons that make China compelling to Google right now. China is a huge social experiment at scale, with wide-scale surveillance and monitoring; in other words, data. But with the Chinese government keen to tightly control information about the country and its citizens, it is not necessarily well understood by businesses from outside the country. This means moving into China could be an opportunity for Google to gain a real competitive advantage in a number of different ways. Pichai confirmed that internal tests show Google can serve well over 99 percent of search queries from users in China. This suggests they have a good working prototype ready to launch, should a window of opportunity arise. These lessons can then directly inform Google's decisions about what to do next in China.

What can Google do with all that exclusive knowledge?

Pichai wrote earlier last week to some Senate members, who wanted answers on Project Dragonfly, that Google could have "broad benefits inside and outside of China." He did not go into detail, but these benefits are clear: Google would gain insight into a huge country that tightly controls information about itself and its citizens.

Helping Google to expand into new markets

By extension, this would bring a number of huge commercial advantages when it comes to China. It would place Google in a fantastic position to make China another huge revenue stream. Secondly, the data harvested in the process could provide a massive and critical boost to Google's AI research, products, and tooling ecosystems that others like Facebook don't have access to. The less obvious but possibly even bigger benefits for Google are the wider applications of its insights. These will be particularly useful as it seeks to make inroads into other rapidly expanding markets such as India, Brazil, and the African continent.
Helping Google to consolidate its strength in western nations

As well as helping Google expand, it's worth noting that Google's Chinese venture could support the company as it seeks to consolidate and reassert itself in the west. Here, markets are not growing quickly, but Google could do more to advance its position within these areas using what it learns from business and product innovations in China.

The caveat: moral ambivalence is a slippery slope

Let's not forget that the first step into moral ambiguity is always the hardest. Once Google enters China, the route into murky and morally ambiguous waters gets easier. Arguably, this move could change the shape of Google as we know it. While the company may not care if it makes a commercial impact, the wider implications for how tech companies operate across the planet could be huge.

How is Google rationalizing the decision to re-enter China?

Letting a billion flowers bloom and wither to grow a global forest seems to be at the heart of Google's decision to deliberately pursue China's market. Here are some of the ways Google has been justifying its decision.

We never left China

When asked why Google has decided to go back to China after exiting the market in 2010, Pichai clarified that Google never left China; it only stopped providing search services there. Android, for example, has become one of the most popular mobile OSes in China over the years. He might as well have said, "I already have a leg in the quicksand, might as well dip the other one." Instead of assessing the reasons to stay in China through the lens of its AI principles, Google is jumping into the state censorship agenda.

Being legally right is morally right

"Any time we are working in countries around the world, people don't understand fully, but you're always balancing a set of values... Those values include providing access to information, freedom of expression, and user privacy... But we also follow the rule of law in every country," said Pichai in the Wired 25 interview. This seems to imply that Google sees legal compliance as analogous to ethical practice. While the AI principles at Google should have guided it in situations precisely like this one, they have been reduced to an oversimplified "don't create killer AI" tenet. Just this Tuesday, China passed a law that is explicit about how it intends to use technology to implement extreme measures to suppress free expression and violate human rights. Google is choosing to turn a blind eye to how its technology could be used to indirectly achieve such nefarious outcomes in an efficient manner.

We aren't the only ones doing business in China

Another popular line of reasoning, though not one offered by Google, is that it is unfair to single out Google and ask it to not do business in China when others like Apple have been benefiting from such relationships for years. But just because everyone is doing something does not make it intrinsically right. For a company known for challenging the status quo and standing by its values, this marks the day Google lost its credentials to talk about doing the right thing.

Time and tech wait for none; if we don't participate, we will be left behind

Pichai said, "Technology ends up progressing whether we want it to or not. I feel on every important technology it is important that you work aggressively to make sure the outcome is good." Now that is a typical engineering response to a socio-philosophical problem.
It reeks of the hubris that most tech executives in Silicon Valley wear as a badge of honor.

We're making information universally accessible and thus enriching lives

Pichai observed that in China there are many areas, such as cancer treatment options, where Google can provide better and more authentic information than what is currently available. I don't know about you, but when an argument leans on cancer to win its case, I typically disregard it.

All things considered, in the race for AI domination, China's data is the holy grail. An invitation to watch and learn from close quarters is an offer too good to refuse, even for Google. Even as current and former employees, human rights advocacy organizations, and Senate members continue to voice their dissent strongly, Google is sending a clear message that it isn't going to back down on Project Dragonfly.

The only way to stop this downward moral spiral at this point appears to be us, Google's current users, as the last line of defense to protect human rights, freedom of speech, and other democratic values. That gives me a sinking feeling as I type this post in Google Docs, using Chrome and Google Search to gather information just the way I have been doing for years now. Are we doomed to a dystopian future, locked in by tech giants that put growth over stability, viral ads over community, and censorship and propaganda over truth and free speech? Welcome to 1984.


Firefox Nightly browser: Debugging your app is now fun with Mozilla’s new ‘time travel’ feature

Natasha Mathur
30 Jul 2018
3 min read
Earlier this month, Mozilla announced a fancy new feature called "time travel debugging" for its Firefox Nightly web browser at JSConf EU 2018. With time travel debugging, you can easily track down the bugs in your code or app, as it lets you pause and rewind to the exact moment when your app broke.

Time travel debugging is particularly useful for local web development, where it allows you to pause and step forward or backward, pause and rewind to a previous state, rewind to the time a console message was logged, and rewind to the time when an element had a certain style. It is also great for cases where you might want to save user recordings or view a test recording when a test fails.

With time travel debugging, you can record a tab in your browser and later replay it using WebReplay, an experimental project that allows you to record, rewind, and replay processes for the web. According to Jason Laster, a senior software engineer at Mozilla, "with time travel, we have a full recording of time, you can jump to any point in the path and see it immediately, you don't have to refresh or re-click or pause or look at logs". There is a video from JSConf of Jason Laster talking about the potential of time travel debugging.

He also mentioned that time travel is "not a new thing" and that he was inspired by Dan Abramov, the creator of Redux, who, when showcasing Redux at JSConf EU, spoke of wanting "time travel" to "reduce his actions over time". With Redux, you get a slider that shows you all the actions over time, and as you move it, you see the UI update as well. In fact, Mozilla rebuilt its debugger using React and Redux in order to support the time travel feature. The debugger comes equipped with the Redux dev tools, which show a list of all the actions for the debugger; the dev tools show you the state of the app, the sources, and the pause data.

Finally, Laster added that "this is just the beginning" and that "they hope to pull this off well in the future". To use the new time travel debugging feature, you must first install the Firefox Nightly browser. For more details on the new feature, check out the official documentation.

Read next:

- Mozilla is building a bridge between Rust and JavaScript
- Firefox has made a password manager for your iPhone
- Firefox 61 builds on Firefox Quantum, adds Tab Warming, WebExtensions, and TLS 1.3


How far will Facebook go to fix what it broke: Democracy, Trust, Reality

Aarthi Kumaraswamy
24 Sep 2018
19 min read
Facebook, along with other tech media giants like Twitter and Google, broke the democratic process in 2016. Facebook also broke the trust of many of its users, as scandal after scandal surfaced telling the same story in different ways: the story of user data and trust abused in exchange for growth and revenue.

The week before last, Mark Zuckerberg posted a long explanation on Facebook titled "Preparing for Elections". It is the first in a series of reflections by Zuckerberg that address "the most important issues facing Facebook". The post explores what Facebook is doing to avoid ending up in a situation similar to the 2016 elections, when the platform "inadvertently" became a super-effective channel for election interference of various kinds. It follows just weeks after Facebook COO Sheryl Sandberg appeared before a Senate Intelligence hearing, alongside Twitter CEO Jack Dorsey, on the topic of social media's role in election interference.

Zuckerberg's mobile-first rigor oversimplifies the issues

Zuckerberg opened his post with a strong commitment to addressing the issues plaguing Facebook with the highest level of rigor the company has known in its history. He wrote, "I am bringing the same focus and rigor to addressing these issues that I've brought to previous product challenges like shifting our services to mobile."

To understand the weight of this statement, we must go back to how Facebook became a mobile-first company that beat investor expectations wildly. Suffice it to say, it went through painful years of restructuring and reorientation in the process. If you are unfamiliar with that phase of Facebook, please read the section "How far did Facebook go to become a mobile-first company?" at the end of this post.

To be fair, Zuckerberg does acknowledge that pivoting to mobile was a lot easier than what it will take to tackle the current set of challenges. He writes, "These issues are even harder because people don't agree on what a good outcome looks like, or what tradeoffs are acceptable to make. When it comes to free expression, thoughtful people come to different conclusions about the right balances. When it comes to implementing a solution, certainly some investors disagree with my approach to invest so much on security. We have a lot of work ahead, but I am confident we will end this year with much more sophisticated approaches than we began, and that the focus and investments we've put in will be better for our community and the world over the long term."

However, what Zuckerberg does not acknowledge in the above statement is that the current set of issues is not merely a product challenge but a business ethics and sustainability challenge. Unless an honest-look-in-the-mirror kind of analysis is done on that side of Facebook, any level of product improvement will only result in cosmetic changes and end in an "operation successful, patient dead" scenario.

In the coming sections, I attempt to dissect Zuckerberg's post in the context of the above points, reading between the lines to see how serious the platform really is about changing its ways to "be better for our community and the world over the long term".

Why does Facebook's commitment to change feel hollow?

Let's focus on election interference in this analysis, as Zuckerberg limits his views to this topic in his post. Facebook has been at the center of this story on many levels. Here is some context on where Zuckerberg is coming from.
Facebook's involvement in the 2016 election meddling

Apart from the traditional cyber-attacks (which Facebook had, even back then, managed to prevent successfully), Russia-backed coordinated misinformation campaigns were found on the platform. Then there was the misuse of user data by the data analytics firm Cambridge Analytica, which consulted on election campaigning. It micro-profiled users based on their psychographics (the way they think and behave) to ensure more effective ad spending by political parties. There was also the issue of certain kinds of ads, subliminal messages, and peer pressure sent out to specific Facebook users during the elections to prompt them to vote for certain candidates, while others did not receive similar messages. There were also alleged reports of a certain set of users having been sent "dark posts" (posts that aren't publicly visible to all, but only to those on a target list) to discourage them from voting altogether. It also appears that Facebook staff offered both the Clinton and Trump campaigns assistance with Facebook advertising; the former declined the offer while the latter accepted.

We don't know which of the above, and to what extent each of them, impacted the outcome of the 2016 US presidential elections. But one thing is certain: collectively, they had a significant enough impact for Zuckerberg and his team to acknowledge that these are serious problems that need addressing now.

Deconstructing Zuckerberg's "Preparing for Elections"

Before diving into what is problematic about the measures taken (or not taken) by Facebook, I must commend the company for taking ownership of its role in past election interference and for attempting to right those wrongs. I like that Zuckerberg has made himself vulnerable by sharing his corrective plans with the public while they are a work in progress, and that he is engaging with the public at a personal level. Facebook's openness to academic research using anonymized Facebook data, and its willingness to permit publishing findings without Facebook's approval, is also noteworthy. Other initiatives, such as the political ad transparency report, the AI-enabled reduction of fake accounts and fake news, doubling the content moderator base, and improving recommendation algorithms, are all steps in the right direction.

However, this is where my list of nice things to say ends. The overall tone of Zuckerberg's post is one of bargaining rather than acceptance. Interestingly, this was exactly the tone adopted by Sandberg in the Senate hearing earlier this month, down to some very similar phrases. It makes one question whether everything isn't just one well-orchestrated PR disaster management plan. Disappointingly, most of the actions stated in Zuckerberg's post feel like half-measures; I get the sense that they aren't willing to go the full distance to achieve the objectives they set for themselves. I hope to be wrong.

1. Zuckerberg focuses too much on the "what" and "how", ignoring the "why"

Zuckerberg identifies three key issues he wants to address in 2018: preventing election interference, protecting the community from abuse, and giving users better control over their information. This clarity is a good starting point. In his post, he only focuses on the first issue, so I will reserve my detailed thoughts on the other two for now.
What I will say for now is that the key to addressing all of Facebook's issues is taking a hard look at its policies, including privacy, from a mission statement perspective. In other words, being honest about why Facebook exists. Users are annoyed, advertisers are not satisfied, and shareholders are not confident about Facebook's future. Trying to be everyone's friend is clearly not working for Facebook. As such, I expected this honesty in the opening part of the series. "Be better for our community and the world over the long term" is too vague a mission statement to be of any practical use.

2. The political ad transparency report is necessary, but not sufficient

In May this year, Facebook released its first political ad transparency report as a gesture of its commitment to minimizing political interference. The report lets one see who sponsored which issue advertisement and for how much. This move was unanimously welcomed, and soon others like Twitter and Google followed suit. By doing this, Facebook hopes to allow its users to form more informed views about political causes and other issues.

Here is my problem with this feature. (Yes, I view this report as a "feature" of the new Facebook app, one which serves a very specific need: satisfying regulators and the media.) The average Facebook user is not a politically or technologically savvy consumer. They use Facebook to connect with friends and family and maybe play silly games now and then. The majority of these users aren't going to proactively check the ad transparency report or the political ad database and arrive at the right conclusions. The people who will find this report interesting are academic researchers, campaign managers, and analysts. For them, it is one more rich data point for understanding campaign strategy and thereby inferring who the target audience is. This could well lead to a downward spiral of more and more polarizing ads from parties across the spectrum.

3. How election campaigning, hate speech, and real violence are linked but unacknowledged

Another issue closely tied to political ads is hate speech and violence-inciting, polarizing content that isn't necessarily a paid ad: posts, images, or videos published in response to political ads or discourse. These act as carriers that amplify the political message, often in ways the campaigners themselves never intended.

The echo chambers still exist. The more one's ecosystem or "look-alike audience" responds to certain types of ads or posts, the more likely users are to keep seeing them, thanks to Facebook's algorithms. Seeing something endorsed by one's friends often primes one to trust what is said without verifying the facts, enabling fake news to go viral; the algorithm does the rest to ensure everyone who will engage with the content sees it. Newsy political ads will thrive in such a setup, while Facebook gets away with saying "we made full disclosure in our report". All of this is great for Facebook's platform, as it not only gets great engagement from the content but also increased ad spending from all political parties, since none of them can afford to be missing in action on Facebook. A by-product of this ultra-polarized scenario, though, is more protectionism and less free, open, and meaningful dialog and debate between candidates as well as supporters on the platform. That's bad news for the democratic process.
4. Facebook's election interference prevention model is not scalable

Facebook's single-minded focus on eliminating US election interference on its platforms through a multipronged content moderation approach is worth appreciating. It also makes one optimistic about Facebook consciously attempting to do the right thing by the election processes of other nations. But the current approach of creating an "election war room" is neither scalable nor sustainable. What happens every time a US constituency holds an election, or some other part of the world does? What happens when multiple elections take place across the world simultaneously? Whom does Facebook prioritize for election interference defense support, and why? Nor would I trust Facebook to uphold individual liberties in troubled nations with strong regimes or deeply divisive political discourse. What happens when the ruling party is the one interfering with the elections? Who is Facebook answerable to?

5. Facebook's headcount hasn't kept up with its own growth ambitions

Zuckerberg proudly states in his post that Facebook has deleted a billion fake accounts with machine learning and has doubled the number of people working on safety and security:

"With advances in machine learning, we have now built systems that block millions of fake accounts every day. In total, we removed more than one billion fake accounts -- the vast majority within minutes of being created and before they could do any harm -- in the six months between October and March. ...it is still very difficult to identify the most sophisticated actors who build their networks manually one fake account at a time. This is why we've also hired a lot more people to work on safety and security -- up from 10,000 last year to more than 20,000 people this year."

"People working on safety and security" could cover a wide range of job responsibilities, from network security engineers to security guards hired at Facebook offices. Conspicuously missing from this picture is a breakdown of the number of people hired specifically to fact-check, moderate content, resolve policy-related disputes, and review flagged content. With billions of users posting on Facebook, the job of content moderators and policy enforcers, even when assisted by algorithms, is massive. It is important that they are rightly incentivized to do their job well and are set clear, measurable goals. The post talks about neither how Facebook plans to reward moderators nor what the yardsticks for performance in this area would be. Facebook fails to acknowledge that it is not fully prepared, partly because it is understaffed.

6. The new "Product Policy Director, Human Rights" role is a glorified public relations job

The weekend following Zuckerberg's post, a new opening appeared on Facebook's careers page for the position of "Product Policy Director, Human Rights". The snippet below is taken from that job posting.

Source: Facebook careers

The responsibilities cited are typically what a public relations head does as well. Not only are they heavily based on communication and public perception building, there is not much authority given to this role to influence how other teams achieve their goals. Simply put, this role "works with, coordinates, or advises teams"; it does not "guide or direct teams".
Also, another key point to observe is that this role aims to add another layer of distance that further minimizes public exposure for Zuckerberg, Sandberg, and other key executives in forums such as congressional hearings or press meets. Any role or area that is important to a business typically finds a place at the C-suite table. Had this new role been a C-suite role, it would have been advertised as such, and it might have had some teeth. Of the 24 key executives at Facebook, only one is concerned with privacy and policy: the "Chief Privacy Officer & VP of U.S. Public Policy". Even this role does not have a global directive or public welfare in mind. On the other hand, there are multiple product development, creative, and business development roles in Facebook's C-suite. There is even a separate Watch product head, a messaging product head, and one dedicated just to China, the "Head of Creative Shop - Greater China".

This is why Facebook's plan to protect elections will fail

I am afraid Facebook's greatest strength is also its Achilles heel. The tech industry's deified hacker culture is embodied perfectly by Facebook, and Facebook's flawed ad-revenue-based business model is the ingenious creation of that very hacker culture. Any attempt to correct everything else is futile without correcting the issues with the current model. The ad revenue model is why the Facebook app is designed the way it is: with "relevant" news feeds, filter bubbles, and look-alike audience segmentation. It is the reason viral content gets rewarded irrespective of its authenticity or the impact it has on society. It is also the reason Facebook has a "move fast and break things" internal culture where growth at all costs is favored and idolized.

Facebook's Q2 2018 earnings summary highlights the above points succinctly.

Source: Facebook's SEC filing

The above snapshot means that even if we assume all 30,000-odd employees do some form of content moderation (the probability of which is zero), every employee is responsible for the content of roughly 50,000 users daily. Let's say every user posts only once a day. Even if Facebook's news feed algorithms are super efficient and flag only 2% of user content as questionable or fake (as Sandberg speculated in her Senate hearing this month), that would still mean nearly 1,000 posts per person to review every day!

What can Facebook do to turn over a new leaf?

Unless Facebook sincerely attempts at least some of the following, I will continue to be skeptical of any number of beautifully written posts by Zuckerberg or patriotically orated speeches by Sandberg:

- A content moderation transparency report that shares not just the number of posts moderated and the number of people working on moderation, but also the nature of the content moderated, the moderators' job satisfaction levels, their tenure, qualifications, career aspirations, and challenges, and how much Facebook is investing in people, processes, and technology to make its platform safe and objective for everyone to engage with others.
- A general ad transparency report that lists not only advertisers on Facebook but also their spending and chosen ad filters, available for the public and academia to review or analyze at any time.
- Taking responsibility for the real-world consequences of actions enabled by Facebook, like the recent gender and age discrimination in employment ads shown on Facebook.
- Really banning hate speech and fake viral content.
- Bringing in a business/AI ethics head who is second only to Zuckerberg and equal in standing to Sandberg’s COO role.
- Exploring and experimenting with other alternative revenue channels to tackle the problem with the current ad-driven business model.
- Resolving the UI problem so that users can regain control over their data, and making it easy to opt out of Facebook’s data experiments. This would mean a potential loss of some ad revenue.
- Addressing the ‘growth hacker’ culture problem that is a byproduct of years of moving fast and breaking things. This would mean a significant change in behavior by everyone, starting from the top, and probably restructuring the way teams are organized and business is done. It would also mean a different definition and measurement of success, which could lead to shareholder backlash. But Mark is uniquely placed to withstand these pressures given his clout over the board's voting powers.

Like his role model Augustus Caesar, Zuckerberg has a chance to make history. But he might have to put the company through hard and sacrificing times in exchange for the proverbial 200 years of world peace. He’s got the best minds and limitless resources at his disposal to right what he and his platform wronged. But he would have to make enemies of the hands that feed him. Would he rise to the challenge? Like Augustus, who is rumored to have killed his grandson, will Zuckerberg ever be prepared to kill his ad-revenue-generating brainchild?

In the meantime, we must not underestimate the power of good digital citizenry. We must continue to fight the good fight to move tech giants like Facebook in the right direction. Just as persistently trickling water droplets can erode mountains and create new pathways, so can our mindful actions as digital platform users prompt major tech reforms. It could be as bold as deleting one's Facebook account (I haven’t been on the platform for years now, and I don’t miss it at all). You could organize groups to create awareness of topics like digital privacy, fake news, and filter bubbles, or deliberately choose to engage with those whose views differ from yours to understand their perspective, thereby doing your part in reversing algorithmically accentuated polarity. It could also be by selecting the right individuals to engage in informed dialog with tech conglomerates. Not every action needs to be hard, though. It could be as simple as customizing your default privacy settings, choosing to spend only a select amount of time on such platforms, or deciding to verify the authenticity and assess the toxicity of a post you wish to like, share, or forward to your network.

Addendum

How far did Facebook go to become a mobile-first company?

Following are some of the things Facebook did to become the largest mobile advertising platform in the world, surpassing Google by a huge margin.

- Clear purpose and reason for the change: “For one, there are more mobile users. Second, they’re spending more time on it... third, we can have better advertising on mobile, make more money,” said Zuckerberg at TechCrunch Disrupt back in 2012 on why they were becoming mobile-first. In other words, there was a lot of growth and revenue potential in investing in this space. This was a simple and clear ‘what’s in it for me’ incentive for everyone working to make the transition, as well as for stockholders and advertisers to place their trust in Zuckerberg’s endeavors.
- Setting company-wide accountability: “We realigned the company around, so everybody was responsible for mobile,” said the then President of Business and Marketing Partnerships, David Fischer, to Fortune in 2013.
- Willing to sacrifice desktop for mobile: Facebook made a bold gamble, risking its desktop users to grow its unproven mobile platform. Essentially, it was willing to bet its only cash cow on a dark horse that depended on many other factors going right.
- Strict consequences for non-compliance: Back in the days of transitioning to a mobile-first company, Zuckerberg famously told all his product teams coming in for reviews: “Come in with mobile. If you come in and try to show me a desktop product, I’m going to kick you out. You have to come in and show me a mobile product.”
- Expanding resources and investing in reskilling: They went from a team of 20 mobile engineers to having virtually all engineers at Facebook undergo training courses on iOS and Android development. “We’ve completely changed the way we do product development. We’ve trained all our engineers to do mobile first,” said Facebook’s VP of corporate development, Vaughan Smith, to TechCrunch by the end of 2012.
- Realigning product design philosophy: They designed custom features for the mobile-first interface instead of trying to adapt web features to mobile. In other words, they began with mobile as their default user interface.
- Local and global user behavior sensitization: Some of their engineering teams even did field visits to developing nations like the Philippines to see firsthand how mobile apps are used there.
- Environmental considerations in app design: Facebook even had the foresight to consider scenarios where mobile users may have poor-quality internet signals or poor-quality mobile batteries, and designed its apps keeping these needs in mind.

OpenCV 4.0 releases with experimental Vulkan, G-API module and QR-code detector among others
Natasha Mathur
21 Nov 2018
2 min read
Two months after the OpenCV team announced the alpha release of OpenCV 4.0, the final version 4.0 of OpenCV is here. OpenCV 4.0 was announced last week and is now available as a C++11 library that requires a C++11-compliant compiler. This new release brings features such as a G-API module, a QR code detector, performance improvements, and DNN improvements, among others.

OpenCV is an open source library of programming functions mainly aimed at real-time computer vision. OpenCV is cross-platform and free for use under the open-source BSD license. Let’s have a look at what’s new in OpenCV 4.0.

New Features

- G-API: OpenCV 4.0 comes with a completely new module, opencv_gapi. G-API is an engine for very efficient image processing, based on lazy evaluation and on-the-fly construction of the processing graph.
- QR code detector and decoder: OpenCV 4.0 comprises a QR code detector and decoder that have been added to the opencv/objdetect module, along with a live sample. The decoder is currently built on top of the Quirc library.
- Kinect Fusion algorithm: The popular Kinect Fusion algorithm has been implemented, optimized for CPU and GPU (OpenCL), and integrated into the opencv_contrib/rgbd module. Kinect 2 support has also been updated in the opencv/videoio module to make the live samples work.

DNN improvements

- Support has been added for the Mask-RCNN model.
- A new integrated ONNX parser has been added.
- Support has been added for popular networks such as the YOLO object detection network.
- The performance of the DNN module in OpenCV 4.0 has improved when built with Intel DLDT support, by utilizing more layers from DLDT.
- OpenCV 4.0 comes with an experimental Vulkan backend, added for platforms where OpenCL is not available.

Performance improvements

In OpenCV 4.0, hundreds of basic kernels in OpenCV have been rewritten with the help of "wide universal intrinsics". Wide universal intrinsics map to SSE2, SSE4, AVX2, NEON, or VSX intrinsics, depending on the target platform and the compile flags. This leads to better performance, even for already optimized functions. Support has been added for IPP 2019 via the IPPICV component upgrade.

For more information, check out the official release notes.

Image filtering techniques in OpenCV
3 ways to deploy a QT and OpenCV application
OpenCV and Android: Making Your Apps See

Amazon Cognito for secure mobile and web user authentication [Tutorial]
Natasha Mathur
04 Jul 2018
13 min read
Amazon Cognito is a user authentication service that enables user sign-up, sign-in, and access control for mobile and web applications easily, quickly, and securely. In Amazon Cognito, you can create your user directory, which allows the application to work even when devices are not online. Amazon Cognito scales to millions of users and authenticates users from social identity providers such as Facebook, Google, Twitter, and Amazon, from enterprise identity providers such as Microsoft Active Directory through SAML, or from your own identity provider system. Today, we will discuss the AWS Cognito service for simple and secure user authentication for mobile and web applications. With Amazon Cognito, you can concentrate on developing great application experiences for the user, instead of worrying about developing secure and scalable solutions for handling users' access control permissions and synchronization across devices. Let's explore the topics that fall under AWS Cognito and see how it can be used for user authentication from AWS. This article is an excerpt from the book 'Expert AWS Development' written by Atul V. Mistry.

Amazon Cognito benefits

Amazon Cognito is a fully managed service, and it provides User Pools for a secure user directory that scales to millions of users; these User Pools are easy to set up. Amazon Cognito User Pools are standards-based identity providers; Amazon Cognito supports many identity and access management standards such as OAuth 2.0, SAML 2.0, and OpenID Connect. Amazon Cognito supports the encryption of data in transit and at rest, as well as multi-factor authentication. With Amazon Cognito, you can control access to backend resources from the application. You can control users by defining roles and mapping different roles for the application, so they can access only the application resources for which they are authorized. Amazon Cognito integrates easily with sign-up and sign-in for the app because it provides a built-in UI and configuration for different federated identity providers. It provides the facility to customize the UI, as per company branding, front and center for user interactions. Amazon Cognito is eligible for HIPAA-BAA and is compliant with PCI DSS, SOC 1-3, and ISO 27001.

Amazon Cognito features

Amazon Cognito provides the following features:

- Amazon Cognito Identity
  - User Pools
  - Federated Identities
- Amazon Cognito Sync
  - Data synchronization

Today we will discuss User Pools and Federated Identities in detail.

Amazon Cognito User Pools

Amazon Cognito User Pools helps you create and maintain a directory for users, and adds sign-up/sign-in to mobile or web applications. Users can sign in to a User Pool through social or SAML-based identity providers. Enhanced security features such as multi-factor authentication and email/phone number verification can be implemented for your application. With AWS Lambda, you can customize your workflows for Amazon Cognito User Pools, such as adding application-specific logic for user validation and registration for fraud detection.

Getting started with Amazon Cognito User Pools

You can create Amazon Cognito User Pools through the Amazon Cognito console, the AWS Command Line Interface (CLI), or the Amazon Cognito Application Programming Interface (API). Now let's understand all these different ways of creating User Pools.
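For instance, the CLI route can be as short as a single command. A minimal sketch, assuming the AWS CLI is installed and configured (the pool name here is purely illustrative):

    aws cognito-idp create-user-pool --pool-name MyDemoUserPool

This creates a pool with default settings; the console flow below exposes the same settings interactively.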
Amazon Cognito User Pool creation from the console

Please perform the following steps to create a User Pool from the console:

1. Log in to the AWS Management Console and select the Amazon Cognito service. It will show you two options, Manage your User Pools and Manage Federated Identities, as shown:
2. Select Manage your User Pools. It will take you to the Create a user pool screen. You can add the Pool name and create the User Pool. You can create this user pool in two different ways, by selecting:
   - Review defaults: It comes with default settings, and if required, you can customize them.
   - Step through settings: Step by step, you can customize each setting.
3. When you select Review defaults, you will be taken to the review User Pool configuration screen; then select Create pool.
4. When you select Step through settings, you will be taken to the Attributes screen to customize it. Let's understand all the screens in brief:
   - Attributes: This gives users the option to sign in with a username, email address, or phone number. You can select standard attributes for user profiles, as well as create custom attributes.
   - Policies: You can set the password strength, allow users to sign themselves up, and stipulate the number of days until a newly created account expires.
   - MFA and verifications: This allows you to enable multi-factor authentication and configure required verification for emails and phone numbers. You create a new IAM role to set permissions for Amazon Cognito that allow it to send SMS messages to users on your behalf.
   - Message customizations: You can customize the messages that verify an email address by providing a verification code or link. You can customize user invitation messages for SMS and email, but you must include the username and a temporary password. You can customize the sender email address using SES-verified identities.
   - Tags: You can add tags for this User Pool by providing tag keys and their values.
   - Devices: This provides settings to remember a user's device. It provides options such as Always, User Opt In, and No.
   - App clients: You can add app clients by giving them unique IDs and an optional secret key to access this User Pool.
   - Triggers: You can customize workflows and user experiences by triggering AWS Lambda functions for different events.
   - Review: This shows you all the attributes for review. You can edit any attribute on the Review screen and then click on Create pool. It will create the User Pool.
5. After creating a new User Pool, navigate to the App clients screen. Enter the App client name as CognitoDemo and click on Create app client:
6. Once this client app is generated, you can click on show details to see the App client secret:

Pool Id, App client id, and App client secret are required to connect any application to Amazon Cognito. Now, we will explore an Amazon Cognito User Pool example to sign up and sign in the user.

Amazon Cognito example for Android with mobile SDK

In this example, we will perform tasks such as creating a new user, requesting a confirmation code for a new user through email, confirming the user, user login, and so on.

Create a Cognito User Pool: To create a User Pool with the default configuration, you have to pass parameters to the CognitoUserPool constructor, such as the application context, userPoolId, clientId, clientSecret, and cognitoRegion (optional):

    CognitoUserPool userPool = new CognitoUserPool(context, userPoolId, clientId, clientSecret, cognitoRegion);

New user sign-up: Please perform the following steps to sign up new users:

1. Collect information from users such as username, password, given name, phone number, and email address.
2. Now, create the CognitoUserAttributes object and add the user values as key-value pairs to sign up the user:

    CognitoUserAttributes userAttributes = new CognitoUserAttributes();
    String usernameInput = username.getText().toString();
    String userpasswordInput = password.getText().toString();
    userAttributes.addAttribute("Name", name.getText().toString());
    userAttributes.addAttribute("Email", email.getText().toString());
    userAttributes.addAttribute("Phone", phone.getText().toString());
    userPool.signUpInBackground(usernameInput, userpasswordInput, userAttributes, null, signUpHandler);

3. To register or sign up a new user, you have to call SignUpHandler. It contains two methods: onSuccess and onFailure. onSuccess is called when a new user registers successfully. The user then needs to confirm the code required to activate the account. You have to pass parameters such as the Cognito user, the confirmation state of the user, and the medium (email or phone) and destination of the confirmation code:

    SignUpHandler signUpHandler = new SignUpHandler() {
        @Override
        public void onSuccess(CognitoUser user, boolean signUpConfirmationState, CognitoUserCodeDeliveryDetails cognitoUserCodeDeliveryDetails) {
            // Check if the user is already confirmed
            if (signUpConfirmationState) {
                showDialogMessage("New User Sign up successful!", "Your Username is : " + usernameInput, true);
            }
        }

        @Override
        public void onFailure(Exception exception) {
            showDialogMessage("New User Sign up failed.", AppHelper.formatException(exception), false);
        }
    };

You can see on the User Pool console that the user has been successfully signed up but is not confirmed yet:

Confirmation code request: After successfully signing up, the user needs to confirm the code to sign in. The confirmation code will be sent to the user's email or phone. Sometimes the user may be confirmed automatically by triggering a Lambda function. If you selected automatic verification when you created the User Pool, it will send the confirmation code to the user's email or phone. The cognitoUserCodeDeliveryDetails object indicates where the confirmation code was sent, so you can let the user know where to expect it:

    VerificationHandler resendConfCodeHandler = new VerificationHandler() {
        @Override
        public void onSuccess(CognitoUserCodeDeliveryDetails details) {
            showDialogMessage("Confirmation code sent.", "Code sent to " + details.getDestination() + " via " + details.getDeliveryMedium() + ".", false);
        }

        @Override
        public void onFailure(Exception exception) {
            showDialogMessage("Confirmation code request has failed", AppHelper.formatException(exception), false);
        }
    };

In this case, the user will receive an email with the confirmation code. The user can complete the sign-up process after entering the valid confirmation code. To confirm the user, you need to call GenericHandler. The AWS SDK uses this GenericHandler to communicate the result of the confirmation API:

    GenericHandler confHandler = new GenericHandler() {
        @Override
        public void onSuccess() {
            showDialogMessage("Success!", userName + " has been confirmed!", true);
        }

        @Override
        public void onFailure(Exception exception) {
            showDialogMessage("Confirmation failed", exception, false);
        }
    };
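The excerpt does not show where confHandler is actually passed in, so here is a minimal sketch of the wiring, based on the SDK's CognitoUser API (the variable code, holding the string the user typed, is illustrative):

    // Hypothetical wiring: look up the user by username and confirm
    // the account with the code they received.
    CognitoUser cognitoUser = userPool.getUser(usernameInput);
    cognitoUser.confirmSignUpInBackground(code, false, confHandler);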
Once the user is confirmed, it will be updated in the Amazon Cognito console:

Sign in user to the app: You must create an authentication callback handler for the user to sign in to your application. The following code shows how the interaction happens between your app and the SDK:

    // Authentication handler for the user sign-in process.
    AuthenticationHandler authHandler = new AuthenticationHandler() {
        @Override
        public void onSuccess(CognitoUserSession cognitoUserSession) {
            launchUser();
        }

        @Override
        public void getAuthenticationDetails(AuthenticationContinuation continuation, String username) {
            // Get user sign-in credential information from API.
            AuthenticationDetails authDetails = new AuthenticationDetails(username, password, null);
            // Send this user sign-in information for continuation
            continuation.setAuthenticationDetails(authDetails);
            // Allow user sign-in process to continue
            continuation.continueTask();
        }

        @Override
        public void getMFACode(MultiFactorAuthenticationContinuation mfaContinuation) {
            // Get multi-factor authentication code from user to sign in
            mfaContinuation.setMfaCode(mfaVerificationCode);
            // Allow user sign-in process to continue
            mfaContinuation.continueTask();
        }

        @Override
        public void onFailure(Exception e) {
            // User sign-in failed. Please check the exception
            showDialogMessage("Sign-in failed", e);
        }

        @Override
        public void authenticationChallenge(ChallengeContinuation continuation) {
            /** You can implement custom authentication challenge logic
             * here. Pass the user's responses to the continuation. */
        }
    };

Access AWS resources from the application user: A user can access AWS resources from the application by creating an AWS Cognito Federated Identity Pool and associating an existing User Pool with that Identity Pool, by specifying the User Pool ID and App client id. Please see the next section (Step 5) to create the Federated Identity Pool with Cognito.

Let's continue with the same application; after the user is authenticated, add the user's identity token to the logins map in the credential provider. The provider name depends on the Amazon Cognito User Pool ID, and it should have the following structure:

    cognito-idp.<USER_POOL_REGION>.amazonaws.com/<USER_POOL_ID>

For this example, it will be cognito-idp.us-east-1.amazonaws.com/us-east-1_XUGRPHAWA. Now, in your credential provider, pass the ID token that you get after successful authentication:

    // After successful authentication, get the id token from
    // CognitoUserSession
    String idToken = cognitoUserSession.getIdToken().getJWTToken();

    // Use an existing credential provider or create a new one
    CognitoCachingCredentialsProvider credentialsProvider = new CognitoCachingCredentialsProvider(context, IDENTITY_POOL_ID, REGION);

    // Credentials provider setup
    Map<String, String> logins = new HashMap<String, String>();
    logins.put("cognito-idp.us-east-1.amazonaws.com/us-east-1_XUGRPHAWA", idToken);
    credentialsProvider.setLogins(logins);

You can use this credential provider to access AWS services, such as Amazon DynamoDB, as follows:

    AmazonDynamoDBClient dynamoDBClient = new AmazonDynamoDBClient(credentialsProvider);

You have to provide the specific IAM permissions to access AWS services such as DynamoDB. You can add these permissions to the Federated Identities, as mentioned in the following Step 6, by editing the View Policy Document. Once you have attached the appropriate policy, for example, AmazonDynamoDBFullAccess, for this application, you can perform create, read, update, and delete operations in DynamoDB.
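As a quick illustration of what that enables, here is a minimal, hedged sketch of writing an item with the client created above; the table and attribute names are made up for this example and are not part of the book's application:

    // Hypothetical example: persist a small profile item using the
    // Cognito-vended credentials. Assumes a "UserProfiles" table exists
    // with "userId" as its partition key.
    // (PutItemRequest and AttributeValue live in
    // com.amazonaws.services.dynamodbv2.model.)
    Map<String, AttributeValue> item = new HashMap<String, AttributeValue>();
    item.put("userId", new AttributeValue().withS("user-123"));
    item.put("displayName", new AttributeValue().withS("Jane Doe"));
    dynamoDBClient.putItem(new PutItemRequest()
            .withTableName("UserProfiles")
            .withItem(item));

If the attached IAM policy does not grant dynamodb:PutItem on that table, this call fails with an authorization error, which is exactly the access control behavior the roles are meant to enforce.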
Now, we will look at how to create Amazon Cognito Federated Identities.

Amazon Cognito Federated Identities

Amazon Cognito Federated Identities enables you to create unique identities for your users and authenticate them with federated identity providers. With this identity, the user will get temporary, limited-privilege AWS credentials. With these credentials, the user can synchronize their data with Amazon Cognito Sync or securely access other AWS services such as Amazon S3, Amazon DynamoDB, and Amazon API Gateway. It supports federated identity providers such as Twitter, Amazon, Facebook, Google, OpenID Connect providers, and SAML identity providers, as well as unauthenticated identities. It also supports developer-authenticated identities, with which you can register and authenticate users through your own backend authentication system. You need to create an Identity Pool to use Amazon Cognito Federated Identities in your application. This Identity Pool is specific to your account, to store user identity data.

Creating a new Identity Pool from the console

Please perform the following steps to create a new Identity Pool from the console:

1. Log in to the AWS Management Console and select the Amazon Cognito service. It will show you two options: Manage your User Pools and Manage Federated Identities.
2. Select Manage Federated Identities. It will navigate you to the Create new identity pool screen. Enter a unique name in the Identity pool name field.
3. You can enable unauthenticated identities by selecting Enable access to unauthenticated identities from the collapsible section.
4. Under Authentication providers, you can allow your users to authenticate using any of the authentication methods. Click on Create pool. You must select at least one identity from Authentication providers to create a valid Identity Pool. Here, Cognito has been selected as a valid authentication provider by adding the User Pool ID and App client id.
5. It will navigate to the next screen to create a new IAM role by default, to provide limited permissions to end users. These permissions are for Cognito Sync and Mobile Analytics, but you can edit the policy documents to add or update permissions for more services. It will create two IAM roles: one for authenticated users that are supported by identity providers, and another for unauthenticated users, known as guest users. Click Allow to generate the Identity Pool.
6. Once the Identity Pool is generated, it will navigate to the Getting started with Amazon Cognito screen for that Identity Pool. Here, it will provide you with a downloadable AWS SDK for different platforms such as Android, iOS - Objective C, iOS - Swift, JavaScript, Unity, Xamarin, and .NET. It also provides sample code for Get AWS Credentials and Store User Data.

You have created Amazon Cognito Federated Identities. We looked at how the user authentication process in AWS Cognito works with User Pools and Federated Identities. If you found this post useful, check out the book 'Expert AWS Development' to learn other concepts, such as Amazon Cognito Sync and traditional web hosting, in AWS development.

Keep your serverless AWS applications secure [Tutorial]
Amazon Neptune, AWS' cloud graph database, is now generally available
How to start using AWS

Working with Spring Tag Libraries
Packt
13 Jul 2016
26 min read
In this article by Amuthan G, the author of the book Spring MVC Beginners Guide - Second Edition, you are going to learn more about the various tags that are available as part of the Spring tag libraries. (For more resources related to this topic, see here.) After reading this article, you will have a good idea about the following topics:

- JavaServer Pages Standard Tag Library (JSTL)
- Serving and processing web forms
- Form-binding and whitelisting
- Spring tag libraries

JavaServer Pages Standard Tag Library

JavaServer Pages (JSP) is a technology that lets you embed Java code inside HTML pages. This code can be inserted by means of <% %> blocks or by means of JSTL tags. To insert Java code into JSP, the JSTL tags are generally preferred, since tags fit better into their HTML surroundings, so your JSP pages will look more readable. JSP even lets you define your own tags; you must write the code that actually implements the logic of your own tags in Java. JSTL is just a standard tag library provided by Oracle. We can add a reference to the JSTL tag library in our JSP pages as follows:

    <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core"%>

Similarly, Spring MVC also provides its own tag library to develop Spring JSP views easily and effectively. These tags provide a lot of useful common functionality, such as form binding, evaluating errors and outputting messages, and more, when we work with Spring MVC. In order to use these tags in our JSP pages, we must add a reference to the tag library as follows:

    <%@taglib prefix="form" uri="http://www.springframework.org/tags/form" %>
    <%@taglib prefix="spring" uri="http://www.springframework.org/tags" %>

These taglib directives declare that our JSP page uses a set of custom tags related to Spring and identify the location of the library. They also provide a means to identify the custom tags in our JSP page. In the taglib directive, the uri attribute value resolves to a location that the servlet container understands, and the prefix attribute indicates which bits of markup are custom actions.

Serving and processing forms

In Spring MVC, the process of putting an HTML form element's values into model data is called form binding. The following line is a typical example of how we put data into the Model from the Controller:

    model.addAttribute("greeting", "Welcome");

Similarly, the next line shows how we retrieve that data in the View using a JSTL expression:

    <p> ${greeting} </p>

But what if we want to put data into the Model from the View? How do we retrieve that data in the Controller? For example, consider a scenario where an admin of our store wants to add new product information to our store by filling out and submitting an HTML form. How can we collect the values filled out in the HTML form elements and process them in the Controller? This is where the Spring tag library tags help us to bind the HTML tag elements' values to a form backing bean in the Model. Later, the Controller can retrieve the form backing bean from the Model using the @ModelAttribute (org.springframework.web.bind.annotation.ModelAttribute) annotation. The form backing bean (sometimes called the form bean) is used to store form data. We can even use our domain objects as form beans; this works well when there's a close match between the fields in the form and the properties in our domain object. Another approach is creating separate classes for form beans, which are sometimes called Data Transfer Objects (DTO).
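To make that distinction concrete, here is a minimal sketch of what such a separate form bean might look like; the class name and fields below are illustrative, not part of the book's code:

    // A hypothetical DTO that only carries form data; its field names
    // would mirror the path attributes of the form's input tags.
    public class ProductForm {
        private String productId;
        private String name;
        private String unitPrice;

        public String getProductId() { return productId; }
        public void setProductId(String productId) { this.productId = productId; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public String getUnitPrice() { return unitPrice; }
        public void setUnitPrice(String unitPrice) { this.unitPrice = unitPrice; }
    }

A DTO like this keeps the web layer decoupled from the domain model, at the cost of copying values into the domain object yourself.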
Time for action – serving and processing forms

The Spring tag library provides some special <form> and <input> tags, which are more or less similar to HTML form and input tags, but have some special attributes to bind form elements' data with the form backing bean. Let's create a Spring web form in our application to add new products to our product list:

1. Open our ProductRepository interface and add one more method declaration to it as follows:

    void addProduct(Product product);

2. Add an implementation for this method in the InMemoryProductRepository class as follows:

    @Override
    public void addProduct(Product product) {
        String SQL = "INSERT INTO PRODUCTS (ID, "
                + "NAME,"
                + "DESCRIPTION,"
                + "UNIT_PRICE,"
                + "MANUFACTURER,"
                + "CATEGORY,"
                + "CONDITION,"
                + "UNITS_IN_STOCK,"
                + "UNITS_IN_ORDER,"
                + "DISCONTINUED) "
                + "VALUES (:id, :name, :desc, :price, :manufacturer, :category, :condition, :inStock, :inOrder, :discontinued)";
        Map<String, Object> params = new HashMap<>();
        params.put("id", product.getProductId());
        params.put("name", product.getName());
        params.put("desc", product.getDescription());
        params.put("price", product.getUnitPrice());
        params.put("manufacturer", product.getManufacturer());
        params.put("category", product.getCategory());
        params.put("condition", product.getCondition());
        params.put("inStock", product.getUnitsInStock());
        params.put("inOrder", product.getUnitsInOrder());
        params.put("discontinued", product.isDiscontinued());
        jdbcTemplate.update(SQL, params);
    }

3. Open our ProductService interface and add one more method declaration to it as follows:

    void addProduct(Product product);

4. Add an implementation for this method in the ProductServiceImpl class as follows:

    @Override
    public void addProduct(Product product) {
        productRepository.addProduct(product);
    }

5. Open our ProductController class and add two more request mapping methods as follows:

    @RequestMapping(value = "/products/add", method = RequestMethod.GET)
    public String getAddNewProductForm(Model model) {
        Product newProduct = new Product();
        model.addAttribute("newProduct", newProduct);
        return "addProduct";
    }

    @RequestMapping(value = "/products/add", method = RequestMethod.POST)
    public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded) {
        productService.addProduct(productToBeAdded);
        return "redirect:/market/products";
    }

6. Finally, add one more JSP View file called addProduct.jsp under the src/main/webapp/WEB-INF/views/ directory and add the following tag reference declarations as the very first lines in it:

    <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core"%>
    <%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %>

Now add the following code snippet under the tag declaration lines and save addProduct.jsp.
Note that I skipped some <form:input> binding tags for some of the fields of the Product domain object, but I strongly encourage you to add binding tags for the skipped fields while you are trying out this exercise:

    <html>
    <head>
        <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
        <link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.0.0/css/bootstrap.min.css">
        <title>Products</title>
    </head>
    <body>
        <section>
            <div class="jumbotron">
                <div class="container">
                    <h1>Products</h1>
                    <p>Add products</p>
                </div>
            </div>
        </section>
        <section class="container">
            <form:form method="POST" modelAttribute="newProduct" class="form-horizontal">
                <fieldset>
                    <legend>Add new product</legend>
                    <div class="form-group">
                        <label class="control-label col-lg-2 col-lg-2" for="productId">Product Id</label>
                        <div class="col-lg-10">
                            <form:input id="productId" path="productId" type="text" class="form:input-large"/>
                        </div>
                    </div>
                    <!-- Similarly bind <form:input> tags for the name, unitPrice, manufacturer, category, unitsInStock and unitsInOrder fields -->
                    <div class="form-group">
                        <label class="control-label col-lg-2" for="description">Description</label>
                        <div class="col-lg-10">
                            <form:textarea id="description" path="description" rows="2"/>
                        </div>
                    </div>
                    <div class="form-group">
                        <label class="control-label col-lg-2" for="discontinued">Discontinued</label>
                        <div class="col-lg-10">
                            <form:checkbox id="discontinued" path="discontinued"/>
                        </div>
                    </div>
                    <div class="form-group">
                        <label class="control-label col-lg-2" for="condition">Condition</label>
                        <div class="col-lg-10">
                            <form:radiobutton path="condition" value="New"/>New
                            <form:radiobutton path="condition" value="Old"/>Old
                            <form:radiobutton path="condition" value="Refurbished"/>Refurbished
                        </div>
                    </div>
                    <div class="form-group">
                        <div class="col-lg-offset-2 col-lg-10">
                            <input type="submit" id="btnAdd" class="btn btn-primary" value="Add"/>
                        </div>
                    </div>
                </fieldset>
            </form:form>
        </section>
    </body>
    </html>

Now run our application and enter the URL http://localhost:8080/webstore/market/products/add. You will be able to see a web page showing a web form to add product information, as shown in the following screenshot:

Add products web form

Now enter all the information related to the new product that you want to add and click on the Add button. You will see the new product added on the product listing page under the URL http://localhost:8080/webstore/market/products.

What just happened?

In the whole sequence, steps 5 and 6 are very important steps that need to be observed carefully. Everything mentioned prior to step 5 should already be familiar to you. Anyhow, I will give you a brief note on what we did in steps 1 to 4.

In step 1, we just created an addProduct method declaration in our ProductRepository interface to add new products. In step 2, we implemented the addProduct method in our InMemoryProductRepository class. Steps 3 and 4 are just a Service layer extension for ProductRepository. In step 3, we declared a similar method, addProduct, in our ProductService, and implemented it in step 4 to add products to the repository via the productRepository reference.
Okay, coming back to the important steps; what we did in step 5 was nothing but add two request mapping methods, namely getAddNewProductForm and processAddNewProductForm:

    @RequestMapping(value = "/products/add", method = RequestMethod.GET)
    public String getAddNewProductForm(Model model) {
        Product newProduct = new Product();
        model.addAttribute("newProduct", newProduct);
        return "addProduct";
    }

    @RequestMapping(value = "/products/add", method = RequestMethod.POST)
    public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded) {
        productService.addProduct(productToBeAdded);
        return "redirect:/market/products";
    }

If you observe those methods carefully, you will notice a peculiar thing: both methods have the same URL mapping value in their @RequestMapping annotations (value = "/products/add"). So if we enter the URL http://localhost:8080/webstore/market/products/add in the browser, which method will Spring MVC map that request to? The answer lies in the second attribute of the @RequestMapping annotation (method = RequestMethod.GET and method = RequestMethod.POST). If you look again, even though both methods have the same URL mapping, they differ in the request method. So what is happening behind the scenes is that when we enter the URL http://localhost:8080/webstore/market/products/add in the browser, it is considered a GET request, so Spring MVC maps that request to the getAddNewProductForm method. Within that method, we simply attach a new, empty Product domain object to the model, under the attribute name newProduct. So in the addProduct.jsp View, we can access that newProduct Model object:

    Product newProduct = new Product();
    model.addAttribute("newProduct", newProduct);

Before jumping into the processAddNewProductForm method, let's review the addProduct.jsp View file for a moment, so that you understand the form processing flow without confusion. In addProduct.jsp, we just added a <form:form> tag from Spring's tag library:

    <form:form modelAttribute="newProduct" class="form-horizontal">

Since this special <form:form> tag comes from a Spring tag library, we need to add a reference to that tag library in our JSP file; that's why we added the following line at the top of the addProduct.jsp file in step 6:

    <%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %>

In the Spring <form:form> tag, one of the important attributes is modelAttribute. In our case, we assigned the value newProduct as the value of the modelAttribute in the <form:form> tag. If you remember correctly, you can see that this value of the modelAttribute and the attribute name we used to store the newProduct object in the Model from our getAddNewProductForm method are the same. So the newProduct object that we attached to the model from the Controller method (getAddNewProductForm) is now bound to the form. This object is called the form backing bean in Spring MVC. Okay, now you should look at every <form:input> tag inside the <form:form> tag. You can observe a common attribute in every tag. That attribute is path:

    <form:input id="productId" path="productId" type="text" class="form:input-large"/>

The path attribute just indicates the field name, relative to the form backing bean. So the value that is entered in this input box at runtime will be bound to the corresponding field of the form bean. Okay, now it's time to come back and review our processAddNewProductForm method. When will this method be invoked?
This method will be invoked once we press the submit button on our form. Since every form submission is considered a POST request, this time the browser will send a POST request to the same URL, http://localhost:8080/webstore/products/add. So this time the processAddNewProductForm method gets invoked, since it is a POST request. Inside the processAddNewProductForm method, we simply call the addProduct service method to add the new product to the repository:

    productService.addProduct(productToBeAdded);

But the interesting question here is: how is the productToBeAdded object populated with the data that we entered in the form? The answer lies in the @ModelAttribute (org.springframework.web.bind.annotation.ModelAttribute) annotation. Notice the method signature of the processAddNewProductForm method:

    public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded)

Here, if you look at the value attribute of the @ModelAttribute annotation, you can observe a pattern. The @ModelAttribute annotation's value and the value of the modelAttribute from the <form:form> tag are the same. So Spring MVC knows that it should assign the form-bound newProduct object to the processAddNewProductForm method's parameter productToBeAdded.

The @ModelAttribute annotation is not only used to retrieve an object from the Model; if we want, we can even use the @ModelAttribute annotation to add objects to the Model. For instance, we can rewrite our getAddNewProductForm method to something like the following by using the @ModelAttribute annotation:

    @RequestMapping(value = "/products/add", method = RequestMethod.GET)
    public String getAddNewProductForm(@ModelAttribute("newProduct") Product newProduct) {
        return "addProduct";
    }

You can see that we haven't created a new, empty Product domain object and attached it to the model. All we did was add a parameter of the type Product and annotate it with the @ModelAttribute annotation, so Spring MVC will know that it should create an object of Product and attach it to the model under the name newProduct.

One more thing that needs to be observed in the processAddNewProductForm method is the logical View name it returns: redirect:/market/products. So what are we trying to tell Spring MVC by returning the string redirect:/market/products? To get the answer, observe the logical View name string carefully; if we split this string with the ":" (colon) symbol, we get two parts. The first part is the prefix redirect, and the second part is something that looks like a request path: /market/products. So, instead of returning a View name, we are simply instructing Spring to issue a redirect request to the request path /market/products, which is the request path for the list method of our ProductController. So after submitting the form, we list the products using the list method of ProductController.

As a matter of fact, when we return any request path with the redirect: prefix from a request mapping method, Spring will use a special View object called RedirectView (org.springframework.web.servlet.view.RedirectView) to issue the redirect command behind the scenes. Instead of landing on a web page after the successful submission of a web form, we spawn a new request to the request path /market/products with the help of RedirectView. This pattern is called redirect-after-post, which is a common pattern to use with web-based forms. We use this pattern to avoid double submission of the same form.
Sometimes, after submitting the form, if we press the browser's refresh button or back button, there is a chance of resubmitting the same form. This behavior is called double submission.

Have a go hero – customer registration form

It is great that we created a web form to add new products to our web application under the URL http://localhost:8080/webstore/market/products/add. Why don't you create a customer registration form to register new customers in our application? Try to create a customer registration form under the URL http://localhost:8080/webstore/customers/add.

Customizing data binding

In the last section, you saw how to bind data submitted by an HTML form to a form backing bean. In order to do the binding, Spring MVC internally uses a special binding object called WebDataBinder (org.springframework.web.bind.WebDataBinder). WebDataBinder extracts the data out of the HttpServletRequest object, converts it to a proper data format, loads it into a form backing bean, and validates it. To customize the behavior of data binding, we can initialize and configure the WebDataBinder object in our Controller. The @InitBinder (org.springframework.web.bind.annotation.InitBinder) annotation helps us to do that. The @InitBinder annotation designates a method to initialize WebDataBinder.

Let's look at a practical use of customizing WebDataBinder. Since we are using the actual domain object itself as the form backing bean, during form submission there is a chance of security vulnerabilities. Because Spring automatically binds HTTP parameters to form bean properties, an attacker could bind a suitably named HTTP parameter to form properties that weren't intended for binding. To address this problem, we can explicitly tell Spring which fields are allowed for form binding. Technically speaking, the process of explicitly telling which fields are allowed for binding is called whitelisting binding in Spring MVC; we can do whitelisting binding using WebDataBinder.

Time for action – whitelisting form fields for binding

In the previous exercise, while adding a new product, we bound every field of the Product domain object in the form, but it is meaningless to specify unitsInOrder and discontinued values during the addition of a new product, because nobody can place an order before the product is added to the store, and similarly, discontinued products need not be added to our product list. So we should not allow these fields to be bound with the form bean while adding a new product to our store. However, we still want all the other fields of the Product domain object to be bound.
Let's see how to do this with the following steps:

1. Open our ProductController class and add a method as follows:

    @InitBinder
    public void initialiseBinder(WebDataBinder binder) {
        binder.setAllowedFields("productId", "name", "unitPrice", "description", "manufacturer", "category", "unitsInStock", "condition");
    }

2. Add an extra parameter of the type BindingResult (org.springframework.validation.BindingResult) to the processAddNewProductForm method as follows:

    public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded, BindingResult result)

3. In the same processAddNewProductForm method, add the following condition just before the line saving the productToBeAdded object:

    String[] suppressedFields = result.getSuppressedFields();
    if (suppressedFields.length > 0) {
        throw new RuntimeException("Attempting to bind disallowed fields: " + StringUtils.arrayToCommaDelimitedString(suppressedFields));
    }

4. Now run our application and enter the URL http://localhost:8080/webstore/market/products/add. You will be able to see a web page showing a web form to add new product information.
5. Fill out all the fields, particularly Units In Order and Discontinued. Now press the Add button and you will see an HTTP status 500 error on the web page, as shown in the following image:

The add product page showing an error for disallowed fields

6. Now open addProduct.jsp from /Webshop/src/main/webapp/WEB-INF/views/ in your project and remove the input tags that are related to the Units In Order and Discontinued fields. Basically, you need to remove the following block of code:

    <div class="form-group">
        <label class="control-label col-lg-2" for="unitsInOrder">Units In Order</label>
        <div class="col-lg-10">
            <form:input id="unitsInOrder" path="unitsInOrder" type="text" class="form:input-large"/>
        </div>
    </div>
    <div class="form-group">
        <label class="control-label col-lg-2" for="discontinued">Discontinued</label>
        <div class="col-lg-10">
            <form:checkbox id="discontinued" path="discontinued"/>
        </div>
    </div>

7. Now run our application again and enter the URL http://localhost:8080/webstore/market/products/add. You will be able to see a web page showing a web form to add a new product, but this time without the Units In Order and Discontinued fields. Now enter all information related to the new product and click on the Add button. You will see the new product added on the product listing page under the URL http://localhost:8080/webstore/market/products.

What just happened?

Our intention was to put some restrictions on binding HTTP parameters to the form backing bean. As we already discussed, the automatic binding feature of Spring could lead to a potential security vulnerability if we use a domain object itself as the form bean. So we have to explicitly tell Spring MVC which fields are allowed. That's what we are doing in step 1.

The @InitBinder annotation designates a Controller method as a hook method to do some custom configuration regarding data binding on the WebDataBinder. And WebDataBinder is the thing that is doing the data binding at runtime, so we need to tell WebDataBinder which fields are allowed to be bound. If you observe our initialiseBinder method from ProductController, it has a parameter called binder, which is of the type WebDataBinder. We are simply calling the setAllowedFields method on the binder object and passing the field names that are allowed for binding. Spring MVC will call this method to initialize WebDataBinder before doing the binding, since it has the @InitBinder annotation.

WebDataBinder also has a method called setDisallowedFields to strictly specify which fields are disallowed for binding. If you use this method, Spring MVC allows any HTTP request parameter to be bound except those field names specified in the setDisallowedFields method. This is called blacklisting binding; a short sketch follows below.
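For example, a minimal sketch of the blacklisting alternative for our Controller might look like this, using the same two fields we wanted to exclude (this replaces, rather than complements, the whitelisting version above):

    @InitBinder
    public void initialiseBinder(WebDataBinder binder) {
        // Blacklisting: everything binds except these two fields.
        binder.setDisallowedFields("unitsInOrder", "discontinued");
    }

Whitelisting is generally the safer default, because a newly added domain field stays unbindable until you explicitly allow it.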
Okay, we configured which fields are allowed for binding, but we need to verify whether any fields other than those allowed are being bound with the form backing bean. That's what we are doing in steps 2 and 3. We changed processAddNewProductForm by adding one extra parameter called result, which is of the type BindingResult. Spring MVC will fill this object with the result of the binding. If any attempt is made to bind any fields other than the allowed fields, the BindingResult object will have a getSuppressedFields count greater than zero. That's why we were checking the suppressed field count and throwing a RuntimeException:

    if (suppressedFields.length > 0) {
        throw new RuntimeException("Attempting to bind disallowed fields: " + StringUtils.arrayToCommaDelimitedString(suppressedFields));
    }

Here, the static class StringUtils comes from org.springframework.util.StringUtils.

We want to ensure that our binding configuration is working—that's why we ran our application without changing the View file addProduct.jsp in step 4. And as expected, we got the HTTP status 500 error saying Attempting to bind disallowed fields when we submitted the Add products form with the unitsInOrder and discontinued fields filled out. Now that we know our binder configuration is working, we can change our View file so as not to bind the disallowed fields—that's what we were doing in step 6: just removing the input field elements that are related to the disallowed fields from the addProduct.jsp file. After that, our add new products page works just fine, as expected. If an outside attacker tries to tamper with the POST request and attach an HTTP parameter with the same field name as the form backing bean, they will get a RuntimeException.

Whitelisting is just one example of how we can customize the binding with the help of WebDataBinder. By using WebDataBinder, we can perform many more types of binding customization as well. For example, WebDataBinder internally uses many PropertyEditor (java.beans.PropertyEditor) implementations to convert the HTTP request parameters to the target fields of the form backing bean. We can even register custom PropertyEditor objects with WebDataBinder to convert more complex data types. For instance, look at the following code snippet that shows how to register a custom PropertyEditor to convert the Date class:

    @InitBinder
    public void initialiseBinder(WebDataBinder binder) {
        DateFormat dateFormat = new SimpleDateFormat("MMM d, YYYY");
        CustomDateEditor orderDateEditor = new CustomDateEditor(dateFormat, true);
        binder.registerCustomEditor(Date.class, orderDateEditor);
    }

There are many advanced configurations we can make with WebDataBinder in terms of data binding, but at a beginner level, we don't need to go that deep.
Pop quiz – data binding

Consider the following data binding customization and identify the possible matching field bindings:

    @InitBinder
    public void initialiseBinder(WebDataBinder binder) {
        binder.setAllowedFields("unit*");
    }

- NoOfUnit
- unitPrice
- priceUnit
- united

Externalizing text messages

So far, in all our View files, we have hardcoded text values for all the labels; for instance, take our addProduct.jsp file—for the productId input tag, we have a label tag with the hardcoded text value Product Id:

    <label class="control-label col-lg-2 col-lg-2" for="productId">Product Id</label>

Externalizing these texts from a View file into a properties file will help us have single, centralized control over all label messages. Moreover, it will help us make our web pages ready for internationalization. But in order to perform internationalization, we need to externalize the label messages first. So now you are going to see how to externalize locale-sensitive text messages from a web page to a property file.

Time for action – externalizing messages

Let's externalize the label texts in our addProduct.jsp:

1. Open our addProduct.jsp file and add the following taglib reference at the top:

    <%@ taglib prefix="spring" uri="http://www.springframework.org/tags" %>

2. Change the product ID <label> tag's value to <spring:message code="addProduct.form.productId.label"/>. After the change, your product ID <label> tag should look as follows:

    <label class="control-label col-lg-2 col-lg-2" for="productId">
        <spring:message code="addProduct.form.productId.label"/>
    </label>

3. Create a file called messages.properties under /src/main/resources in your project and add the following line to it:

    addProduct.form.productId.label = New Product ID

4. Now open our web application context configuration file, WebApplicationContextConfig.java, and add the following bean definition to it:

    @Bean
    public MessageSource messageSource() {
        ResourceBundleMessageSource resource = new ResourceBundleMessageSource();
        resource.setBasename("messages");
        return resource;
    }

5. Now run our application again and enter the URL http://localhost:8080/webstore/market/products/add. You will be able to see the add product page with the product ID label showing as New Product ID.

What just happened?

Spring MVC has a special tag called <spring:message> to externalize texts from JSP files. In order to use this tag, we need to add a reference to a Spring tag library—that's what we did in step 1. We just added a reference to the Spring tag library in our addProduct.jsp file:

    <%@ taglib prefix="spring" uri="http://www.springframework.org/tags" %>

In step 2, we just used that tag to externalize the label text of the product ID input tag:

    <label class="control-label col-lg-2 col-lg-2" for="productId">
        <spring:message code="addProduct.form.productId.label"/>
    </label>

Here, an important thing you need to notice is the code attribute of the <spring:message> tag; we have assigned the value addProduct.form.productId.label as the code for this <spring:message> tag. This code attribute is a kind of key; at runtime, Spring will try to read the corresponding value for the given key (code) from a message source property file. We said that Spring will read the message's value from a message source property file, so we need to create that property file. That's what we did in step 3. We just created a property file with the name messages.properties under the resource directory. Inside that file, we just assigned the label text value to the message tag code:

    addProduct.form.productId.label = New Product ID

Remember, for demonstration purposes I externalized just a single label, but a typical web application will have externalized messages for almost all of its tags; in that case, the messages.properties file will have many code-value pair entries.
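For instance, once more labels are externalized, the file might grow to look something like the following sketch (the additional keys below are illustrative, not taken from the book):

    addProduct.form.productId.label = New Product ID
    addProduct.form.name.label = Name
    addProduct.form.unitPrice.label = Unit Price
    addProduct.form.description.label = Description

Grouping keys by page and field, as here, keeps a large message file navigable.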
Okay, we created a message source property file and added the <spring:message> tag in our JSP file, but to connect these two, we need to create one more Spring bean in our web application context for the org.springframework.context.support.ResourceBundleMessageSource class, with the name messageSource—we did that in step 4:

    @Bean
    public MessageSource messageSource() {
        ResourceBundleMessageSource resource = new ResourceBundleMessageSource();
        resource.setBasename("messages");
        return resource;
    }

One important property you need to notice here is the basename property; we assigned the value messages to that property. If you remember, this is the name of the property file that we created in step 3. That is all we did to enable the externalizing of messages in a JSP file. Now, if we run the application and open up the add products page, you can see that the product ID label will have the same text as we assigned to the addProduct.form.productId.label code in the messages.properties file.

Have a go hero – externalize all the labels from all the pages

I just showed you how to externalize the message for a single label; you can now do that for every single label available in all the pages.

Summary

At the start of this article, you saw how to serve and process forms, and you learned how to bind form data to a form backing bean. You also learned how to read a bean in the Controller. After that, we went a little deeper into form bean binding and configured the binder in our Controller to whitelist some of the POST parameters from being bound to the form bean. Finally, you saw how to use one more special Spring tag, <spring:message>, to externalize the messages in a JSP file.

Resources for Article:

Further resources on this subject:

- Designing your very own ASP.NET MVC Application [article]
- Mixing ASP.NET Webforms and ASP.NET MVC [article]
- ASP.NET MVC Framework [article]

Gearing Up for Bootstrap 4

Packt
12 Sep 2016
28 min read
In this article by Benjamin Jakobus and Jason Marah, the authors of the book Mastering Bootstrap 4, we will be discussing Bootstrap, a web development framework that helps developers build web interfaces. Originally conceived at Twitter in 2011 by Mark Otto and Jacob Thornton, the framework is now open source and has grown to be one of the most popular web development frameworks to date. Being freely available for private, educational, and commercial use meant that Bootstrap quickly grew in popularity. Today, thousands of organizations rely on Bootstrap, including NASA, Walmart, and Bloomberg. According to BuiltWith.com, over 10% of the world's top 1 million websites are built using Bootstrap (http://trends.builtwith.com/docinfo/Twitter-Bootstrap). As such, knowing how to use Bootstrap will be an important skill and serve as a powerful addition to any web developer's tool belt.

The framework itself consists of a mixture of JavaScript and CSS, and provides developers with all the essential components required to develop a fully functioning web user interface. Over the course of the book, we will introduce you to the most essential features that Bootstrap has to offer by teaching you how to use the framework to build a complete website from scratch. As CSS and HTML alone are already the subject of entire books in themselves, we assume that you, the reader, have at least a basic knowledge of HTML, CSS, and JavaScript.

We begin this article by introducing you to our demo website—MyPhoto. This website will accompany us throughout the book and serve as a practical point of reference. Therefore, all lessons learned will be taught within the context of MyPhoto. We will then discuss the Bootstrap framework, listing its features and contrasting the current release to the last major release (Bootstrap 3). Last but not least, this article will help you set up your development environment. To ensure equal footing, we will guide you towards installing the right build tools and precisely detail the various ways in which you can integrate Bootstrap into a project. To summarize, this article will do the following:

Introduce you to what exactly we will be doing
Explain what is new in the latest version of Bootstrap, and how the latest version differs from the previous major release
Show you how to include Bootstrap in our web project

Introducing our demo project

The book will teach you how to build a complete Bootstrap website from scratch. We will build and improve the website's various sections as we progress through the book. The concept behind our website is simple: to develop a landing page for photographers. Using this landing page, (hypothetical) users will be able to exhibit their wares and services. While building our website, we will be making use of the same third-party tools and libraries that you would if you were working as a professional software developer. We chose these tools and plugins specifically because of their widespread use. Learning how to use and integrate them will save you a lot of work when developing websites in the future. Specifically, the tools that we will use to assist us throughout the development of MyPhoto are Bower, node package manager (npm), and Grunt. From a development perspective, the construction of MyPhoto will teach you how to use and apply all of the essential user interface concepts and components required to build a fully functioning website.
Among other things, you will learn how to do the following:

Use the Bootstrap grid system to structure the information presented on your website (a minimal grid sketch follows at the end of this overview).
Create a fixed, branded, navigation bar with animated scroll effects.
Use an image carousel for displaying different photographs, implemented using Bootstrap's carousel.js and jumbotron (jumbotron is a design principle for displaying important content). It should be noted that carousels are becoming an increasingly unpopular design choice; however, they are still heavily used and are an important feature of Bootstrap. As such, we do not argue for or against the use of carousels, as their effectiveness depends very much on how they are used, rather than on whether they are used.
Build custom tabs that allow users to navigate across different contents.
Use and apply Bootstrap's modal dialogs.
Apply a fixed page footer.
Create forms for data entry using Bootstrap's input controls (text fields, text areas, and buttons) and apply Bootstrap's input validation styles.
Make best use of Bootstrap's context classes.
Create alert messages and learn how to customize them.
Rapidly develop interactive data tables for displaying product information.
Use drop-down menus, custom fonts, and icons.

In addition to learning how to use Bootstrap 4, the development of MyPhoto will introduce you to a range of third-party libraries, such as Scrollspy (for scroll animations), SalvattoreJS (a library for complementing our Bootstrap grid), Animate.css (for beautiful CSS animations, such as fade-in effects; see https://daneden.github.io/animate.css/), and Bootstrap DataTables (for rapidly displaying data in tabular form). The website itself will consist of different sections:

A Welcome section
An About section
A Services section
A Gallery section
A Contact Us section

The development of each section is intended to teach you how to use a distinct set of features found in third-party libraries. For example, by developing the Welcome section, you will learn how to use Bootstrap's jumbotron and alert dialogs along with different font and text styles, while the About section will show you how to use cards. The Services section of our project introduces you to Bootstrap's custom tabs; that is, you will learn how to use Bootstrap's tabs to display a range of different services offered by our website. Following on from the Services section, you will need to use rich imagery to really show off the website's sample services. You will achieve this by mastering Bootstrap's responsive core along with Bootstrap's carousel and third-party jQuery plugins. Last but not least, the Contact Us section will demonstrate how to use Bootstrap's form elements and helper functions; that is, you will learn how to use Bootstrap to create stylish HTML forms, how to use form fields and input groups, and how to perform data validation. Finally, toward the end of the book, you will learn how to optimize your website and integrate it with the popular JavaScript frameworks AngularJS (https://angularjs.org/) and React (http://facebook.github.io/react/). As entire books have been written on AngularJS alone, we will only cover the essentials required for the integration itself. Now that you have glimpsed a brief overview of MyPhoto, let's examine Bootstrap 4 in more detail and discuss what makes it so different from its predecessor. Take a look at the following screenshot:

Figure 1.1: A taste of what is to come: the MyPhoto landing page.
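As promised above, here is a minimal, hedged sketch of the kind of grid markup the MyPhoto sections will be built from. The class names are standard Bootstrap 4 grid classes, while the content is purely illustrative:

<div class="container">
  <div class="row">
    <div class="col-md-8">Main gallery content</div>
    <div class="col-md-4">Sidebar with photographer info</div>
  </div>
</div>

The two columns span 8 and 4 of the grid's 12 columns on medium screens and above, and stack vertically on smaller screens.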
What Bootstrap 4 Alpha 4 has to offer

Much has changed since Twitter's Bootstrap was first released on August 19th, 2011. In essence, Bootstrap 1 was a collection of CSS rules offering developers the ability to lay out their website, create forms and buttons, and help with general appearance and site navigation. With respect to these core features, Bootstrap 4 Alpha 4 is still much the same as its predecessors. In other words, the framework's focus is still on allowing developers to create layouts and helping to develop a consistent appearance by providing stylings for buttons, forms, and other user interface elements. How it helps developers achieve and use these features, however, has changed entirely. Bootstrap 4 is a complete rewrite of the entire project and, as such, ships with many fundamental differences to its predecessors. Along with Bootstrap's major features, we will be discussing the most striking differences between Bootstrap 3 and Bootstrap 4 in the subsections below.

Layout

Possibly the most important and widely used feature is Bootstrap's ability to lay out and organize your page. Specifically, Bootstrap offers the following:

Responsive containers.
Responsive breakpoints for adjusting page layout in response to differing screen sizes.
A 12-column grid layout for flexibly arranging various elements on your page.
Media objects that act as building blocks and allow you to build your own structural components.
Utility classes that allow you to manipulate elements in a responsive manner. For example, you can use the layout utility classes to hide elements, depending on screen size.

Content styling

Just like its predecessor, Bootstrap 4 overrides the default browser styles. This means that many elements, such as lists or headings, are padded and spaced differently. The majority of overridden styles only affect spacing and positioning; however, some elements may also have their border removed. The reason behind this is simple: to provide users with a clean slate upon which they can build their site. Building on this clean slate, Bootstrap 4 provides styles for almost every aspect of your webpage, such as buttons (Figure 1.2), input fields, headings, paragraphs, special inline texts, such as keyboard input (Figure 1.3), figures, tables, and navigation controls. Aside from this, Bootstrap offers state styles for all input controls, for example, styles for disabled buttons or toggled buttons. Take a look at the following screenshot:

Figure 1.2: The button styles that come with Bootstrap 4: btn-primary, btn-secondary, btn-success, btn-danger, btn-link, btn-info, and btn-warning.

Take a look at the following screenshot:

Figure 1.3: Bootstrap's content styles. In the preceding example, we see inline styling for denoting keyboard input.

Components

Aside from layout and content styling, Bootstrap offers a large variety of reusable components that allow you to quickly construct your website's most fundamental features. Bootstrap's UI components encompass all of the fundamental building blocks that you would expect a web development toolkit to offer: modal dialogs, progress bars, navigation bars, tooltips, popovers, a carousel, alerts, drop-down menus, input groups, tabs, pagination, and components for emphasizing certain contents. Let's have a look at the following modal dialog screenshot:

Figure 1.4: Various Bootstrap 4 components in action. In the screenshot above, we see a sample modal dialog containing an info alert, some sample text, and an animated progress bar.
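As a quick illustration of the content styles above, the following snippet renders the button variants listed in Figure 1.2 together with an info alert like the one shown in Figure 1.4. This is a minimal sketch rather than code from the book; the class names are the standard Bootstrap 4 classes named in the figure captions:

<div class="alert alert-info">A sample info alert.</div>
<button type="button" class="btn btn-primary">Primary</button>
<button type="button" class="btn btn-secondary">Secondary</button>
<button type="button" class="btn btn-success">Success</button>
<button type="button" class="btn btn-danger">Danger</button>
<button type="button" class="btn btn-warning">Warning</button>
<button type="button" class="btn btn-info">Info</button>
<button type="button" class="btn btn-link">Link</button>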
Mobile support

Similar to its predecessor, Bootstrap 4 allows you to create mobile-friendly websites without too much additional development work. By default, Bootstrap is designed to work across all resolutions and screen sizes, from mobile to tablet to desktop. In fact, Bootstrap's mobile-first design philosophy implies that its components must display and function correctly at the smallest screen size possible. The reasoning behind this is simple. Think about developing a website without consideration for small mobile screens. In this case, you are likely to pack your website full of buttons, labels, and tables. You will probably only discover any usability issues when a user attempts to visit your website using a mobile device, only to find a small webpage that is crowded with buttons and forms. At this stage, you will be required to rework the entire user interface to allow it to render on smaller screens. For precisely this reason, Bootstrap promotes a bottom-up approach, forcing developers to get the user interface to render correctly on the smallest possible screen size before expanding upwards.

Utility classes

Aside from ready-to-go components, Bootstrap offers a large selection of utility classes that encapsulate the most commonly needed style rules, for example, rules for aligning text, hiding an element, or providing contextual colors for warning text.

Cross-browser compatibility

Bootstrap 4 supports the vast majority of modern browsers, including Chrome, Firefox, Opera, Safari, Internet Explorer (version 9 and onwards; Internet Explorer 8 and below are not supported), and Microsoft Edge.

Sass instead of Less

Both Less and Sass (Syntactically Awesome Stylesheets) are CSS extension languages; that is, they are languages that extend the CSS vocabulary with the objective of making the development of many, large, and complex style sheets easier. Although Less and Sass are fundamentally different languages, the general manner in which they extend CSS is the same—both rely on a preprocessor. As you produce your build, the preprocessor is run, parsing the Less/Sass script and turning your Less or Sass instructions into plain CSS. Bootstrap 3's official build is written in Less, while Bootstrap 4 has been developed from scratch and is written entirely in Sass. Both Less and Sass are compiled into CSS to produce a single file, bootstrap.css. It is this CSS file that we will be primarily referencing throughout this book (with the exception of Chapter 3, Building the Layout). Consequently, you will not be required to know Sass in order to follow this book. However, we do recommend that you take a 20-minute introductory course on Sass if you are completely new to the language. Rest assured, if you already know CSS, you will not need more time than this. The language's syntax is very close to normal CSS, and its elementary concepts are similar to those contained within any other programming language.

From pixel to root em

Unlike its predecessor, Bootstrap 4 no longer uses the pixel (px) as its unit of typographic measurement. Instead, it primarily uses the root em (rem). The reasoning behind choosing rem is based on a well-known problem with px: websites using px may render incorrectly, or not as originally intended, as users change the size of the browser's base font. Using a unit of measurement that is relative to the page's root element helps address this problem, as the root element will be scaled relative to the browser's base font.
In turn, a page will be scaled relative to this root element.

Typographic units of measurement

Simply put, typographic units of measurement determine the size of your font and elements. The most commonly used units of measurement are px and em. The former is an abbreviation for pixel, and uses a reference pixel to determine a font's exact size. This means that, for displays of 96 dots per inch (dpi), 1 px will equal an actual pixel on the screen. For higher resolution displays, the reference pixel will result in the px being scaled to match the display's resolution. For example, specifying a font size of 100 px will mean that the font is exactly 100 pixels in size (on a display with 96 dpi), irrespective of any other element on the page. Em is a unit of measurement that is relative to the parent of the element to which it is applied. So, for example, if we were to have two nested div elements, the outer element with a font size of 100 px and the inner element with a font size of 2 em, then the inner element's font size would translate to 200 px (as in this case 1 em = 100 px). The problem with using a unit of measurement that is relative to parent elements is that it increases your code's complexity, as the nesting of elements makes size calculations more difficult. The recently introduced rem measurement aims to address both em's and px's shortcomings by combining their two strengths—instead of being relative to a parent element, rem is relative to the page's root element.

No more support for Internet Explorer 8

As was already implicit in the feature summary above, the latest version of Bootstrap no longer supports Internet Explorer 8. The decision to only support newer versions of Internet Explorer was a reasonable one, as not even Microsoft itself provides technical support and updates for Internet Explorer 8 anymore (as of January 2016). Furthermore, Internet Explorer 8 does not support rem, meaning that Bootstrap 4 would have been required to provide a workaround. This in turn would most likely have implied a large amount of additional development work, with the potential for inconsistencies. Lastly, responsive website development for Internet Explorer 8 is difficult, as the browser does not support CSS media queries. Given these three factors, dropping support for this version of Internet Explorer was the most sensible path of action.

A new grid tier

Bootstrap's grid system consists of a series of CSS classes and media queries that help you lay out your page. Specifically, the grid system helps alleviate the pain points associated with the horizontal and vertical positioning of a page's contents and the structure of the page across multiple displays. With Bootstrap 4, the grid system has been completely overhauled, and a new grid tier has been added with a breakpoint of 480 px and below. We will be talking about tiers, breakpoints, and Bootstrap's grid system extensively in this book.

Bye-bye GLYPHICONS

Bootstrap 3 shipped with a nice collection of over 250 font icons, free of use. In an effort to make the framework more lightweight (and because font icons are considered bad practice), the GLYPHICON set is no longer available in Bootstrap 4.

Bigger text – No more panels, wells, and thumbnails

The default font size in Bootstrap 4 is 2 px bigger than in its predecessor, increasing from 14 px to 16 px. Furthermore, Bootstrap 4 replaced panels, wells, and thumbnails with a new concept—cards.
To readers unfamiliar with the concept of wells, a well is a UI component that allows developers to highlight text content by applying an inset shadow effect to the element to which it is applied. A panel, too, serves to highlight information, but by applying padding and rounded borders. Cards serve the same purpose as their predecessors, but are less restrictive, as they are flexible enough to support different types of content, such as images, lists, or text. They can also be customized to use footers and headers. Take a look at the following screenshot:

Figure 1.5: The Bootstrap 4 card component replaces existing wells, thumbnails, and panels.

New and improved form input controls

Bootstrap 4 introduces new form input controls—a color chooser, a date picker, and a time picker. In addition, new classes have been introduced, improving the existing form input controls. For example, Bootstrap 4 now allows for input control sizing, as well as classes for denoting block and inline level input controls. However, one of the most anticipated new additions is Bootstrap's input validation styles, which used to require third-party libraries or a manual implementation, but are now shipped with Bootstrap 4 (see Figure 1.6 below). Take a look at the following screenshot:

Figure 1.6: The new Bootstrap 4 input validation styles, indicating the successful processing of input.

Last but not least, Bootstrap 4 also offers custom forms in order to provide even more cross-browser UI consistency across input elements (Figure 1.7). As noted in the Bootstrap 4 Alpha 4 documentation, the input controls are:

"built on top of semantic and accessible markup, so they're solid replacements for any default form control" – Source: http://v4-alpha.getbootstrap.com/components/forms/

Take a look at the following screenshot:

Figure 1.7: Custom Bootstrap input controls that replace the browser defaults in order to ensure cross-browser UI consistency.

Customization

The developers behind Bootstrap 4 have put specific emphasis on customization throughout the development of Bootstrap 4. As such, many new variables have been introduced that allow for the easy customization of Bootstrap. Using the $enabled-* Sass variables, one can now enable or disable specific global CSS preferences.

Setting up our project

Now that we know what Bootstrap has to offer, let us set up our project:

1. Create a new project directory named MyPhoto. This will become our project root directory.

2. Create a blank index.html file and insert the following HTML code:

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta http-equiv="x-ua-compatible" content="ie=edge">
<title>MyPhoto</title>
</head>
<body>
<div class="alert alert-success">
Hello World!
</div>
</body>
</html>

Note the three meta tags: the first tag tells the browser that the document in question is utf-8 encoded. Since Bootstrap optimizes its content for mobile devices, the subsequent meta tag is required to help with viewport scaling. The last meta tag forces the document to be rendered using the latest document rendering mode available if viewed in Internet Explorer.

3. Open the index.html in your browser. You should see just a blank page with the words Hello World.

Now it is time to include Bootstrap. At its core, Bootstrap is a glorified CSS style sheet. Within that style sheet, Bootstrap exposes very powerful features of CSS with an easy-to-use syntax.
It being a style sheet, you include it in your project as you would any other style sheet that you might develop yourself. That is, open the index.html and directly link to the style sheet.

Viewport scaling

The term viewport refers to the available display size to render the contents of a page. The viewport meta tag allows you to define this available size. Viewport scaling using meta tags was first introduced by Apple and, at the time of writing, is supported by all major browsers. Using the width parameter, we can define the exact width of the user's viewport. For example, <meta name="viewport" content="width=320px"> will instruct the browser to set the viewport's width to 320 px. The ability to control the viewport's width is useful when developing mobile-friendly websites; by default, mobile browsers will attempt to fit the entire page onto their viewports by zooming out as far as possible. This allows users to view and interact with websites that have not been designed to be viewed on mobile devices. However, as Bootstrap embraces a mobile-first design philosophy, a zoom out will, in fact, result in undesired side-effects. For example, breakpoints will no longer work as intended, as they now deal with the zoomed-out equivalent of the page in question. This is why explicitly setting the viewport width is so important. By writing content="width=device-width, initial-scale=1, shrink-to-fit=no", we are telling the browser the following:

To set the viewport's width equal to whatever the actual device's screen width is.
We do not want any zoom, initially.
We do not wish to shrink the content to fit the viewport.

For now, we will use the Bootstrap builds hosted on Bootstrap's official Content Delivery Network (CDN). This is done by including the following HTML tag into the head of your HTML document (the head of your HTML document refers to the contents between the <head> opening tag and the </head> closing tag):

<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.4/css/bootstrap.min.css">

Bootstrap relies on jQuery, a JavaScript framework that provides a layer of abstraction in an effort to simplify the most common JavaScript operations (such as element selection and event handling). Therefore, before we include the Bootstrap JavaScript file, we must first include jQuery. Both inclusions should occur just before the </body> closing tag:

<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.4/js/bootstrap.min.js"></script>

Note that, while these scripts could, of course, be loaded at the top of the page, loading scripts at the end of the document is considered best practice to speed up page loading times and to avoid JavaScript issues preventing the page from being rendered. The reason behind this is that browsers do not download all dependencies in parallel (although a certain number of requests are made asynchronously, depending on the browser and the domain). Consequently, forcing the browser to download dependencies early on will block page rendering until these assets have been downloaded. Furthermore, ensuring that your scripts are loaded last will ensure that once you invoke Document Object Model (DOM) operations in your scripts, you can be sure that your page's elements have already been rendered. As a result, you can avoid checks that ensure the existence of given elements.

What is a Content Delivery Network?
The objective behind any Content Delivery Network (CDN) is to provide users with content that is highly available. This means that a CDN aims to provide you with content without this content ever (or rarely) becoming unavailable. To this end, the content is often hosted using a large, distributed set of servers. The BootstrapCDN basically allows you to link to the Bootstrap style sheet so that you do not have to host it yourself.

Save your changes and reload the index.html in your browser. The Hello World string should now contain a green background:

Figure 1.5: Our "Hello World" styled using Bootstrap 4.

Now that the Bootstrap framework has been included in our project, open your browser's developer console (if using Chrome on Microsoft Windows, press Ctrl + Shift + I; on Mac OS X, press cmd + alt + I). As Bootstrap requires another third-party library, Tether, for displaying popovers and tooltips, the developer console will display an error (Figure 1.6). Take a look at the following screenshot:

Figure 1.6: Chrome's Developer Tools can be opened by going to View, selecting Developer, and then clicking on Developer Tools. At the bottom of the page, a new view will appear. Under the Console tab, an error will indicate an unmet dependency.

Tether is available via the Cloudflare CDN, and consists of both a CSS file and a JavaScript file. As before, we should include the JavaScript file at the bottom of our document, while we reference Tether's style sheet from inside our document head:

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta http-equiv="x-ua-compatible" content="ie=edge">
<title>MyPhoto</title>
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.4/css/bootstrap.min.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/tether/1.3.1/css/tether.min.css">
</head>
<body>
<div class="alert alert-success">
Hello World!
</div>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/tether/1.3.1/js/tether.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.4/js/bootstrap.min.js"></script>
</body>
</html>

While CDNs are an important resource, there are several reasons why, at times, using a third-party CDN may not be desirable:

CDNs introduce an additional point of failure, as you now rely on third-party servers.
The privacy and security of users may be compromised, as there is no guarantee that the CDN provider does not inject malicious code into the libraries that are being hosted. Nor can one be certain that the CDN does not attempt to track its users.
Certain CDNs may be blocked by the Internet Service Providers of users in different geographical locations.
Offline development will not be possible when relying on a remote CDN.
You will not be able to optimize the files hosted by your CDN. This loss of control may affect your website's performance (although typically you are more often than not offered an optimized version of the library through the CDN).

Instead of relying on a CDN, we could manually download the jQuery, Tether, and Bootstrap project files. We could then copy these builds into our project root and link to the distribution files.
The disadvantage of this approach is the fact that maintaining a manual collection of dependencies can quickly become very cumbersome, and next to impossible as your website grows in size and complexity. As such, we will not manually download the Bootstrap build. Instead, we will let Bower do it for us. Bower is a package management system, that is, a tool that you can use to manage your website's dependencies. It automatically downloads, organizes, and (upon command) updates your website's dependencies. To install Bower, head over to http://bower.io/.

How do I install Bower?

Before you can install Bower, you will need two other tools: Node.js and Git. The latter is a version control tool—in essence, it allows you to manage different versions of your software. To install Git, head over to http://git-scm.com/ and select the installer appropriate for your operating system. NodeJS is a JavaScript runtime environment needed for Bower to run. To install it, simply download the installer from the official NodeJS website: https://nodejs.org/

Once you have successfully installed Git and NodeJS, you are ready to install Bower. Simply type the following command into your terminal:

npm install -g bower

This will install Bower for you, using the JavaScript package manager npm, which happens to be used by, and is installed with, NodeJS. Once Bower has been installed, open up your terminal, navigate to the project root folder you created earlier, and fetch the Bootstrap build:

bower install bootstrap#v4.0.0-alpha.4

This will create a new folder structure in our project root:

bower_components
  bootstrap
    Gruntfile.js
    LICENSE
    README.md
    bower.json
    dist
    fonts
    grunt
    js
    less
    package.js
    package.json

We will explain all of these various files and directories later on in this book. For now, you can safely ignore everything except for the dist directory inside bower_components/bootstrap/. Go ahead and open the dist directory. You should see three subdirectories:

css
fonts
js

The name dist stands for distribution. Typically, the distribution directory contains the production-ready code that users can deploy. As its name implies, the css directory inside dist includes the ready-for-use style sheets. Likewise, the js directory contains the JavaScript files that compose Bootstrap. Lastly, the fonts directory holds the font assets that come with Bootstrap. To reference the local Bootstrap CSS file in our index.html, modify the href attribute of the link tag that points to the bootstrap.min.css:

<link rel="stylesheet" href="bower_components/bootstrap/dist/css/bootstrap.min.css">

Let's do the same for the Bootstrap JavaScript file:

<script src="bower_components/bootstrap/dist/js/bootstrap.min.js"></script>

Repeat this process for both jQuery and Tether. To install jQuery using Bower, use the following command:

bower install jquery

Just as before, a new directory will be created inside the bower_components directory:

bower_components
  jquery
    AUTHORS.txt
    LICENSE.txt
    bower.json
    dist
    sizzle
    src

Again, we are only interested in the contents of the dist directory, which, among other files, will contain the compressed jQuery build jquery.min.js.
Reference this file by modifying the src attribute of the script tag that currently points to Google's jquery.min.js, replacing the URL with the path to our local copy of jQuery:

<script src="bower_components/jquery/dist/jquery.min.js"></script>

Last but not least, repeat the steps already outlined above for Tether:

bower install tether

Once the installation completes, a folder structure similar to the ones for Bootstrap and jQuery will have been created. Verify the contents of bower_components/tether/dist and replace the CDN Tether references in our document with their local equivalents. The final index.html should now look as follows:

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta http-equiv="x-ua-compatible" content="ie=edge">
<title>MyPhoto</title>
<link rel="stylesheet" href="bower_components/bootstrap/dist/css/bootstrap.min.css">
<link rel="stylesheet" href="bower_components/tether/dist/css/tether.min.css">
</head>
<body>
<div class="alert alert-success">
Hello World!
</div>
<script src="bower_components/jquery/dist/jquery.min.js"></script>
<script src="bower_components/tether/dist/js/tether.min.js"></script>
<script src="bower_components/bootstrap/dist/js/bootstrap.min.js"></script>
</body>
</html>

Refresh the index.html in your browser to make sure that everything works.

What IDE and browser should I be using when following the examples in this book?

While we recommend a JetBrains IDE or Sublime Text along with Google Chrome, you are free to use whatever tools and browser you like. Our taste in IDE and browser is subjective on this matter. However, keep in mind that Bootstrap 4 does not support Internet Explorer 8 or below. As such, if you do happen to use Internet Explorer 8, you should upgrade it to the latest version.

Summary

Aside from introducing you to our sample project MyPhoto, this article was concerned with outlining Bootstrap 4, highlighting its features, and discussing how this new version of Bootstrap differs from the last major release (Bootstrap 3). The article provided an overview of how Bootstrap can assist developers in the layout, structuring, and styling of pages. Furthermore, we noted how Bootstrap provides access to the most important and widely used user interface controls through the form of components that can be integrated into a page with minimal effort. By providing an outline of Bootstrap, we hope that the framework's intrinsic value in assisting in the development of modern websites has become apparent to the reader. Furthermore, during the course of the wider discussion, we highlighted and explained some important concepts in web development, such as typographic units of measurement, or the definition, purpose, and justification of the use of Content Delivery Networks. Last but not least, we detailed how to include Bootstrap and its dependencies inside an HTML document.

GopherCon 2019: Go 2 update, open-source Go library for GUI, support for WebAssembly, TinyGo for microcontrollers and more

Fatema Patrawala
30 Jul 2019
9 min read
Last week, Go programmers had a gala time learning, networking, and programming at the Marriott Marquis San Diego Marina as the most awaited event, GopherCon 2019, was held from 24th July to 27th July. GopherCon this year hit the road at San Diego with some exceptional conferences and many exciting announcements for more than 1,800 attendees from around the world. One of the attendees, Andrea Santillana Fernández, says the Go community is growing and doing quite well. She wrote on her blog post on the Sourcegraph website that there are 1 million Go programmers around the world, and month on month its membership keeps increasing. Indeed, there is significant growth in the Go community. So what did it have in store for the programmers at this year's GopherCon 2019?

On the road to Go 2

The major milestones for the journey to Go 2 were presented by Russ Cox on Wednesday last week. He explained the main areas of focus for Go 2, which are as below:

Error handling

Russ notes that writing a program correctly without errors is hard, but writing a program correctly while accounting for errors and external dependencies is much more difficult. He listed a few pain points that led to the introduction of error-handling helpers, such as an optional Unwrap interface, errors.Is, and errors.As, in Go 1.13.

Generics

Russ spoke about generics and said that the team started exploring a new design last year. They are working with programming language theory experts on the problem to help refine the proposal of generics code in Go. In a separate session, Ian Lance Taylor introduced generics code in Go. He briefly explained the need for, implementation of, and benefits from generics for the Go language. Next, Taylor reviewed the Go contract design draft, which included the addition of optional type parameters to types and functions. Taylor defined generic programming as programming that "enables the representation of functions and data structures in a generic form, with types factored out." Generic code is written using types that are specified later; an unspecified type is called a type parameter. A type parameter offers support only when permitted by contracts. Generic code provides a strong basis for sharing code and building programs. It can be compiled using an interface-based approach, which saves compile time, as the package is compiled only once; if a generic code is compiled multiple times, it can carry compile-time cost. Ian showed a few sample codes written with generics in Go.

Dependency management

In Go 2, the team wants to focus on dependency management and explicitly refer to dependencies, similar to Java. Russ explained this by giving a history of the tooling: in 2011, they introduced GOPATH to separate the distribution from the actual dependencies, so that users could run multiple different distributions and separate the concerns of the distribution from the external libraries. Then, in 2015, they introduced the go vendor spec to formalize the vendor directory and simplify dependency management implementations, but in practice it did not work well. In 2016, they formed the dependency working group. This team started work on dep: a tool to reshape all the existing tools into one. The problem with dep and the vendor directory was that multiple distinct incompatible versions of a dependency were represented by one import path. This is now called the "Import Compatibility Rule". The team took what worked well and learned from VGo. VGo provides package uniqueness without breaking builds.
VGo dictates different import paths for incompatible package versions. The team grouped similar packages and gave these groups a name: modules. The VGo system is now Go modules, and it integrates directly with the Go command. The challenge going forward is mostly around updating everything to use modules; everything needs to be updated to work with the new conventions to work well.

Tooling

Finally, as a result of all these changes, they distilled and refined the Go toolchain. One example of this is gopls, or "Go Please". Gopls aims to create a smoother, standard interface to integrate with all editors, IDEs, continuous integration, and others.

Simple, portable, and efficient graphical interfaces in Go

Elias Naur presented Gio, a new open-source Go library for writing immediate mode GUI programs that run on all the major platforms: Android, iOS/tvOS, macOS, Linux, and Windows. The talk covered Gio's unusual design and how it achieves simplicity, portability, and performance. Elias said, "I wanted to be able to write a GUI program in Go that I could implement only once and have it work on every platform. This, to me, is the most interesting feature of Gio."

https://twitter.com/rakyll/status/1154450455214190593

Elias also presented Scatter, which is a Gio program for end-to-end encrypted messaging over email. Other features of Gio include:

Immediate mode design
UI state owned by the program
Only depends on the lowest-level platform libraries
Minimal dependency tree to keep things as low level as possible
GPU-accelerated vector and text rendering
Super efficient: no garbage generated in drawing or layout code
Cross platform (macOS, Linux, Windows, Android, iOS, tvOS, WebAssembly)
Core is 100% Go, while OS-specific native interfaces are optional

Gopls, a new tool that serves as a backend for your Go editor

Rebecca Stambler mentioned in her presentation that the Go community has built many amazing tools to improve the Go developer experience. However, when a maintainer disappears or a new Go release wreaks havoc, the Go development experience becomes frustrating and complicated. To solve this issue, Rebecca revealed the details behind a new tool: gopls (pronounced "go please"). The tool is currently in development by the Go team and community, and it will ultimately serve as the backend for your Go editor. The below-listed functionalities are expected from gopls:

Show me errors, like unused variables or typos
Autocomplete would be nice
Function signature help, because we often forget
While we're at it, hover-accessible "tooltip" documentation in general
Help me jump to a variable that I need to see
An outline of the package structure

Get started with WebAssembly in Go

WebAssembly in Go is here and ready to try! Although the landscape is evolving quickly, the opportunity is huge. The ability to deliver truly portable system binaries could potentially replace JavaScript in the browser. WebAssembly has the potential to finally realize the goal of being platform agnostic without having to rely on a JVM. In his session, Johan Brandhorst introduced the technology, showed how to get started with WebAssembly and Go, and discussed what is possible today and what will be possible tomorrow. As of Go 1.13, there is experimental support for WebAssembly using the JavaScript interface, but as it is only experimental, using it in production is not recommended. Support for the WASI interface is not currently available but has been planned and may be available as early as Go 1.14.
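To give a concrete feel for the experimental js/wasm port discussed above, here is a minimal, hedged sketch (not code from the talk) of a Go program compiled to WebAssembly. It assumes you serve the output alongside the wasm_exec.js glue file that ships in the Go distribution (under misc/wasm) and a small HTML page that instantiates the module:

// main.go: a minimal program for the experimental js/wasm port.
package main

import "fmt"

func main() {
	// With the wasm_exec.js glue loaded, this prints to the browser's console.
	fmt.Println("Hello from Go, compiled to WebAssembly!")
}

Compile it by targeting the js/wasm pair:

GOOS=js GOARCH=wasm go build -o main.wasm main.go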
Better x86 assembly generation from Go

Michael McLoughlin, in his presentation, made the case for code generation techniques for writing x86 assembly from Go. Michael introduced assembly, assembly in Go, the use cases for when you would want to drop into assembly, and techniques for realizing speedups using assembly. He pointed out that, most of the time, pure Go will be enough for 97% of programs, but there are those 3% of cases where it is warranted, and the examples he brought up were crypto, syscalls, and scientific computing. Michael then introduced a package called avo, which makes high-performance Go assembly easier to write. He said that writing your assembly in Go will allow you to realize the benefits of a high-level language, such as code readability, the ability to create loops, variables, and functions, and parameterized code generation, all while still realizing the benefits of writing assembly. Michael concluded the talk with his ideas for the future of avo:

Use avo in projects, specifically in large crypto implementations
More architecture support
Possibly make avo an assembler itself (these kinds of techniques are used in JIT compilers)
avo-based libraries (avo/std/math/big, avo/std/crypto)

The audience appreciated this talk on Twitter.

https://twitter.com/darethas/status/1155336268076576768

The presentation slides for this are available on the blog.

Miniature version of Golang: TinyGo for microcontrollers

Ron Evans, creator of GoCV and GoBot and "technologist for hire", introduced TinyGo, which can run directly on microcontrollers like Arduino and more. TinyGo uses the LLVM compiler toolchain to create native code that can run directly even on the smallest of computing devices. Ron demonstrated how Go code can be run on embedded systems using TinyGo, a compiler intended for use in microcontrollers, WebAssembly (WASM), and command-line tools. Evans began his presentation by countering the idea that Go, while fast, produces executables too large to run on the smallest computers. While that may be true of the standard Go compiler, TinyGo produces much smaller outputs. For example:

"Hello World" program compiled using Go 1.12 => 1.1 MB
Same program compiled using TinyGo 0.7.0 => 12 KB

TinyGo currently lacks support for the full Go language and Go standard library. For example, TinyGo does not have support for the net package, although contributors have created implementations of interfaces that work with the WiFi chip built into Arduino chips. Support for Go routines is also limited, although simple programs usually work. Evans demonstrated that, despite some limitations, thanks to TinyGo, the Go language can still be run in embedded systems. Salvador Evans, son of Ron Evans, assisted him in this demonstration. At age 11, he has become the youngest GopherCon speaker so far.

https://twitter.com/erikstmartin/status/1155223328329625600

There were talks by other speakers on topics like improvements in VSCode for Golang, the first open source Golang interpreter with complete support of the language spec, the Athens Project (a proxy server in Go), and how mobile development works in Go.

https://twitter.com/ramyanexus/status/1155238591120805888
https://twitter.com/containous/status/1155191121938649091
https://twitter.com/hajimehoshi/status/1155184796386988035

Apart from these, there were a whole lot of other talks at GopherCon 2019.
There were live blogs posted by the attendees on various talks, and so far more than 25 such blogs have been posted on the Sourcegraph website.

Emmanuel Tsukerman on why a malware solution must include a machine learning component

Savia Lobo
30 Dec 2019
11 min read
Machine learning is indeed the tech of present times! Security is a growing concern for many organizations today, and machine learning is one of the solutions to deal with it. ML can help cybersecurity systems analyze patterns and learn from them to help prevent similar attacks and respond to changing behavior. To know more about machine learning and its applications in cybersecurity, we had a chat with Emmanuel Tsukerman, a cybersecurity data scientist and the author of the Machine Learning for Cybersecurity Cookbook. The book uses modern AI to create powerful cybersecurity solutions for malware, pentesting, social engineering, data privacy, and intrusion detection. In 2017, Tsukerman's anti-ransomware product was listed in the Top 10 ransomware products of 2018 by PC Magazine. In his interview, Emmanuel talked about how ML algorithms help in solving problems related to cybersecurity, and also gave a brief tour through a few chapters of his book. He also touched upon the rise of deepfakes and malware classifiers.

On using machine learning for cybersecurity

Using machine learning in cybersecurity scenarios enables systems to identify different types of attacks across security layers and helps them take a correct plan of action. Can you share some examples of the successful use of ML for cybersecurity you have seen recently?

A recent and interesting development in cybersecurity is that the bad guys have started to catch up with technology; in particular, they have started utilizing deepfake tech to commit crime; for example, they have used AI to imitate the voice of a CEO in order to defraud a company of $243,000. On the other hand, the use of ML in malware classifiers is rapidly becoming an industry standard, due to the incredible number of never-before-seen samples (over 15,000,000) that are generated each year.

On staying updated with developments in technology to defend against attacks

Machine learning technology is not only used by ethical actors, but also by cybercriminals, who use ML for ML-based intrusions. How can organizations counter such scenarios and ensure the safety of confidential organizational/personal data?

The main tools that organizations have at their disposal to defend against attacks are to stay current and to pentest. Staying current, of course, requires getting educated on the latest developments in technology and its applications. For example, it's important to know that hackers can now use AI-based voice imitation to impersonate anyone they would like. This knowledge should be propagated in the organization so that individuals aren't caught off guard. The other way to improve one's security is by performing regular pentests using the latest attack methodology, be it by attempting to avoid the organization's antivirus, sending phishing communications, or attempting to infiltrate the network. In all cases, it is important to utilize the most dangerous techniques, which are often ML-based.

On how ML algorithms and GANs help in solving cybersecurity problems

In your book, you have mentioned various algorithms such as clustering, gradient boosting, random forests, and XGBoost. How do these algorithms help in solving problems related to cybersecurity?

Unless a machine learning model is limited in some way (e.g., in computation, in time, or in training data), there are 5 types of algorithms that have historically performed best: neural networks, tree-based methods, clustering, anomaly detection, and reinforcement learning (RL).
These are not necessarily disjoint, as one can, for example, perform anomaly detection via neural networks. Nonetheless, to keep it simple, let's stick to these 5 classes.

Neural networks shine with large amounts of data on visual, auditory, or textual problems. For that reason, they are used in deepfakes and their detection, lie detection, and speech recognition. Many other applications exist as well. But one of the most interesting applications of neural networks (and deep learning) is in creating data via generative adversarial networks (GANs). GANs can be used to generate password guesses and evasive malware. For more details, I'll refer you to the Machine Learning for Cybersecurity Cookbook.

The next class of models that perform well are tree-based. These include random forests and gradient-boosted trees. These perform well on structured data with many features. For example, the PE header of PE files (including malware) can be featurized, yielding ~70 numerical features. It is convenient and effective to construct an XGBoost model (a gradient-boosting model) or a random forest model on this data, and the odds are good that performance will be unbeatable by other algorithms.

Next, there is clustering. Clustering shines when you would like to segment a population automatically. For example, you might have a large collection of malware samples, and you would like to classify them into families. Clustering is a natural choice for this problem.

Anomaly detection lets you fight off unseen and unknown threats. For instance, when a hacker utilizes a new tactic to intrude on your network, an anomaly detection algorithm can protect you even if this new tactic has not been documented.

Finally, RL algorithms perform well on dynamic problems. The situation can be, for example, a penetration test on a network. The DeepExploit framework, covered in the book, utilizes an RL agent on top of Metasploit to learn from prior pen tests and become better and better at finding vulnerabilities.

Generative adversarial networks (GANs) are a popular branch of ML used to train systems against counterfeit data. How can these help in malware detection and safeguarding systems to identify correct intrusion?

A good way to think about GANs is as a pair of neural networks, pitted against each other. The loss of one is the objective of the other. As the two networks are trained, each becomes better and better at its job. We can then take whichever side of the "tug of war" battle, separate it from its rival, and use it. In other cases, we might choose to "freeze" one of the networks, meaning that we do not train it, but only use it for scoring. In the case of malware, the book covers how to use MalGAN, which is a GAN for malware evasion. One network, the detector, is frozen. In this case, it is an implementation of MalConv. The other network, the adversarial network, is being trained to modify malware until the detection score of MalConv drops to zero. As it trains, it becomes better and better at this. In a practical situation, we would want to unfreeze both networks. Then we can take the trained detector and use it as part of our anti-malware solution. We would then be confident knowing that it is very good at detecting evasive malware. The same ideas can be applied in a range of cybersecurity contexts, such as intrusion and deepfakes.
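As a hedged illustration of the tree-based approach described above, the sketch below trains a scikit-learn random forest on a stand-in feature matrix shaped like featurized PE headers (~70 numerical features). The synthetic data is purely illustrative and not from the book:

# Hedged sketch: a random forest over PE-header-style features.
# The random arrays below stand in for real featurized PE headers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(seed=42)
X = rng.normal(size=(1000, 70))    # placeholder feature matrix (~70 PE-header features)
y = rng.integers(0, 2, size=1000)  # placeholder labels: 1 = malicious, 0 = benign

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print("Holdout accuracy:", accuracy_score(y_test, clf.predict(X_test)))

With real malware features in place of the random arrays, the same few lines give a strong baseline classifier of the kind Tsukerman describes.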
On how the Machine Learning for Cybersecurity Cookbook can help with easy implementation of ML for cybersecurity problems

What are some of the tools/recipes mentioned in your book that can help cybersecurity professionals easily implement machine learning and make it a part of their day-to-day activities?

The Machine Learning for Cybersecurity Cookbook offers an astounding 80+ recipes. The most applicable recipes will vary between individual professionals, and even for each individual, different recipes will be applicable at different times in their careers. For a cybersecurity professional beginning to work with malware, the fundamentals chapter, Chapter 2, ML-based Malware Detection, provides a solid and excellent start to creating a malware classifier. For more advanced malware analysts, Chapter 3, Advanced Malware Detection, offers more sophisticated and specialized techniques, such as dealing with obfuscation and script malware.

Every cybersecurity professional would benefit from getting a firm grasp of Chapter 4, ML for Social Engineering. In fact, anyone at all should have an understanding of how ML can be used to trick unsuspecting users, as part of their cybersecurity education. This chapter really shows that you have to be cautious, because machines are becoming better at imitating humans. On the other hand, ML also provides the tools to know when such an attack is being performed.

Chapter 5, Penetration Testing Using ML, is a technical chapter, and is most appropriate for cybersecurity professionals who are concerned with pen testing. It covers 10 ways in which pen testing can be improved by using ML, including neural network-assisted fuzzing and DeepExploit, a framework that utilizes a reinforcement learning (RL) agent on top of Metasploit to perform automatic pen testing.

Chapter 6, Automatic Intrusion Detection, has a wider appeal, as a lot of cybersecurity professionals have to know how to defend a network from intruders. They would benefit from seeing how to leverage ML to stop zero-day attacks on their network. In addition, the chapter covers many other use cases, such as spam filtering, botnet detection, and insider threat detection, which are more useful to some than to others.

Chapter 7, Securing and Attacking Data with ML, provides great content for cybersecurity professionals interested in utilizing ML for improving their password security and other forms of data security.

Chapter 8, Secure and Private AI, is invaluable to data scientists in the field of cybersecurity. Recipes in this chapter include federated learning and differential privacy (which allow training an ML model on clients' data without compromising their privacy) and testing adversarial robustness (which allows improving the robustness of ML models to adversarial attacks).

Your book talks about using machine learning to generate custom malware to pentest security. Can you elaborate on how this works and why this matters?

As a general rule, you want to find out your vulnerabilities before someone else does (who might be up to no good). For that reason, pen testing has always been an important step in providing security. To pentest your antivirus well, it is important to use the latest techniques in malware evasion, as the bad guys will certainly try them, and these are deep learning-based techniques for modifying malware.

On Emmanuel's personal achievements in the cybersecurity domain
Dr. Tsukerman, in 2017, your anti-ransomware product was listed in the 'Top 10 ransomware products of 2018' by PC Magazine. In your experience, why are ransomware attacks on the rise, and what makes an effective anti-ransomware product? Also, in 2018, you designed an ML-based, instant-verdict malware detection system for Palo Alto Networks' WildFire service of over 30,000 customers. Can you tell us more about this project?

If you monitor cybersecurity news, you will see that ransomware continues to be a huge threat. The reason is that ransomware offers cybercriminals an extremely attractive weapon. First, it is very difficult to trace the culprit from the malware or from the crypto wallet address. Second, the payoffs can be massive, be it from hitting the right target (e.g., a HIPAA-compliant healthcare organization) or a large number of targets (e.g., all traffic to an e-commerce web page). Third, ransomware is offered as a service, which effectively democratizes it! On the flip side, a lot of the risk of ransomware can be mitigated through common-sense tactics. First, backing up one's data. Second, having an anti-ransomware solution that provides guarantees. A generic antivirus can provide no guarantee - it either catches the ransomware or it doesn't. If it doesn't, your data is toast. However, certain anti-ransomware solutions, such as the one I have developed, do offer guarantees (e.g., no more than 0.1% of your files lost). Finally, since millions of new ransomware samples are developed each year, the malware solution must include a machine learning component to catch the zero-day samples, which is another component of the anti-ransomware solution I developed.

The project at Palo Alto Networks is a similar implementation of ML for malware detection. The one difference is that, unlike the anti-ransomware service, which is an endpoint security tool, it offers protection services from the cloud. Since Palo Alto Networks is a firewall-service provider, that makes a lot of sense: ideally, the malicious sample will be stopped at the firewall and never even reach the endpoint.

To learn how to implement the techniques discussed in this interview, grab your copy of the Machine Learning for Cybersecurity Cookbook. Don't wait - the bad guys aren't waiting.

Author Bio

Emmanuel Tsukerman graduated from Stanford University and obtained his Ph.D. from UC Berkeley. In 2017, Dr. Tsukerman's anti-ransomware product was listed in the Top 10 ransomware products of 2018 by PC Magazine. In 2018, he designed an ML-based, instant-verdict malware detection system for Palo Alto Networks' WildFire service of over 30,000 customers. In 2019, Dr. Tsukerman launched the first cybersecurity data science course.

About the book

Machine Learning for Cybersecurity Cookbook will guide you through constructing classifiers and features for malware, which you'll train and test on real samples. You will also learn to build self-learning, reliant systems to handle cybersecurity tasks such as identifying malicious URLs, spam email detection, intrusion detection, network protection, and tracking user and process behavior, and much more!

Mastering Ansible – Protecting Your Secrets with Ansible

Packt
14 Oct 2015
10 min read
In this article by Jesse Keating, author of the book Mastering Ansible, we will see how to encrypt data at rest using Ansible. Secrets are meant to stay secret. Whether they are login credentials to a cloud service or passwords to database resources, they are secret for a reason. Should they fall into the wrong hands, they can be used to discover trade secrets and private customer data, to create infrastructure for nefarious purposes, or worse. All of this could cost you or your organization a lot of time and money, and cause headaches! In this article, we cover how to keep your secrets safe with Ansible:

Encrypting data at rest
Protecting secrets while operating

(For more resources related to this topic, see here.)

Encrypting data at rest
As a configuration management system or an orchestration engine, Ansible has great power. In order to wield that power, it is necessary to entrust secret data to Ansible. An automated system that prompts the operator for passwords all the time is not very efficient. To maximize the power of Ansible, secret data has to be written to a file that Ansible can read and utilize the data from within. This creates a risk, though: your secrets are sitting there on your filesystem in plain text. This is a physical and digital risk. Physically, the computer could be taken from you and pawed through for secret data. Digitally, any malicious software that can break the boundaries set upon it could read any data your user account has access to. If you utilize a source control system, the infrastructure that houses the repository is just as much at risk.

Thankfully, Ansible provides a facility to protect your data at rest. That facility is Vault, which allows for encrypting text files so that they are stored "at rest" in encrypted format. Without the key, or a significant amount of computing power, the data is indecipherable. The key lessons to learn while dealing with encrypting data at rest are:

Valid encryption targets
Creating new encrypted files
Encrypting existing unencrypted files
Editing encrypted files
Changing the encryption password on files
Decrypting encrypted files
Running the ansible-playbook command to reference encrypted files

Things Vault can encrypt
Vault can be used to encrypt any structured data file used by Ansible. This is essentially any YAML (or JSON) file that Ansible uses during its operation. This can include:

group_vars/ files
host_vars/ files
include_vars targets
vars_files targets
--extra-vars targets
role variables
Role defaults
Task files
Handler files

If the file can be expressed in YAML and read by Ansible, it is a valid file to encrypt with Vault. Because the entire file will be unreadable at rest, care should be taken not to be overzealous in picking which files to encrypt. Any source control operations with the files will be done with the encrypted content, making it very difficult to peer review. As a best practice, the smallest amount of data possible should be encrypted; this may even mean moving some variables into a file all by themselves.

Creating new encrypted files
To create new files, Ansible provides a new program, ansible-vault. This program is used to create and interact with Vault-encrypted files. The subroutine to create encrypted files is the create subroutine. Let's have a look at the following screenshot:

To create a new file, you'll need to know two things ahead of time: the password Vault should use to encrypt the file, and the file name.
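As a minimal sketch of the invocation (the file name here is purely illustrative):

ansible-vault create a_secrets_file.yaml
# prompts for and confirms the Vault password, then opens the editor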
Once provided with this information, ansible-vault will launch a text editor (whichever editor is defined in the environment variable EDITOR). Once you save the file and exit the editor, ansible-vault will use the supplied password as a key to encrypt the file with the AES256 cipher. All Vault-encrypted files referenced by a playbook need to be encrypted with the same key, or ansible-playbook will be unable to read them. The ansible-vault program will prompt for a password unless the path to a password file is provided as an argument. The password file can either be a plain text file with the password stored as a single line, or it can be an executable file that outputs the password as a single line to standard out. Let's walk through a few examples of creating encrypted files: first, we'll create one and be prompted for a password; then we will provide a password file; and finally we'll create an executable to deliver the password.

The password prompt
Before opening the editor, ansible-vault asks for the passphrase, as shown in the following screenshot:

Once the passphrase is confirmed, our editor opens and we're able to put content into the file:

On my system, the configured editor is vim. Your system may be different, and you may need to set your preferred editor as the value of the EDITOR environment variable. Now we save the file. If we try to read the contents, we'll see that they are in fact encrypted, with a small header hint for Ansible to use later:

The password file
In order to use ansible-vault with a password file, we first need to create the password file. Simply echoing a password into a file can do this. Then we can reference this file while calling ansible-vault to create another encrypted file:

Just as with being prompted for a password, the editor will open and we can write out our data.

The password script
This last example uses a password script. This is useful for designing a system in which a password can be stored in a central system for storing credentials and shared with contributors to the playbook tree. Each contributor could have their own password to the shared credentials store, from where the Vault password can be retrieved. Our example will be far simpler: just simple output to standard out with a password. This file will be saved as password.sh. The file needs to be marked as executable for Ansible to treat it as such. Let's have a look at the following screenshot:

Encrypting existing files
The previous examples all dealt with creating new encrypted files using the create subroutine. But what if we want to take an established file and encrypt it? A subroutine exists for this as well. It is named encrypt and is outlined in the following screenshot:

As with create, encrypt expects a password (or password file) and the path to a file. In this case, however, the file must already exist. Let's demonstrate this by encrypting an existing file, a_vars_file.yaml:

We can see the file contents before and after the call to encrypt. After the call, the contents are indeed encrypted. Unlike the create subroutine, encrypt can operate on multiple files, making it easy to protect all the important data in one action; simply list all the files to be encrypted, separated by spaces. Attempting to encrypt already encrypted files will result in an error.
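For reference, the password file and password script flows look roughly like this on the command line (the password and file names are illustrative):

echo "my-vault-password" > password_file
ansible-vault create --vault-password-file password_file even_more_secrets.yaml

printf '#!/bin/sh\necho my-vault-password\n' > password.sh
chmod +x password.sh
ansible-vault encrypt --vault-password-file password.sh a_vars_file.yaml

Once a file is encrypted, its first line is a plain-text marker (in the Ansible versions current at the time of writing, $ANSIBLE_VAULT;1.1;AES256); this is the small header hint mentioned above.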
Editing encrypted files
Once a file has been encrypted with ansible-vault, it cannot be edited directly. Opening the file in an editor would show only the encrypted data, and making any changes to it would damage the file, leaving Ansible unable to read the contents correctly. We need a subroutine that will first decrypt the contents of the file, allow us to edit those contents, and then encrypt the new contents before saving them back to the file. Such a subroutine exists and is called edit. Here is a screenshot showing the available switches:

All our familiar options are back: an optional password file/script and the file to edit. If we edit the file we just encrypted, we'll notice that ansible-vault opens our editor with a temporary file as the file path:

The editor will save this, and ansible-vault will then encrypt it and move it to replace the original file, as shown in the following screenshot:

Password rotation for encrypted files
Over time, as contributors come and go, it is a good idea to rotate the password used to encrypt your secrets. Encryption is only as good as the protection of the password. ansible-vault provides a subroutine, named rekey, that allows us to change the password, as shown here:

The rekey subroutine operates much like the edit subroutine. It takes in an optional password file/script and one or more files to rekey. Note that while you can supply a file/script for decryption of the existing files, you cannot supply one for the new passphrase; you will be prompted to input it. Let's rekey our even_more_secrets.yaml file:

Remember that all the encrypted files need to have a matching key. Be sure to rekey all the files at the same time.

Decrypting encrypted files
If, at some point, the need to encrypt data files goes away, ansible-vault provides a subroutine that can be used to remove encryption from one or more encrypted files. This subroutine is (unsurprisingly) named decrypt, as shown here:

Once again, we have an optional argument for a password file/script and then one or more file paths to decrypt. Let's decrypt the file we created earlier using our password file:

Executing ansible-playbook with Vault-encrypted files
To make use of our encrypted content, we need to be able to inform ansible-playbook how to access any encrypted data it might encounter. Unlike ansible-vault, which exists solely to deal with file encryption and decryption, ansible-playbook is more general purpose and will not assume it is dealing with encrypted data by default. There are two ways to indicate that encrypted data may be encountered. The first is the argument --ask-vault-pass, which will prompt for the vault password required to unlock any encountered encrypted files at the very beginning of a playbook execution. Ansible will hold this provided password in memory for the duration of the playbook execution. The second method is to reference a password file or script via the familiar --vault-password-file argument. Let's create a simple playbook named show_me.yaml that will print out the value of the variable inside a_vars_file.yaml, which we encrypted in a previous example:

---
- name: show me an encrypted var
  hosts: localhost
  gather_facts: false

  vars_files:
    - a_vars_file.yaml

  tasks:
    - name: print the variable
      debug:
        var: something

The output is as follows:
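Either invocation below should then work, reusing the illustrative password file from earlier:

ansible-playbook --ask-vault-pass show_me.yaml
ansible-playbook --vault-password-file password_file show_me.yaml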
Summary
In this article, we learned that Ansible can deal with sensitive data, and that it is important to understand how this data is stored at rest and how it is treated when utilized. With a little care and attention, Ansible can keep your secrets secret. Encrypting secrets with ansible-vault can protect them while dormant on your filesystem or in a shared source control repository. Preventing Ansible from logging task data can protect against leaking data to remote log files or on-screen displays.

Terrifyingly realistic Deepfake video of Bill Hader transforming into Tom Cruise is going viral on YouTube

Sugandha Lahoti
14 Aug 2019
4 min read
Deepfakes are becoming scarily, indistinguishably real. A YouTube clip of Bill Hader in conversation with David Letterman on his late-night show in 2008 is going viral, in which Hader's face subtly shifts to Tom Cruise's as Hader does his impression. The viral deepfake clip has been viewed over 3 million times and was uploaded by Ctrl Shift Face (a Slovakian citizen who goes by the name of Tom), who has created other entertaining videos using deepfake technology. For the unaware, a deepfake uses artificial intelligence and deep neural networks to alter audio or video and pass it off as true or original content.

https://www.youtube.com/watch?v=VWrhRBb-1Ig

Deepfakes are problematic because they make it hard to differentiate between fake and real videos or images. This gives people the liberty to use deepfakes to promote harassment and illegal activities. The most common uses of deepfakes are found in revenge porn, political abuse, and fake celebrity videos such as this one. The top comments on the video clip express the dangers of realistic AI manipulation:

“The fade between faces is absolutely unnoticeable and it's flipping creepy. Nice job!”
“I'm always amazed with new technology, but this is scary.”
“Ok, so video evidence in a court of law just lost all credibility”

https://twitter.com/TheMuleFactor/status/1160925752004624387

Deepfakes can also be used as a weapon of misinformation, since they can be used to maliciously hoax governments and populations and cause internal conflict. Gavin Sheridan, CEO of Vizlegal, also tweeted the clip: “Imagine when this is all properly weaponized on top of already fractured and extreme online ecosystems and people stop believing their eyes and ears.”

He also talked about the future impact. “True videos will be called fake videos, fake videos will be called true videos. People steered towards calling news outlets "fake", will stop believing their own eyes. People who want to believe their own version of reality will have all the videos they need to support it,” he tweeted.

He also asked whether we would need A-list movie actors at all in the future, and whether we could simply choose which actor plays which role. His tweet reads, “Will we need A-list actors in the future when we could just superimpose their faces onto the faces of other actors? Would we know the difference? And could we not choose at the start of a movie which actors we want to play which roles?”

The past year has seen accelerated growth in the use of deepfakes. In June, a fake video of Mark Zuckerberg was posted on Instagram under the username bill_posters_uk. In the video, Zuckerberg appears to give a threatening speech about the power of Facebook. Facebook had received strong criticism for promoting fake videos on its platform when, in May, the company refused to remove a doctored video of senior politician Nancy Pelosi. Samsung researchers also released a deepfake that could animate faces with just your voice and a picture, using temporal GANs. Following this, the House Intelligence Committee held a hearing to examine the public risks posed by “deepfake” videos.

Tom, the creator of the viral video, told The Guardian that he doesn't see deepfake videos as the end of the world, and hopes his deepfakes will raise public awareness of the technology's potential for misuse. “It's an arms race; someone is creating deepfakes, someone else is working on other technologies that can detect deepfakes. I don't really see it as the end of the world like most people do. People need to learn to be more critical.
The general public are aware that photos could be Photoshopped, but they have no idea that this could be done with video.”

Ctrl Shift Face is also on Patreon, offering bonus materials, behind-the-scenes footage, deleted scenes, and early access to videos for those who provide him monetary support.

Jailbreaking the iPad - in Ubuntu

Packt
20 Jul 2010
3 min read
(For more resources on Ubuntu, see here.)

What is jailbreaking?
Jailbreaking an iPhone or iPad allows you to run unsigned code by unlocking the root account on the device. Simply put, this allows you to install any software you like, without the restriction of having to go through the main Apple app store. Remember, jailbreaking is not SIM unlocking. Jailbreaking voids the Apple-supplied warranty.

What does this mean for developers?
The mass availability of jailbreaking for these devices allows developers to write apps without having to shell out Apple's developer fees. Previously a one-off payment of $300 US, an "official" developer must now pay $100 US each year to keep the right to develop applications.

What jailbreaks are available?
Arguably the most advanced jailbreak available now is called Spirit. Unlike a few others, which can now hack iOS 4.0, Spirit differs in a few key features. Not only is Spirit the first jailbreak able to handle the iPad, but it is also "untethered": you won't have to plug the device into a computer on every boot to keep it jailbroken. Support for jailbreaking iOS 4.0 is coming soon for Spirit. There are tutorials on jailbreaking using Spirit, like this one, but they generally skip over the fact that there's a Linux version, and only talk about Windows and/or OS X.

Jailbreaking the iPad
Jailbreaking the iPad is a very simple process, and you can do it very quickly thanks to excellent device support and drivers in Ubuntu. Please note that from now on, you should only plug the device into iTunes 9 releases earlier than 9.2, or better still, just use Rhythmbox or gtkpod to manage your library.

Install git if you haven't already got it:

sudo apt-get install git

Clone the Spirit repository:

git clone http://github.com/posixninja/spirit-linux.git

Install the dev package for libimobiledevice:

sudo apt-get install libimobiledevice-dev

Enter the Spirit directory and build the program:

cd spirit-linux
make

I've noticed that though Ubuntu has excellent Apple device support, and you can mount these devices just fine, the jailbreak program won't detect the device without iFuse. Install this first:

sudo apt-get install ifuse

Now for the fun! Plug in your iPad (you'll see it under the Places menu) and run the jailbreak:

./spirit

You'll see output similar to this:

INFO: Retriving device list
INFO: Opening device
INFO: Creating lockdownd client
INFO: Starting AFC service
INFO: Sending files via AFC.
INFO: Found version iPad1,1_3.2
INFO: Read igor/map.plist
INFO: Sending "install"
INFO: Sending "one.dylib"
INFO: Sending "freeze.tar.xz"
INFO: Sending "bg.jpg"
INFO: Sending files complete
INFO: Creating lockdownd client
INFO: Starting MobileBackup service
INFO: Beginning restore process
INFO: Read resources/overrides.plist
DEBUG: add_file
DEBUG: Data size 922:
DEBUG: add_file
DEBUG: Data size 0:
DEBUG: start_restore
DEBUG: Sending file
DEBUG: Sending file
INFO: Completed restore
INFO: Completed successfully

The device will reboot, and if all went well, you'll see a new app called Cydia on the home screen. This is the app that allows you to install other apps. Open Cydia. Cydia will ask you to choose what kind of user you are. There's no harm in choosing Developer; you'll just see more information. Also, if you choose the bottom level (User), console packages like OpenSSH will be hidden from you. You'll also receive some updates; install them. Interestingly, Cydia uses the deb package format, just like Ubuntu. That's it! Wasn't that quick?

Auditing Mobile Applications

Packt
08 Jul 2016
48 min read
In this article by Prashant Verma and Akshay Dikshit, authors of the book Mobile Device Exploitation Cookbook, we will cover the following topics:

Auditing Android apps using static analysis
Auditing Android apps using a dynamic analyzer
Using Drozer to find vulnerabilities in Android applications
Auditing iOS application using static analysis
Auditing iOS application using a dynamic analyzer
Examining iOS App Data storage and Keychain security vulnerabilities
Finding vulnerabilities in WAP-based mobile apps
Finding client-side injection
Insecure encryption in mobile apps
Discovering data leakage sources
Other application-based attacks in mobile devices
Launching intent injection in Android

(For more resources related to this topic, see here.)

Mobile applications, like web applications, may have vulnerabilities. These vulnerabilities are in most cases the result of bad programming practices or insecure coding techniques, or of purposefully injected bad code. For users and organizations, it is important to know how vulnerable their applications are. Should they fix the vulnerabilities or keep/stop using the applications? To address this dilemma, mobile applications need to be audited with the goal of uncovering vulnerabilities. Mobile applications (Android, iOS, or other platforms) can be analyzed using static or dynamic techniques. Static analysis is conducted by employing certain text- or string-based searches across decompiled source code. Dynamic analysis is conducted at runtime, and vulnerabilities are uncovered in a simulated fashion. Dynamic analysis is difficult compared to static analysis. In this article, we will employ both static and dynamic analysis to audit Android and iOS applications. We will also learn various other audit techniques, including Drozer framework usage, WAP-based application audits, and typical mobile-specific vulnerability discovery.

Auditing Android apps using static analysis
Static analysis is the most commonly and easily applied analysis method in source code audits. Static, by definition, means something that is constant. Static analysis is conducted on static code, that is, on raw or decompiled source code or on the compiled (object) code, but the analysis is conducted without the runtime. In most cases, static analysis becomes code analysis via static string searches. A very common scenario is to figure out vulnerable or insecure code patterns and find the same in the entire application code.

Getting ready
For conducting static analysis of Android applications, we need at least one Android application and a static code scanner. Pick up any Android application of your choice and use any static analyzer tool of your choice. In this recipe, we use Insecure Bank, which is a vulnerable Android application for Android security enthusiasts. We will also use ScriptDroid, which is a static analysis script. Both Insecure Bank and ScriptDroid are coded by Android security researcher Dinesh Shetty.

How to do it...
Perform the following steps:

Download the latest version of the Insecure Bank application from GitHub. Decompress or unzip the .apk file and note the path of the unzipped application.
Create a ScriptDroid.bat file by using the following code:

@ECHO OFF
SET /P Filelocation=Please Enter Location:
mkdir %Filelocation%OUTPUT

:: Code to check for presence of Comments
grep -H -i -n -e "//" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_comment.txt"
type -H -i "%Filelocation%*.java" | gawk "/\/\*/,/\*\//" >> "%Filelocation%OUTPUT\MultilineComments.txt"
grep -H -i -n -v "TODO" "%Filelocation%OUTPUT\Temp_comment.txt" >> "%Filelocation%OUTPUT\SinglelineComments.txt"
del %Filelocation%OUTPUT\Temp_comment.txt

:: Code to check for insecure usage of SharedPreferences
grep -H -i -n -C2 -e "putString" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\verify_sharedpreferences.txt"
grep -H -i -n -C2 -e "MODE_PRIVATE" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Modeprivate.txt"
grep -H -i -n -C2 -e "MODE_WORLD_READABLE" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Worldreadable.txt"
grep -H -i -n -C2 -e "MODE_WORLD_WRITEABLE" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Worldwritable.txt"
grep -H -i -n -C2 -e "addPreferencesFromResource" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\verify_sharedpreferences.txt"

:: Code to check for possible TapJacking attack
grep -H -i -n -e filterTouchesWhenObscured="true" "%Filelocation%..\..\..\..\res\layout\*.xml" >> "%Filelocation%OUTPUT\Temp_tapjacking.txt"
grep -H -i -n -e "<Button" "%Filelocation%..\..\..\..\res\layout\*.xml" >> "%Filelocation%OUTPUT\tapjackings.txt"
grep -H -i -n -v filterTouchesWhenObscured="true" "%Filelocation%OUTPUT\tapjackings.txt" >> "%Filelocation%OUTPUT\Temp_tapjacking.txt"
del %Filelocation%OUTPUT\Temp_tapjacking.txt

:: Code to check usage of external storage card for storing information
grep -H -i -n -e "WRITE_EXTERNAL_STORAGE" "%Filelocation%..\..\..\..\AndroidManifest.xml" >> "%Filelocation%OUTPUT\SdcardStorage.txt"
grep -H -i -n -e "getExternalStorageDirectory()" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\SdcardStorage.txt"
grep -H -i -n -e "sdcard" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\SdcardStorage.txt"

:: Code to check for possible scripting javascript injection
grep -H -i -n -e "addJavascriptInterface()" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_probableXss.txt"
grep -H -i -n -e "setJavaScriptEnabled(true)" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_probableXss.txt"
grep -H -i -n -v "import" "%Filelocation%OUTPUT\Temp_probableXss.txt" >> "%Filelocation%OUTPUT\probableXss.txt"
del %Filelocation%OUTPUT\Temp_probableXss.txt

:: Code to check for presence of possible weak algorithms
grep -H -i -n -e "MD5" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_weakencryption.txt"
grep -H -i -n -e "base64" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_weakencryption.txt"
grep -H -i -n -e "des" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_weakencryption.txt"
grep -H -i -n -v "import" "%Filelocation%OUTPUT\Temp_weakencryption.txt" >> "%Filelocation%OUTPUT\Weakencryption.txt"
del %Filelocation%OUTPUT\Temp_weakencryption.txt

:: Code to check for weak transportation medium
grep -H -i -n -C3 "http://" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_overhttp.txt"
grep -H -i -n -C3 -e "HttpURLConnection" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_overhttp.txt"
grep -H -i -n -C3 -e "URLConnection" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_OtherUrlConnection.txt"
grep -H -i -n -C3 -e "URL" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_OtherUrlConnection.txt"
grep -H -i -n -e "TrustAllSSLSocket-Factory" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\BypassSSLvalidations.txt"
grep -H -i -n -e "AllTrustSSLSocketFactory" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\BypassSSLvalidations.txt"
grep -H -i -n -e "NonValidatingSSLSocketFactory" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\BypassSSLvalidations.txt"
grep -H -i -n -v "import" "%Filelocation%OUTPUT\Temp_OtherUrlConnection.txt" >> "%Filelocation%OUTPUT\OtherUrlConnections.txt"
del %Filelocation%OUTPUT\Temp_OtherUrlConnection.txt
grep -H -i -n -v "import" "%Filelocation%OUTPUT\Temp_overhttp.txt" >> "%Filelocation%OUTPUT\UnencryptedTransport.txt"
del %Filelocation%OUTPUT\Temp_overhttp.txt

:: Code to check for Autocomplete ON
grep -H -i -n -e "<Input" "%Filelocation%..\..\..\..\res\layout\*.xml" >> "%Filelocation%OUTPUT\Temp_autocomp.txt"
grep -H -i -n -v "textNoSuggestions" "%Filelocation%OUTPUT\Temp_autocomp.txt" >> "%Filelocation%OUTPUT\AutocompleteOn.txt"
del %Filelocation%OUTPUT\Temp_autocomp.txt

:: Code to check presence of possible SQL Content
grep -H -i -n -e "rawQuery" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "compileStatement" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "db" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "sqlite" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "database" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "insert" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "delete" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "select" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "table" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "cursor" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -v "import" "%Filelocation%OUTPUT\Temp_sqlcontent.txt" >> "%Filelocation%OUTPUT\Sqlcontents.txt"
del %Filelocation%OUTPUT\Temp_sqlcontent.txt

:: Code to check for Logging mechanism
grep -H -i -n -F "Log." "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Logging.txt"

:: Code to check for Information in Toast messages
grep -H -i -n -e "Toast.makeText" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_Toast.txt"
grep -H -i -n -v "//" "%Filelocation%OUTPUT\Temp_Toast.txt" >> "%Filelocation%OUTPUT\Toast_content.txt"
del %Filelocation%OUTPUT\Temp_Toast.txt

:: Code to check for Debugging status
grep -H -i -n -e "android:debuggable" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\DebuggingAllowed.txt"

:: Code to check for presence of Device Identifiers
grep -H -i -n -e "uid|user-id|imei|deviceId|deviceSerialNumber|devicePrint|X-DSN|phone|mdn|did|IMSI|uuid" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_Identifiers.txt"
grep -H -i -n -v "//" "%Filelocation%OUTPUT\Temp_Identifiers.txt" >> "%Filelocation%OUTPUT\Device_Identifier.txt"
del %Filelocation%OUTPUT\Temp_Identifiers.txt

:: Code to check for presence of Location Info
grep -H -i -n -e "getLastKnownLocation()|requestLocationUpdates()|getLatitude()|getLongitude()|LOCATION" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\LocationInfo.txt"

:: Code to check for possible Intent Injection
grep -H -i -n -C3 -e "Action.getIntent(" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\IntentValidation.txt"
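Incidentally, each check is just a grep invocation, so individual checks are easy to reproduce on Linux or macOS as well. For example, a rough equivalent of the weak-algorithm check might look like this (source and output paths are illustrative):

mkdir -p OUTPUT
grep -H -i -n -E "MD5|base64|des" path/to/app/src/*.java | grep -i -v "import" > OUTPUT/Weakencryption.txt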
How it works...
Go to the command prompt and navigate to the path where ScriptDroid is placed. Run the .bat file; it prompts you to input the path of the application on which you wish to perform static analysis. In our case, we provide it with the path of the Insecure Bank application, precisely the path where the Java files are stored. If everything worked correctly, the screen should look like the following:

The script generates a folder by the name OUTPUT in the path where the Java files of the application are present. The OUTPUT folder contains multiple text files, each one corresponding to a particular vulnerability. The individual text files pinpoint the location of vulnerable code pertaining to the vulnerability under discussion. The combination of ScriptDroid and Insecure Bank gives a very nice view of various Android vulnerabilities; usually the same is not possible with live apps. Consider the following points, for instance:

Weakencryption.txt lists the instances of Base64 encoding used for passwords in the Insecure Bank application
Logging.txt contains the list of insecure log functions used in the application
SdcardStorage.txt contains the code snippets pertaining to data storage on SD cards

Details like these from static analysis are eye-openers in letting us know the vulnerabilities in our application, without even running the application.

There's more...
The current recipe used just ScriptDroid, but there are many other options available. You can either choose to write your own script or use one of the free or commercial tools. A few commercial tools have pioneered the static analysis approach over the years via their dedicated focus.

See also
https://github.com/dineshshetty/Android-InsecureBankv2
Auditing iOS application using static analysis

Auditing Android apps using a dynamic analyzer
Dynamic analysis is another technique applied in source code audits. Dynamic analysis is conducted at runtime: the application is run or simulated, and the flaws or vulnerabilities are discovered while the application is running. Dynamic analysis can be tricky, especially in the case of mobile platforms. As opposed to static analysis, there are certain requirements in dynamic analysis, such as the analyzer environment needing to be a runtime or a simulation of the real runtime. Dynamic analysis can be employed to find vulnerabilities in Android applications which are difficult to find via static analysis. A static analysis may let you know that a password is going to be stored, but dynamic analysis reads the memory and reveals the password stored at runtime. Dynamic analysis can be helpful in tampering with data in transmission during runtime, for example, tampering with the amount in a transaction request being sent to a payment gateway. Some Android applications employ obfuscation to prevent attackers reading the code; dynamic analysis changes the whole game in such cases, by revealing the hardcoded data being sent out in requests, which is otherwise not readable in static analysis.

Getting ready
For conducting dynamic analysis of Android applications, we need at least one Android application and a dynamic code analyzer tool. Pick up any Android application of your choice and use any dynamic analyzer tool of your choice. Dynamic analyzer tools can be classified under two categories:

Tools which run from computers and connect to an Android device or emulator (to conduct dynamic analysis)
Tools that can run on the Android device itself

For this recipe, we choose a tool belonging to the latter category.

How to do it...
Perform the following steps for conducting dynamic analysis:

Have an Android device with the applications (to be analyzed dynamically) installed.
Go to the Play Store and download Andrubis. Andrubis is a tool from iSecLabs which runs on Android devices and conducts static, dynamic, and URL analysis on the installed applications. We will use it for dynamic analysis only in this recipe. Open the Andrubis application on your Android device. It displays the applications installed on the Android device and analyzes them.

How it works...
Open the analysis of the application of your interest. Andrubis computes an overall malice score (out of 10) for each application and shows a colored icon on its main screen to flag potentially vulnerable applications. We selected an orange-colored application to make more sense with this recipe. This is how the application summary and score are shown in Andrubis:

Let us navigate to the Dynamic Analysis tab and check the results:

The results are interesting for this application. Notice that all the files to be written by the application under dynamic analysis are listed. In our case, one preferences.xml is located. Though the fact that the application is going to create a preferences file could have been found in static analysis as well, dynamic analysis additionally confirmed that such a file is indeed created. It also confirms that the code snippet found in static analysis about the creation of a preferences file is not dormant code, but a file that really gets created. Further, go ahead and read the created file and find any sensitive data present there. Who knows, luck may strike and give you a key to hidden treasure. Notice that the first screen has a hyperlink, View full report in browser. Tap on it and notice that the detailed dynamic analysis is presented for your further analysis. This also lets you understand what the tool tried and what response it got. This is shown in the following screenshot:

There's more...
The current recipe used a dynamic analyzer belonging to the latter category. There are many other tools available in the former category. Since this is the Android platform, many of them are open source tools. DroidBox can be tried for dynamic analysis. It looks for file operations (read/write), network data traffic, SMS, permissions, broadcast receivers, and so on, among other checks. Hooker is another tool that can intercept and modify API calls initiated by the application. This is very useful in dynamic analysis. Try hooking and tampering with data in API calls.

See also
https://play.google.com/store/apps/details?id=org.iseclab.andrubis
https://code.google.com/p/droidbox/
https://github.com/AndroidHooker/hooker

Using Drozer to find vulnerabilities in Android applications
Drozer is a mobile security audit and attack framework, maintained by MWR InfoSecurity. It is a must-have tool in the tester's armory. Drozer (an Android installed application) interacts with other Android applications via IPC (Inter-Process Communication). It allows fingerprinting of application package-related information and its attack surface, and attempts to exploit those. Drozer is an attack framework, and advanced-level exploits can be conducted from it. We use Drozer to find vulnerabilities in our applications.

Getting ready
Install Drozer by downloading it from https://www.mwrinfosecurity.com/products/drozer/ and follow the installation instructions mentioned in the user guide. Install the Drozer console agent and start a session as mentioned in the User Guide. If your installation is correct, you should get the Drozer command prompt (dz>).
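Under the hood this is an ADB port forward plus a console connection; the standard setup from the user guide looks like this (31415 is the agent's default port):

adb forward tcp:31415 tcp:31415
drozer console connect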
You should also have a few vulnerable applications to analyze. Here we chose the OWASP GoatDroid application.

How to do it...
Every pentest starts with fingerprinting. Let us use Drozer for the same. The Drozer User Guide is very helpful for referring to the commands. The following command can be used to obtain information about an Android application package:

run app.package.info -a <package name>

We used the same to extract the information from the GoatDroid application and found the following results:

Notice that apart from the general information about the application, User Permissions are also listed by Drozer. Further, let us analyze the attack surface. Drozer's attack surface lists the exposed activities, broadcast receivers, content providers, and services. The unintentionally exposed ones may be a critical security risk and may provide you access to privileged content. Drozer has the following command to analyze the attack surface:

run app.package.attacksurface <package name>

We used the same to obtain the attack surface of the Herd Financial application of GoatDroid, and the results can be seen in the following screenshot. Notice that one Activity and one Content Provider are exposed. We chose to attack the content provider to obtain the data stored locally. We used the following Drozer command to analyze the content provider of the same application:

run app.provider.info -a <package name>

This gave us the details of the exposed content provider, which we used in another Drozer command:

run scanner.provider.finduris -a <package name>

We could successfully query the content providers. Lastly, we would be interested in stealing the data stored by this content provider. This is possible via another Drozer command:

run app.provider.query content://<content provider details>/

The entire sequence of events is shown in the following screenshot:

How it works...
ADB is used to establish a connection between the Drozer Python server (present on the computer) and the Drozer agent (an .apk file installed on the emulator or Android device). The Drozer console is initialized to run the various commands we saw. The Drozer agent utilizes the Android OS feature of IPC to take over the role of the target application and run the various commands as the original application.
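Put together, a typical console session follows this shape (the package name here is illustrative; substitute the identifier reported for your target):

dz> run app.package.info -a org.owasp.goatdroid.herdfinancial
dz> run app.package.attacksurface org.owasp.goatdroid.herdfinancial
dz> run app.provider.info -a org.owasp.goatdroid.herdfinancial
dz> run scanner.provider.finduris -a org.owasp.goatdroid.herdfinancial
dz> run app.provider.query content://<content provider details>/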
There's more...
Drozer not only allows users to obtain the attack surface, steal data via content providers, or launch intent injection attacks; it goes way beyond that. It can be used to fuzz the application and to cause local injection attacks by providing a way to inject payloads. Drozer can also be used to run various in-built exploits, and can be utilized to attack Android applications via custom-developed exploits. Further, it can also run in Infrastructure mode, allowing remote connections and remote attacks.

See also
Launching intent injection in Android
https://www.mwrinfosecurity.com/system/assets/937/original/mwri_drozer-user-guide_2015-03-23.pdf

Auditing iOS application using static analysis
Static analysis in source code reviews is an easier technique, and employing static string searches makes it convenient to use. Static analysis is conducted on the raw or decompiled source code or on the compiled (object) code, but the analysis is conducted outside of runtime. Usually, static analysis figures out vulnerable or insecure code patterns.

Getting ready
For conducting static analysis of iOS applications, we need at least one iOS application and a static code scanner. Pick up any iOS application of your choice and use any static analyzer tool of your choice. We will use iOS-ScriptDroid, a static analysis script developed by Android security researcher Dinesh Shetty.

How to do it...
Keep the decompressed iOS application files ready and note the path of the folder containing the .m files. Create an iOS-ScriptDroid.bat file by using the following code:

ECHO Running ScriptDroid ...
@ECHO OFF
SET /P Filelocation=Please Enter Location:
:: SET Filelocation=Location of the folder containing all the .m files eg: C:\sourcecode\project iOS\xyz\
mkdir %Filelocation%OUTPUT

:: Code to check for Sensitive Information storage in Phone memory
grep -H -i -n -C2 -e "NSFile" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\phonememory.txt"
grep -H -i -n -e "writeToFile " "%Filelocation%*.m" >> "%Filelocation%OUTPUT\phonememory.txt"

:: Code to check for possible Buffer overflow
grep -H -i -n -e "strcat(|strcpy(|strncat(|strncpy(|sprintf(|vsprintf(|gets(" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\BufferOverflow.txt"

:: Code to check for usage of URL Schemes
grep -H -i -n -C2 "openUrl|handleOpenURL" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\URLSchemes.txt"

:: Code to check for possible scripting javascript injection
grep -H -i -n -e "webview" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\probableXss.txt"

:: Code to check for presence of possible weak algorithms
grep -H -i -n -e "MD5" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\tweakencryption.txt"
grep -H -i -n -e "base64" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\tweakencryption.txt"
grep -H -i -n -e "des" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\tweakencryption.txt"
grep -H -i -n -v "//" "%Filelocation%OUTPUT\tweakencryption.txt" >> "%Filelocation%OUTPUT\weakencryption.txt"
del %Filelocation%OUTPUT\tweakencryption.txt

:: Code to check for weak transportation medium
grep -H -i -n -e "http://" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\overhttp.txt"
grep -H -i -n -e "NSURL" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -e "URL" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -e "writeToUrl" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -e "NSURLConnection" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -C2 "CFStream" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -C2 "NSStreamin" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -e "setAllowsAnyHTTPSCertificate|kCFStreamSSLAllowsExpiredRoots|kCFStreamSSLAllowsExpiredCertificates" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\BypassSSLvalidations.txt"
grep -H -i -n -e "kCFStreamSSLAllowsAnyRoot|continueWithoutCredentialForAuthenticationChallenge" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\BypassSSLvalidations.txt"
:: to add check for "didFailWithError"

:: Code to check presence of possible SQL Content
grep -H -i -F -e "db" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "sqlite" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "database" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "insert" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "delete" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "select" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "table" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "cursor" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "sqlite3_prepare" "%Filelocation%OUTPUT\sqlcontent.txt" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "sqlite3_compile" "%Filelocation%OUTPUT\sqlcontent.txt" >> "%Filelocation%OUTPUT\sqlcontent.txt"

:: Code to check for presence of keychain usage source code
grep -H -i -n -e "kSecASttr|SFHFKkey" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\LocationInfo.txt"

:: Code to check for Logging mechanism
grep -H -i -n -F "NSLog" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\Logging.txt"
grep -H -i -n -F "XLog" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\Logging.txt"
grep -H -i -n -F "ZNLog" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\Logging.txt"

:: Code to check for presence of password in source code
grep -H -i -n -e "password|pwd" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\password.txt"

:: Code to check for Debugging status
grep -H -i -n -e "#ifdef DEBUG" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\DebuggingAllowed.txt"

:: Code to check for presence of Device Identifiers (needs more work)
grep -H -i -n -e "uid|user-id|imei|deviceId|deviceSerialNumber|devicePrint|X-DSN|phone|mdn|did|IMSI|uuid" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\Temp_Identifiers.txt"
grep -H -i -n -v "//" "%Filelocation%OUTPUT\Temp_Identifiers.txt" >> "%Filelocation%OUTPUT\Device_Identifier.txt"
del %Filelocation%OUTPUT\Temp_Identifiers.txt

:: Code to check for presence of Location Info
grep -H -i -n -e "CLLocationManager|startUpdatingLocation|locationManager|didUpdateToLocation|CLLocationDegrees|CLLocation|CLLocationDistance|startMonitoringSignificantLocationChanges" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\LocationInfo.txt"

:: Code to check for presence of Comments
grep -H -i -n -e "//" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\Temp_comment.txt"
type -H -i "%Filelocation%*.m" | gawk "/\/\*/,/\*\//" >> "%Filelocation%OUTPUT\MultilineComments.txt"
grep -H -i -n -v "TODO" "%Filelocation%OUTPUT\Temp_comment.txt" >> "%Filelocation%OUTPUT\SinglelineComments.txt"
del %Filelocation%OUTPUT\Temp_comment.txt

How it works...
Go to the command prompt and navigate to the path where iOS-ScriptDroid is placed. Run the batch file; it prompts you to input the path of the application on which you wish to perform static analysis. In our case, we arbitrarily chose an application and inputted the path of its implementation (.m) files. The script generates a folder by the name OUTPUT in the path where the .m files of the application are present. The OUTPUT folder contains multiple text files, each one corresponding to a particular vulnerability. The individual text files pinpoint the location of vulnerable code pertaining to the vulnerability under discussion. iOS-ScriptDroid gives first-hand information about various iOS application vulnerabilities present in the current application. For instance, here are a few which are specific to the iOS platform:

BufferOverflow.txt contains usages of harmful functions with missing buffer limits, such as strcat, strcpy, and so on, found in the application
URL schemes, if implemented in an insecure manner, may result in access-related vulnerabilities; usages of URL schemes are listed in URLSchemes.txt

These are useful vulnerability details to know about iOS applications via static analysis.

There's more...
The current recipe used just iOS-ScriptDroid, but there are many other options available.
You can either choose to write your own script or use one of the free or commercial tools available. A few commercial tools have pioneered the static analysis approach over the years via their dedicated focus.

See also
Auditing Android apps using static analysis

Auditing iOS application using a dynamic analyzer
Dynamic analysis is the runtime analysis of an application. The application is run or simulated to discover flaws during runtime. Dynamic analysis can be tricky, especially in the case of mobile platforms. Dynamic analysis is helpful in tampering with data in transmission during runtime, for example, tampering with the amount in a transaction request being sent to a payment gateway. In applications that use custom encryption to prevent attackers reading the data, dynamic analysis is useful in revealing the encrypted data, which can be reverse-engineered. Note that since iOS applications cannot be decompiled to the full extent, dynamic analysis becomes even more important in finding sensitive data which could have been hardcoded.

Getting ready
For conducting dynamic analysis of iOS applications, we need at least one iOS application and a dynamic code analyzer tool. Pick up any iOS application of your choice and use any dynamic analyzer tool of your choice. In this recipe, we use the open source tool Snoop-it. We will use an iOS app that locks files, which can only be unlocked and viewed using a PIN, pattern, or a secret question and answer. Let us see if we can analyze this app and find a security flaw in it using Snoop-it. Please note that Snoop-it only works on jailbroken devices. To install Snoop-it on your iDevice, visit https://code.google.com/p/snoop-it/wiki/GettingStarted?tm=6. We have downloaded Locker Lite from the App Store onto our device, for analysis.

How to do it...
Perform the following steps to conduct dynamic analysis of iOS applications:

Open the Snoop-it app by tapping on its icon. Navigate to Settings. Here you will see the URL through which the interface can be accessed from your machine:

Please note the URL, for we will be using it soon. We have disabled authentication for our ease. Now, on the iDevice, tap on Applications | Select App Store Apps and select the Locker app:

Press the home button, and open the Locker app. Note that on entering the wrong PIN, we do not get further access:

Making sure the workstation and iDevice are on the same network, open the previously noted URL in any browser. This is how the interface will look:

Click on the Objective-C Classes link under Analysis in the left-hand panel:

Now, click on SM_LoginManagerController. Class information gets loaded in the panel to the right of it. Navigate down until you see -(void) unlockWasSuccessful and click on the radio button preceding it:

This method has now been selected. Next, click on the Setup and invoke button on the top-right of the panel. In the window that appears, click on the Invoke Method button at the bottom:

As soon as we click on the button, we notice that the authentication has been bypassed, and we can view our locked file successfully.

How it works...
Snoop-it loads all the classes that are in the app, and indicates the ones that are currently operational with a green color. Since we want to bypass the current login screen and load directly into the main page, we look for UIViewController. Inside UIViewController, we see SM_LoginManagerController, which could contain methods relevant to authentication.
On observing the class, we see various methods such as numberLoginSucceed, patternLoginSucceed, and many others. The app calls the unlockWasSuccessful method when a PIN code is entered successfully. So, when we invoke this method from our machine and the function is called directly, the app loads the main page successfully.

There's more...
The current recipe used just one dynamic analyzer, but other options and tools can also be employed. There are many challenges in doing dynamic analysis of iOS applications. You may like to use multiple tools, and not rely on just one, to overcome them.

See also
https://code.google.com/p/snoop-it/
Auditing Android apps using a dynamic analyzer

Examining iOS App Data storage and Keychain security vulnerabilities
Keychain in iOS is an encrypted SQLite database that uses a 128-bit AES algorithm to hold identities and passwords. On any iOS device, the Keychain SQLite database is used to store user credentials such as usernames, passwords, encryption keys, certificates, and so on. Developers use this service API to instruct the operating system to store sensitive data securely, rather than using a less secure alternative storage mechanism such as a property list file or a configuration file. In this recipe, we will be analyzing a Keychain dump to discover stored credentials.

Getting ready
Please follow the given steps to prepare for Keychain dump analysis:

Jailbreak the iPhone or iPad.
Ensure the SSH server is running on the device (default after jailbreak).
Download the Keychain_dumper binary from https://github.com/ptoomey3/Keychain-Dumper
Connect the iPhone and the computer to the same Wi-Fi network.
On the computer, SSH into the iPhone using the iPhone's IP address, the username root, and the password alpine.

How to do it...
Follow these steps to examine security vulnerabilities in iOS:

Copy keychain_dumper onto the iPhone or iPad by issuing the following command:

scp keychain_dumper root@<device ip>:/private/var/tmp

Alternatively, WinSCP on Windows can be used to do the same. Once the binary has been copied, ensure keychain-2.db has read access:

chmod +r /private/var/Keychains/keychain-2.db

This is shown in the following screenshot. Give execute rights to the binary:

chmod 777 /private/var/tmp/keychain_dumper

Now, we simply run keychain_dumper:

/private/var/tmp/keychain_dumper

This command will dump all keychain information, which will contain all the generic and Internet passwords stored in the keychain:
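End to end, the session looks roughly like this (the device IP is illustrative; alpine is the default root password on jailbroken devices):

scp keychain_dumper root@192.168.1.5:/private/var/tmp
ssh root@192.168.1.5
chmod +r /private/var/Keychains/keychain-2.db
chmod 777 /private/var/tmp/keychain_dumper
/private/var/tmp/keychain_dumper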
How it works...
The keychain on an iOS device is used to securely store sensitive information such as credentials (usernames, passwords, authentication tokens for different applications, and so on), along with connectivity (Wi-Fi/VPN) credentials. It is located on iOS devices as an encrypted SQLite database file at /private/var/Keychains/keychain-2.db. Insecurity arises when application developers use this feature of the operating system to store credentials, rather than storing them themselves in NSUserDefaults, .plist files, and so on. To provide users the ease of not having to log in every time, and hence saving the credentials on the device itself, the keychain information for every app is stored outside of its sandbox.

There's more...
This analysis can also be performed for specific apps dynamically, using tools such as Snoop-it. Follow the steps to hook Snoop-it to the target app, click on Keychain Values, and analyze the attributes to see their values revealed in the Keychain. More will be discussed in further recipes.

Finding vulnerabilities in WAP-based mobile apps
WAP-based mobile applications are mobile applications or websites that run in mobile browsers. Most organizations create a lightweight version of their complex websites to run easily and appropriately in mobile browsers. For example, a hypothetical company called ABCXYZ may have their main website at www.abcxyz.com, while their mobile website takes the form m.abcxyz.com. Note that the mobile website (or WAP app) is separate from its installable application form, such as an .apk on Android. Since mobile websites run in browsers, it is very logical to say that most of the vulnerabilities applicable to web applications are applicable to WAP apps as well. However, there are caveats to this: exploitability and risk ratings may not be the same, and not all attacks may be directly applied or conducted.

Getting ready
For this recipe, make sure to be ready with the following set of tools (in the case of Android):

ADB
WinSCP
Putty
Rooted Android mobile
SSH proxy application installed on the Android phone

Let us see the common WAP application vulnerabilities. While discussing these, we will limit ourselves to mobile browsers only:

Browser cache: Android browsers store cache in two different parts: content cache and component cache. Content cache may contain basic frontend components such as HTML, CSS, or JavaScript. Component cache contains sensitive data, such as the details to be populated once the content cache is loaded. You have to locate the browser cache folder and look for sensitive data in it.
Browser memory: Browser memory refers to the location used by browsers to store data. Memory is usually long-term storage, while cache is short-term. Browse through the browser memory space for various files such as .db, .xml, .txt, and so on, and check all these files for the presence of sensitive data.
Browser history: Browser history contains the list of URLs browsed by the user. These URLs, in GET request format, contain parameters. Again, our goal is to locate a URL with sensitive data for our WAP application.
Cookies: Cookies are a mechanism for websites to keep track of user sessions. Cookies are stored locally on devices. The security concerns around cookie usage are: sometimes a cookie contains sensitive information; weak cookie attributes may weaken the application's security; and cookie stealing may lead to session hijacking.

How to do it...
Browser cache: Android browser cache can be found at this location: /data/data/com.android.browser/cache/webviewcache/. You can either use ADB to pull the data from webviewcache, or use WinSCP/Putty and connect to the SSH application on the rooted Android phone. Either way, you will land in the webviewcache folder and find arbitrarily named files. Refer to the highlighted section in the following screenshot:

Rename the extension of the arbitrarily named files to .jpg and you will be able to view the cache as screenshots. Search through all the files for sensitive data pertaining to the WAP app you are auditing.

Browser memory: Like an Android application, the browser also has a memory space under the /data/data folder, by the name com.android.browser (the default browser). Here is how a typical browser memory space looks:

Make sure you traverse through all the folders to pick up any useful sensitive data in the context of the WAP application you are looking at.
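A quick way to copy both locations to your workstation for offline inspection is adb pull (the destination folders are illustrative):

adb pull /data/data/com.android.browser/cache/webviewcache/ ./webviewcache/
adb pull /data/data/com.android.browser/databases/ ./browser_databases/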
Browser history: Go to the browser, locate the options menu, navigate to History, and examine the URLs present there.

Cookies: The files containing cookie values can be found at /data/data/com.android.browser/databases/webview.db. These DB files can be opened with the SQLite Browser tool and the cookies can be read.

There's more...

Apart from the primary vulnerabilities described here, which are mainly concerned with browser usage, all other web application vulnerabilities which are related to, or exploited from or within, a browser are applicable and need to be tested:

Cross-site scripting, the result of a browser executing unsanitized harmful scripts reflected by the server, is very much valid for WAP applications.
The autocomplete attribute not set to off may result in sensitive data being remembered by the browser for returning users. This, again, is a source of data leakage.
Browser thumbnails and the image buffer are other places to look for data.

Above all, the web application vulnerabilities that do not relate to browser usage also apply. These include OWASP Top 10 vulnerabilities such as SQL injection attacks, broken authentication and session management, and so on. Business logic validation is another important check to attempt to bypass. All of these can be tested by setting a proxy for the browser and playing around with the mobile traffic.

The discussion in this recipe has centered on Android, but all of it is fully applicable to the iOS platform when testing WAP applications. The approach, testing steps, and storage locations will vary, but all the vulnerabilities still apply. You may want to try out the iExplorer and plist editor tools when working with an iPhone or iPad.

See also

http://resources.infosecinstitute.com/browser-based-vulnerabilities-in-web-applications/

Finding client-side injection

Client-side injection is a new dimension to the mobile threat landscape. Client-side injection (also known as local injection) results from injecting malicious payloads into local storage to reveal data outside the usual workflow of the mobile application. If ' or '1'='1 is injected into the search parameter of a mobile application, where the search functionality is built to query a local SQLite DB file, and this reveals all the data stored in the corresponding table of the SQLite DB, then client-side SQL injection has succeeded. Notice that the payload did not go to the server-side database (which could be Oracle or MSSQL); it went to the local database (SQLite) on the mobile device. Since the injection point and the injectable target are both local (that is, on the mobile device), the attack is called a client-side injection.

Getting ready

To get ready to find client-side injection, have a few mobile applications ready to be audited, along with the tools used in many other recipes throughout this book. Note that client-side injection is not easy to find, on account of the complexities involved; many a time you will have to fine-tune your approach based on the first signs of success.

How to do it...

The prerequisites for a client-side injection vulnerability in a mobile app are the presence of local storage and an application feature which queries that local storage. For ease of discussion, let us start with client-side SQL injection, which is easy to grasp for anyone familiar with SQL injection in web apps.

Consider a mobile banking application which stores branch details in a local SQLite database and provides a search feature for users wishing to find a branch.
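To make the attack surface concrete, here is a minimal sketch of how such a local branch search is often implemented; the table name, column names, and query construction are illustrative assumptions, not code from any real banking app. The vulnerable version builds the SQL text by string concatenation, which is exactly what the payload discussed next exploits; the safe version shows the parameterized alternative for contrast:

import sqlite3

def search_branches_vulnerable(db_path, city):
    """Builds the query by string concatenation: injectable."""
    conn = sqlite3.connect(db_path)
    # User input is pasted straight into the SQL text
    query = "SELECT name, address FROM branches WHERE city = '" + city + "'"
    rows = conn.execute(query).fetchall()
    conn.close()
    return rows

def search_branches_safe(db_path, city):
    """Parameterized query: the payload is treated as data, not SQL."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT name, address FROM branches WHERE city = ?", (city,)
    ).fetchall()
    conn.close()
    return rows

# search_branches_vulnerable(db, "Mumbai' OR '1'='1")
# returns every branch in the table, not just Mumbai's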
Now, if a person types in the city as Mumbai, the city parameter is populated with the value Mumbai, which is dynamically added to the SQLite query. The query is built and retrieves the branch list for Mumbai. (Purely local features like this are usually provided for a faster user experience and to conserve network bandwidth.) If the user is able to inject a harmful payload into the city parameter, such as a wildcard character or a SQLite drop-table payload, and the payload executes, revealing all the details (in the wildcard case) or dropping the table from the DB (in the drop-table case), then you have successfully exploited client-side SQL injection.

Another type of client-side injection, presented in the OWASP Mobile Top 10 release, is local cross-site scripting (XSS). Refer to slide number 22 of the original OWASP presentation at http://www.slideshare.net/JackMannino/owasp-top-10-mobile-risks. They refer to it as Garden Variety XSS and present a code snippet wherein SMS text is accepted locally and printed to the UI. If a script is input in the SMS text, it results in local XSS (JavaScript injection).

There's more...

In a similar fashion, HTML injection is also possible. If an HTML file in the application's local storage can be compromised to contain malicious code, and the application has a feature which loads or executes this HTML file, HTML injection is possible locally. A variant of the same may result in Local File Inclusion (LFI) attacks. If data is stored in the form of XML files on the mobile, local XML injection can also be attempted. There could be more variants of these attacks.

Finding client-side injection is quite difficult and time consuming. It may require both static and dynamic analysis approaches, and most scanners do not support discovery of client-side injection.

There is also a strong counter-argument regarding this vulnerability's impact, which is judged to be low in most cases: if the entire local storage can be obtained easily on Android, why conduct client-side injection at all? I agree with this argument in most cases; if the entire SQLite or XML file can be stolen from the phone, why spend time finding a variable that accepts a wildcard to reveal the data held in that file? However, you should still look out for this vulnerability. HTML injection and LFI-style attacks allow the insertion of malware-corrupted files, and hence an impactful attack, and there are platforms, such as iOS, where stealing the local storage is sometimes very difficult. In such cases, client-side injection may come in handy.

See also

https://www.owasp.org/index.php/Mobile_Top_10_2014-M7
http://www.slideshare.net/JackMannino/owasp-top-10-mobile-risks

Insecure encryption in mobile apps

Encryption is one of the most misused terms in information security. Some people confuse it with hashing, while others implement encoding and call it encryption. Symmetric key and asymmetric key are the two types of encryption schemes. Mobile applications implement encryption to protect sensitive data in storage and in transit. While doing audits, your goal should be to uncover weak encryption implementations, or so-called encoding and other weaker forms, implemented in places where proper encryption should have been used. Try to circumvent the encryption implemented in the mobile application under audit.
Getting ready

Be ready with a few mobile applications and tools such as ADB, other file and memory readers, decompilers, decoding tools, and so on.

How to do it...

There are multiple types of faulty encryption implementation in mobile applications, and different ways to discover each of them:

Encoding (instead of encryption): Many a time, mobile app developers simply implement Base64 or URL encoding in applications (an example of security by obscurity). Such encoding can be discovered simply through static analysis; you can use the script discussed in the first recipe of this article to find such encoding algorithms. Dynamic analysis will help you obtain the locally stored data in encoded format. Decoders for these well-known encoding algorithms are freely available and will recover the original value, so such an implementation is no substitute for encryption.
Serialization (instead of encryption): Another faulty variation is serialization, the process of converting data objects to a byte stream. The reverse process, deserialization, is also very simple, and the original data can be obtained easily. Static analysis may help reveal implementations using serialization.
Obfuscation (instead of encryption): Obfuscation suffers from similar problems; obfuscated values can be deobfuscated.
Hashing (instead of encryption): Hashing is a one-way process using a standard, complex algorithm. These one-way hashes suffer from a major problem: they can be replayed (without needing to recover the original data). Also, rainbow tables can be used to crack the hashes. As with the other techniques described previously, hashing usage in mobile applications can be discovered via static analysis, and dynamic analysis may additionally reveal the one-way hashes stored locally.

How it works...

To understand insecure encryption in mobile applications, consider a live case we observed.

An example of weak custom implementation

While testing a live mobile banking application, my colleagues and I came across a scenario where the userid and mpin combination was sent using custom encoding logic. The encoding was based on a character-by-character replacement with another character, according to an in-built mapping. For example:

2 is replaced by 4
0 is replaced by 3
3 is replaced by 2
7 is replaced by =
a is replaced by R
A is replaced by N

As you can see, there is no logic to the replacement; until you uncover or decipher the whole in-built mapping, you won't succeed. A simple technique is to supply all possible characters one by one and watch the responses. Input a userid and PIN of 222222 and 2222, and notice that the converted userid and PIN are 444444 and 4444, respectively, as per the mapping above. Keep changing the inputs, and you will recreate the full mapping used by the application. Now steal a user's encoded data and apply the mapping in reverse, uncovering the original data. This whole approach is nicely described in the article mentioned under the See also section of this recipe.

This is a custom example of a faulty implementation of "encryption". Such flaws are often difficult to find through static analysis, especially in hard-to-reverse apps such as iOS applications, and the chances of automated dynamic analysis discovering them are also slim.
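Once the mapping has been recreated, decoding captured traffic can be scripted. The following minimal sketch uses the six substitutions shown above as a stand-in for the full table (in the real case the table covered every allowed character, recovered the same way); the mapping literal is therefore an illustrative assumption:

# Partial substitution table recovered by feeding known inputs;
# the full table (every allowed character) is recovered the same way
FORWARD = {"2": "4", "0": "3", "3": "2", "7": "=", "a": "R", "A": "N"}

# Invert the mapping to decode captured (encoded) values
REVERSE = {v: k for k, v in FORWARD.items()}

def decode(captured: str) -> str:
    """Apply the inverted mapping character by character."""
    return "".join(REVERSE.get(ch, ch) for ch in captured)

print(decode("4444"))    # -> 2222, the original PIN
print(decode("444444"))  # -> 222222, the original userid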
Manual testing and analysis, alongside dynamic or automated analysis, stands a better chance of uncovering such custom implementations.

There's more...

Finally, I will share another application we came across. This one used proper encryption: the algorithm was a well-known, secure one, and the key was strong. Still, the whole encryption process could be reversed, because the application made two mistakes, which we combined to break the encryption:

The application shipped the standard encryption algorithm in the APK bundle, without even obfuscation to protect the names. We used the simple APK to DEX to JAR conversion process to uncover the algorithm details.
The application stored the strong encryption key in a local XML file under the /data/data folder of the Android device. We used ADB to read this XML file and so obtained the encryption key.

According to Kerckhoffs' principle, the security of a cryptosystem should depend solely on the secrecy of the key and the private randomizer. This is how all encryption algorithms are implemented: the key is the secret, not the algorithm. In our scenario, we could obtain the key and knew the name of the encryption algorithm, which was enough to break the strong encryption implementation.

See also

http://www.paladion.net/index.php/mobile-phone-data-encryption-why-is-it-necessary/

Discovering data leakage sources

Data leakage risk worries organizations across the globe, and many have implemented solutions to prevent it. In the case of mobile applications, we first have to think about the possible sources or channels of data leakage; once these are clear, devise or adopt a technique to examine each of them.

Getting ready

As in the other recipes, you need a bunch of applications (to be analyzed), an Android device or emulator, ADB, a DEX to JAR converter, Java decompilers, and WinRAR or WinZip.

How to do it...

To identify the data leakage sources, list all the possible sources you can think of for the mobile application under audit. In general, mobile applications have the following seven channels of potential data leakage:

Files stored locally
Client-side source code
Mobile device logs
Web caches
Console messages
Keystrokes
Sensitive data sent over HTTP

How it works...

The next step is to uncover data leakage at each of these potential channels. Let us examine the seven channels identified previously:

Files stored locally: By this time, readers are very familiar with this. Data is stored locally in files such as shared preferences, XML files, SQLite DBs, and other files. In Android, these reside in the application folder under the /data/data directory and can be read using tools such as ADB. In iOS, tools such as iExplorer or SSH can be used to read the application folder.

Client-side source code: The mobile application's source code is present locally on the mobile device itself. Source code in applications may hardcode data, and a common mistake is hardcoding sensitive data (either knowingly or unknowingly). From the field, we came across an application which had hardcoded the connection key to the connected PoS terminal, and hardcoded formulas, which should ideally have lived in server-side code, to calculate certain figures. Database instance names and credentials are another possibility where the mobile app connects directly to a server datastore. In Android, source code is quite easy to decompile, via a two-step process: APK to DEX, then DEX to JAR conversion. In iOS, the source code of header files can be decompiled up to a certain level using tools such as class-dump-z or otool. Once the raw source code is available, a static string search can be employed to discover sensitive data in it, as sketched below.
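A minimal sketch of such a static string search follows; it walks a directory of decompiled sources and flags lines matching credential-like patterns. The keyword list, file extensions, and the decompiled_src folder name are illustrative assumptions to tune for the app under audit:

import os
import re

# Patterns that commonly betray hardcoded secrets (illustrative, not exhaustive)
SUSPICIOUS = re.compile(
    r"(password|passwd|secret|api[_-]?key|token|jdbc:|PRIVATE KEY)",
    re.IGNORECASE,
)
EXTENSIONS = (".java", ".xml", ".properties", ".json", ".h", ".m")

def scan(source_root):
    """Walk decompiled sources and print file:line hits for triage."""
    for dirpath, _dirs, files in os.walk(source_root):
        for name in files:
            if not name.endswith(EXTENSIONS):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as fh:
                for lineno, line in enumerate(fh, 1):
                    if SUSPICIOUS.search(line):
                        print(f"{path}:{lineno}: {line.strip()}")

if __name__ == "__main__":
    scan("decompiled_src")  # assumption: output folder of your JAR decompiler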
Mobile device logs: All devices create local logs to store crash and other information, which can be used to debug or analyze a security violation. Careless coding may put sensitive data in the local logs, so data can leak from here as well. The Android ADB command adb logcat can be used to read the logs on Android devices; if you run this command against the Vulnerable Bank application, you will notice the user's credentials appearing in the logs.

Web caches: Web caches may contain sensitive data related to the web components used in mobile apps. We discussed how to examine these in the WAP recipe earlier in this article.

Console messages: Console messages are used by developers to print messages while application development and debugging are in progress. Console messages, if not turned off at launch (go-live), may be another source of data leakage. They can be checked by running the application in debug mode.

Keystrokes: Certain mobile platforms are known to cache keystrokes. Malware or a keystroke logger may take advantage of this and steal a user's keystrokes, making this another data leakage source. Malware analysis needs to be performed to uncover embedded or pre-shipped malware and keystroke loggers; dynamic analysis also helps.

Sensitive data sent over HTTP: Applications either send sensitive data over HTTP or use a weak implementation of SSL; in either case, sensitive data leakage is possible. Usage of HTTP can be found via a static-analysis search for HTTP strings, while dynamic analysis, capturing the packets at runtime, also reveals whether traffic travels over HTTP or HTTPS. There are various weak SSL implementations and downgrade attacks, all of which leave data vulnerable to sniffing and hence to leakage.
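Returning to the mobile device logs channel, log review is also easy to script. The sketch below shells out to adb logcat in dump mode and flags credential-like lines; the keyword list is an illustrative assumption to extend with the parameter names of the application under audit:

import re
import subprocess

# Illustrative keywords; extend with app-specific parameter names
PATTERN = re.compile(r"(password|pin|token|secret|credential)", re.IGNORECASE)

def scan_logcat():
    """Dump the current log buffer and print suspicious lines."""
    # -d dumps the buffer and exits instead of streaming forever
    out = subprocess.run(
        ["adb", "logcat", "-d"], capture_output=True, text=True, check=True
    ).stdout
    for line in out.splitlines():
        if PATTERN.search(line):
            print(line)

if __name__ == "__main__":
    scan_logcat()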
There's more...

Data leakage sources can be vast, and listing all of them is not possible. Sometimes there are application- or platform-specific data leakage sources which may call for a different kind of analysis:

Intent injection can be used to fire intents to access privileged content. Such intents may steal protected data, such as the personal information of all the patients in a hospital (under HIPAA compliance).
iOS screenshot backgrounding issues, where iOS stores screenshots with populated user input data on the iPhone or iPad when the application enters the background. Imagine such screenshots, containing a user's credit card details, CVV, expiry date, and so on, found in an application under PCI DSS compliance.
Malware gives a totally different angle to data leakage.

Note that data leakage is a very big risk organizations are tackling today. The loss is not just financial; it may be intangible, such as reputation damage, or compliance or regulatory violations. Hence, it is very important to identify the maximum possible data leakage sources in the application and rectify the potential leakages.

See also

https://www.owasp.org/index.php/Mobile_Top_10_2014-M4
Launching intent injection in Android

Other application-based attacks in mobile devices

When we talk about application-based attacks, the OWASP Top 10 risks are the first thing that comes to mind. OWASP (www.owasp.org) has a dedicated project for mobile security, which releases a Mobile Top 10 list. OWASP gathers data from industry experts and ranks the top 10 risks every three years. It is a very good knowledge base for mobile application security. Here is the latest Mobile Top 10, released in 2014:

M1: Weak Server Side Controls
M2: Insecure Data Storage
M3: Insufficient Transport Layer Protection
M4: Unintended Data Leakage
M5: Poor Authorization and Authentication
M6: Broken Cryptography
M7: Client Side Injection
M8: Security Decisions via Untrusted Inputs
M9: Improper Session Handling
M10: Lack of Binary Protections

Getting ready

Have a few applications ready to be analyzed, and use the same set of tools we have been discussing so far.

How to do it...

In this recipe, we restrict ourselves to the remaining application attacks, those which we have not covered so far in this book:

M1: Weak Server Side Controls
M5: Poor Authorization and Authentication
M8: Security Decisions via Untrusted Inputs
M9: Improper Session Handling

How it works...

Here, let us discuss the client-side (mobile-side) issues for M5, M8, and M9.

M5: Poor Authorization and Authentication

A few common scenarios which can be attacked are:

Authentication implemented at the device level (for example, a PIN stored locally)
Authentication bound to poor parameters (such as UDID or IMEI numbers)
An authorization parameter, responsible for access to protected application menus, stored locally

These can be attacked by reading data using ADB, by decompiling the applications and conducting static analysis on them, or by performing dynamic analysis on the outgoing traffic.

M8: Security Decisions via Untrusted Inputs

This one concerns IPC. IPC entry points through which applications communicate with one another, such as intents in Android or URL schemes in iOS, are vulnerable if the originating source is not validated. Malicious intents can be fired to bypass authorization or steal data; we discuss this in further detail in the next recipe. URL schemes are a way for applications to specify the launch of certain components; for example, the mailto scheme in iOS is used to create a new e-mail. If an application fails to specify the acceptable sources, any malicious application can send a mailto scheme to the victim application and create new e-mails.

M9: Improper Session Handling

From a purely mobile device perspective, session tokens stored in .db files, or OAuth tokens and access-granting strings stored in weakly protected files, are vulnerable. These can be obtained by reading the local data folder using ADB.

See also

https://www.owasp.org/index.php/Projects/OWASP_Mobile_Security_Project_-_Top_Ten_Mobile_Risks

Launching intent injection in Android

Android uses intents to request action from another application component. A common communication is passing an intent to start a service. We will exploit this fact via an intent injection attack, which works by injecting an intent into an application component to perform a task that is usually not allowed by the application workflow.
For example, suppose an Android application has a login activity which, post successful authentication, gives access to protected data via another activity. If an attacker can invoke that internal activity directly by passing an intent, it is an intent injection attack.

Getting ready

Install Drozer by downloading it from https://www.mwrinfosecurity.com/products/drozer/ and following the installation instructions in the User Guide. Install the Drozer Console Agent and start a session as mentioned in the User Guide. If your installation is correct, you should get a Drozer command prompt (dz>).

How to do it...

You should also have a few vulnerable applications to analyze. Here we chose the OWASP GoatDroid application:

1. Start the OWASP GoatDroid Fourgoats application in the emulator.
2. Browse the application to develop an understanding of it. Note that you are required to authenticate with a username and password, and post-authentication you can access the profile and other pages.
3. Now use Drozer to analyze the activities of the Fourgoats application. The following Drozer command is helpful:

run app.activity.info -a <package name>

4. Drozer detects four activities with null permissions. Of these four, ViewCheckin and ViewProfile are post-login activities.
5. Use Drozer to access these two activities directly, via the following command:

run app.activity.start --component <package name> <activity name>

We chose to access the ViewProfile activity. Drozer performs some actions, and the protected user profile opens up in the emulator.

How it works...

Drozer passed an intent in the background to invoke the post-login activity ViewProfile. This resulted in the ViewProfile activity performing its action and displaying the profile screen. In this way, an intent injection attack can be performed using the Drozer framework.

There's more...

Android also uses intents for starting services and delivering broadcasts, so intent injection attacks can likewise be performed against services and broadcast receivers, and the Drozer framework can be used to launch attacks on these app components too. Attackers may also write their own attack scripts or use different frameworks to launch this attack.

See also

Using Drozer to find vulnerabilities in Android applications
https://www.mwrinfosecurity.com/system/assets/937/original/mwri_drozer-user-guide_2015-03-23.pdf
https://www.eecs.berkeley.edu/~daw/papers/intents-mobisys11.pdf
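If Drozer is unavailable, the same invocation can often be attempted with plain adb, since am start also delivers a launch intent to an exported activity. Here is a minimal sketch; the package and activity names below are assumptions for the GoatDroid Fourgoats app, so confirm the exact component names first (for example, with the run app.activity.info command above):

import subprocess

# Assumed component names: confirm with `run app.activity.info` in Drozer
PACKAGE = "org.owasp.goatdroid.fourgoats"
ACTIVITY = PACKAGE + ".activities.ViewProfile"

def fire_intent(package, activity):
    """Deliver a launch intent straight to an exported activity via adb."""
    subprocess.run(
        ["adb", "shell", "am", "start", "-n", f"{package}/{activity}"],
        check=True,
    )

if __name__ == "__main__":
    # If the activity is exported with null permissions, the protected
    # profile screen opens without any login, the same result as Drozer
    fire_intent(PACKAGE, ACTIVITY)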