How-To Tutorials


Decoding Why "Good PHP Developer" Isn't an Oxymoron

Packt
14 Sep 2016
20 min read
In this article by Junade Ali, author of the book Mastering PHP Design Patterns, we will be revisiting object-oriented programming. Back in 2010, MailChimp published a post on their blog entitled Ewww, You Use PHP? In it they described the horror they met when explaining their choice of PHP to developers who consider the phrase "good PHP programmer" an oxymoron. In their rebuttal they argued that their PHP wasn't your grandfather's PHP and that they use a sophisticated framework. I tend to judge the quality of PHP not only on how it functions, but on how secure it is and how it is architected. This book focuses on ideas of how you should architect your code. Good software design allows developers to extend code beyond its original purpose in a bug-free and elegant fashion.

As Martin Fowler put it: "Any fool can write code that a computer can understand. Good programmers write code that humans can understand."

This isn't just limited to code style, but to how developers architect and structure their code. I've encountered many developers with their noses constantly stuck in documentation, copying and pasting bits of code until it works, hacking snippets together until it works. Moreover, I far too often see the software development process rapidly deteriorate as developers ever more tightly couple their classes with functions of ever-increasing length.

Software engineers mustn't just code software; they must know how to design it. Indeed, a good software engineer, when interviewing other software engineers, will often ask questions about the design of the code itself. It is trivial to get a piece of code to execute, and it is of little value to quiz a developer on whether strtolower or str2lower is the correct name of a function (for the record, it's strtolower). Knowing the difference between a class and an object doesn't make you a competent developer; a better interview question would be, for example, how one could apply subtype polymorphism to a real software development challenge. Failure to assess software design skills dumbs down an interview and leaves no way to differentiate between those who are good at design and those who aren't. These advanced topics will be discussed throughout this book; by learning these tactics you will better understand the right questions to ask when discussing software architecture.

Moxie Marlinspike once tweeted: "As a software developer, I envy writers, musicians, and filmmakers. Unlike software, when they create something it is really done, forever."

When developing software we mustn't forget that we are authors, not just of instructions for a machine, but of something we later expect others to extend. Therefore, our code mustn't just be targeted at machines, but at humans too. Code isn't just poetry for a machine; it should be poetry for humans as well. This is, of course, easier said than done. In PHP this can be especially difficult given the freedom PHP offers developers in how they architect and structure their code. By its very nature, freedom can be both used and abused, and so it is with the freedom offered by PHP.
Therefore, it is increasingly important that developers understand proper software design practices to ensure their code remains maintainable in the long term. Indeed, another key skill lies in refactoring code: improving the design of existing code to make it easier to extend in the longer term.

Technical debt, the eventual consequence of poor system design, is something I've found comes with the career of a PHP developer. This has been true for me whether dealing with systems that provide advanced functionality or with simple websites. It usually arises because a developer elects to implement a bad design for a variety of reasons, whether when adding functionality to an existing codebase or when taking poor design decisions during the initial construction of the software. Refactoring can help us address these issues.

SensioLabs (the creators of the Symfony framework) have a tool called Insight that allows developers to calculate the technical debt in their own code. In 2011 they did an evaluation of technical debt in various projects using this tool; rather unsurprisingly, they found that WordPress 4.1 topped the chart of all the platforms they evaluated, claiming it would take 20.1 years to resolve the technical debt the project contains. Those familiar with the WordPress core may not be surprised by this, but the issue is of course not unique to WordPress. In my career of working with PHP, from security-critical cryptography systems to mission-critical embedded systems, dealing with technical debt has come with the job. Dealing with technical debt is not something to be ashamed of for a PHP developer; indeed, some may consider it courageous. It is no easy task, though, especially in the face of an ever more demanding user base, client, or project manager constantly demanding more functionality without being familiar with the technical debt the project carries.

I recently emailed the PHP internals group to ask whether they should consider deprecating the error suppression operator @. When any PHP function call is prepended with an @ symbol, any error it raises is suppressed. This can be brutal, especially where that function raises a fatal error that stops the execution of the script, making debugging a tough task: if the error is suppressed, the script may fail without giving developers any indication of why. No one objected to the points that there are better ways of handling errors (try/catch, proper validation) than abusing the error suppression operator, and that deprecation should be an eventual aim for PHP. However, some PHP functions return needless warnings even though they already have a success/failure return value, which means that, due to technical debt in the PHP core itself, this operator cannot be deprecated until a lot of prerequisite work is done. In the meantime, it is down to developers to choose the best methodologies for handling errors and not to constantly resort to the @ symbol.

Fundamentally, technical debt slows down the development of a project and often leads to broken code being deployed as developers try to work on a fragile project.
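Returning to the error suppression operator discussed above: the following is a minimal, hypothetical sketch (not taken from the book) contrasting @ with explicit validation and an exception. The configuration file path and the overall scenario are invented purely for illustration.

<?php

$path = '/etc/myapp/config.json'; // illustrative path

// Abusing the error suppression operator: if the file is missing or
// unreadable, the warning from file_get_contents() is silently discarded
// and we are left debugging an unexplained false value further down.
$raw = @file_get_contents($path);

// Preferred: validate first, then treat failure as an explicit error.
if (!is_readable($path)) {
    throw new RuntimeException("Configuration file not readable: $path");
}

$raw = file_get_contents($path);
$config = json_decode($raw, true);

if ($config === null) {
    // json_decode() signals failure through its return value (and
    // json_last_error()), so we can raise an exception rather than
    // suppressing anything.
    throw new RuntimeException('Invalid JSON in configuration: ' . json_last_error_msg());
}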
When starting a new project, never be afraid to discuss architecture. Architecture meetings are vital to developer collaboration; as one scrum master I've worked with said, in the face of the criticism that "meetings are a great alternative to work": "meetings are work… how much work would you be doing without meetings?"

Coding style - the PSR standards

When it comes to coding style, I would like to introduce you to the PSR standards created by the PHP Framework Interop Group. The two standards that apply to coding style are PSR-1 (Basic Coding Standard) and PSR-2 (Coding Style Guide). In addition to these, there are PSR standards that cover other areas; for example, as of today, the PSR-4 standard is the most up-to-date autoloading standard published by the group. You can find out more about the standards at http://www.php-fig.org/.

Using coding style to enforce consistency throughout a codebase is something I strongly believe in; it makes a real difference to the readability of code throughout a project. It is especially important when you are starting a project (chances are you may be reading this book to find out how to do that right), as your coding style determines the style that the developers who follow you on the project will adopt. Using a global standard such as PSR-1 or PSR-2 means that developers can easily switch between projects without having to reconfigure their code style in their IDE. Good code style can also make formatting errors easier to spot. Needless to say, coding styles will develop as time progresses; to date, I elect to work with the PSR standards.

I am a strong believer in the phrase: "Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live." It isn't known who wrote this phrase originally, but it's widely thought to have been John Woods or possibly Martin Golding. I would strongly recommend familiarizing yourself with these standards before proceeding in this book.
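As a brief illustration of what these standards look like in practice, here is a small, hypothetical class written to follow PSR-1 and PSR-2 conventions: a namespaced class in StudlyCaps, camelCase method names, an upper-case constant, four-space indentation, braces on their own lines for the class and its methods, and a space after control keywords. The class itself is invented for this example and does not appear in the book.

<?php

namespace App;

class InvoiceCalculator
{
    const VAT_RATE = 0.2;

    private $netAmount;

    public function __construct(float $netAmount)
    {
        $this->netAmount = $netAmount;
    }

    public function totalWithVat(): float
    {
        if ($this->netAmount < 0) {
            throw new \InvalidArgumentException('Net amount cannot be negative');
        }

        return $this->netAmount * (1 + self::VAT_RATE);
    }
}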
Revising object-oriented programming

Object-oriented programming is more than just classes and objects; it's a whole programming paradigm based around objects (data structures) that contain data fields and methods. It is essential to understand this: using classes to organize a bunch of unrelated methods together is not object orientation. Assuming you're aware of classes (and how to instantiate them), allow me to remind you of a few different bits and pieces.

Polymorphism

Polymorphism is a fairly long word for a fairly simple concept. Essentially, polymorphism means the same interface is used with different underlying code. So multiple classes could have a draw function, each accepting the same arguments, but at an underlying level the code is implemented differently. In this article, I would also like to talk about subtype polymorphism in particular (also known as subtyping or inclusion polymorphism). Let's say we have animals as our supertype; our subtypes may well be cats, dogs, and sheep. In PHP, interfaces allow you to define a set of functionality that a class implementing them must contain; as of PHP 7 you can also use scalar type hints to define the return types we expect.

So, for example, suppose we defined the following interface:

interface Animal
{
    public function eat(string $food): bool;
    public function talk(bool $shout): string;
}

We could then implement this interface in our own class, as follows:

class Cat implements Animal
{
}

If we were to run this code without defining the methods, we would get an error message as follows:

Class Cat contains 2 abstract methods and must therefore be declared abstract or implement the remaining methods (Animal::eat, Animal::talk)

Essentially, we are required to implement the methods we defined in our interface, so now let's go ahead and create a class that implements them:

class Cat implements Animal
{
    public function eat(string $food): bool
    {
        if ($food === "tuna") {
            return true;
        } else {
            return false;
        }
    }

    public function talk(bool $shout): string
    {
        if ($shout === true) {
            return "MEOW!";
        } else {
            return "Meow.";
        }
    }
}

Now that we've implemented these methods, we can instantiate the class we are after and use the functions contained in it:

$felix = new Cat();
echo $felix->talk(false);

So where does polymorphism come into this? Suppose we had another class for a dog:

class Dog implements Animal
{
    public function eat(string $food): bool
    {
        if (($food === "dog food") || ($food === "meat")) {
            return true;
        } else {
            return false;
        }
    }

    public function talk(bool $shout): string
    {
        if ($shout === true) {
            return "WOOF!";
        } else {
            return "Woof woof.";
        }
    }
}

Now let's suppose we have multiple different types of animals in a pets array:

$pets = array(
    'felix' => new Cat(),
    'oscar' => new Dog(),
    'snowflake' => new Cat()
);

We can now go ahead and loop through all these pets individually in order to run the talk function. We don't care about the type of pet, because the talk method is implemented in every class we get, by virtue of each class implementing the Animal interface. So, if we wanted all our animals to run the talk method, we could just use the following code:

foreach ($pets as $pet) {
    echo $pet->talk(false);
}

No need for unnecessary switch/case blocks to wrap around our classes; we just use software design to make things easier for us in the long term.

Abstract classes work in a similar way, except that abstract classes can contain functionality where interfaces cannot. It is important to note that any class that defines one or more abstract methods must itself be declared abstract. You cannot have a normal class defining abstract methods, but you can have normal methods in abstract classes. Let's start off by refactoring our interface to be an abstract class:

abstract class Animal
{
    abstract public function eat(string $food): bool;
    abstract public function talk(bool $shout): string;

    public function walk(int $speed): bool
    {
        if ($speed > 0) {
            return true;
        } else {
            return false;
        }
    }
}

You might have noticed that I have also added a walk method as an ordinary, non-abstract method; this is a standard method that can be used or extended by any class that inherits from the abstract parent class, because it already has an implementation. Note that it is impossible to instantiate an abstract class (much as it's not possible to instantiate an interface). Instead we must extend it. So, in our Cat class, let's substitute:

class Cat implements Animal

with the following code:

class Cat extends Animal

That's all we need to refactor in order to get the class to extend the Animal abstract class.
We must implement the abstract functions in the class, as we did for the interface, but we can use the ordinary methods without needing to implement them:

$whiskers = new Cat();
$whiskers->walk(1);

As of PHP 5.4 it has also become possible to instantiate a class and access a property of it in one statement. PHP.net advertised it as: "Class member access on instantiation has been added, e.g. (new Foo)->bar()." You can also do it with individual properties, for example, (new Cat)->legs. In our example, we can use it as follows:

(new IcyAprilChapterOneCat())->walk(1);

Just to recap a few other points about how PHP implements OOP: the final keyword before a class declaration, or indeed a function declaration, means that you cannot override such classes or functions after they've been defined. So, if we were to try extending a class we have marked as final:

final class Animal
{
    public function walk()
    {
        return "walking...";
    }
}

class Cat extends Animal
{
}

This results in the following output:

Fatal error: Class Cat may not inherit from final class (Animal)

Similarly, if we were to do the same at the function level:

class Animal
{
    final public function walk()
    {
        return "walking...";
    }
}

class Cat extends Animal
{
    public function walk()
    {
        return "walking with tail wagging...";
    }
}

This results in the following output:

Fatal error: Cannot override final method Animal::walk()

Traits (multiple inheritance)

Traits were introduced into PHP as a mechanism for introducing horizontal reuse. PHP conventionally acts as a single-inheritance language, in that you can't inherit from more than one class in a script. Traditional multiple inheritance is a controversial technique that is often looked down upon by software engineers.

Let me give you an example of using traits first hand. Let's define an Animal class which we want to extend into another class:

class Animal
{
    public function walk()
    {
        return "walking...";
    }
}

class Cat extends Animal
{
    public function walk()
    {
        return "walking with tail wagging...";
    }
}

Now let's suppose we have functions to name our objects, but we don't want them to apply to all the classes that extend the Animal class; we want them to apply to certain classes irrespective of whether they inherit the properties of the Animal class or not. So we've defined our functions like so:

function setFirstName(string $name): bool
{
    $this->firstName = $name;
    return true;
}

function setLastName(string $name): bool
{
    $this->lastName = $name;
    return true;
}

The problem now is that there is no place we can put them without resorting to horizontal reuse, apart from copying and pasting different bits of code or resorting to conditional inheritance. This is where traits come to the rescue; let's start off by wrapping these methods in a trait called Name:

trait Name
{
    function setFirstName(string $name): bool
    {
        $this->firstName = $name;
        return true;
    }

    function setLastName(string $name): bool
    {
        $this->lastName = $name;
        return true;
    }
}

So now that we've defined our trait, we can just tell PHP to use it in our Cat class:

class Cat extends Animal
{
    use Name;

    public function walk()
    {
        return "walking with tail wagging...";
    }
}

Notice the use Name; statement? That's where the magic happens.
Now you can call the functions in that trait without any problems:

$whiskers = new Cat();
$whiskers->setFirstName('Paul');
echo $whiskers->firstName;

All put together, the new code block looks as follows:

trait Name
{
    function setFirstName(string $name): bool
    {
        $this->firstName = $name;
        return true;
    }

    function setLastName(string $name): bool
    {
        $this->lastName = $name;
        return true;
    }
}

class Animal
{
    public function walk()
    {
        return "walking...";
    }
}

class Cat extends Animal
{
    use Name;

    public function walk()
    {
        return "walking with tail wagging...";
    }
}

$whiskers = new Cat();
$whiskers->setFirstName('Paul');
echo $whiskers->firstName;

Scalar type hints

Let me take this opportunity to introduce you to a PHP 7 concept known as scalar type hinting; it allows you to define the types of a function's arguments and of its return value (yes, I know this isn't strictly within the scope of OOP; deal with it). Let's define a function, as follows:

function addNumbers(int $a, int $b): int
{
    return $a + $b;
}

Let's take a look at this function. Firstly, you will notice that before each of the arguments we define the type of variable we want to receive; in this case, int or integer. Next, you'll notice there's a bit of code after the function definition, : int, which defines our return type, so our function can only return an integer.

If you don't provide the right type of variable as a function argument, or don't return the right type of variable from the function, you will get a TypeError exception. PHP will also throw a TypeError exception in the event that strict mode is enabled and you provide the incorrect number of arguments.

It is also possible in PHP to enable strict_types; let me explain why you might want to do this. Without strict_types, PHP will attempt to automatically convert a variable to the defined type in very limited circumstances. For example, if you pass a string containing solely numbers, it will be converted to an integer; a string that's non-numeric, however, will result in a TypeError exception. Once you enable strict_types, this all changes: you no longer have this automatic casting behavior.

Taking our previous example, without strict_types you could do the following:

echo addNumbers(5, "5.0");

Trying it again after enabling strict_types, you will find that PHP throws a TypeError exception.

This configuration only applies on an individual file basis; putting it before you include other files will not result in the configuration being inherited by those files. There are multiple benefits behind why PHP chose to go down this route; they are listed very clearly in Version 0.5.3 of the RFC that implemented scalar type hints, called PHP RFC: Scalar Type Declarations. You can read about it by going to http://www.wiki.php.net (the wiki, not the main PHP website) and searching for scalar_type_hints_v5.

In order to enable it, make sure you put this as the very first statement in your PHP script:

declare(strict_types=1);

This will not work unless you define strict_types as the very first statement in a PHP script; no other usages of this directive are permitted. Indeed, if you try to define it later on, PHP will throw a fatal error.

Of course, in the interests of the rage-induced PHP core fanatic reading this book in its coffee-stained form, I should mention that there are other valid types that can be used in type hinting. For example, PHP 5.1.0 introduced this with arrays, and PHP 5.0.0 introduced the ability for a developer to do this with their own classes.
Let me give you a quick example of how this would work in practice. Suppose we had an Address class:

class Address
{
    public $firstLine;
    public $postcode;
    public $country;

    public function __construct(string $firstLine, string $postcode, string $country)
    {
        $this->firstLine = $firstLine;
        $this->postcode = $postcode;
        $this->country = $country;
    }
}

We can then type hint the Address class as a dependency that we inject into a Customer class:

class Customer
{
    public $name;
    public $address;

    public function __construct($name, Address $address)
    {
        $this->name = $name;
        $this->address = $address;
    }
}

And just to show how it all comes together:

$address = new Address('10 Downing Street', 'SW1A2AA', 'UK');
$customer = new Customer('Davey Cameron', $address);
var_dump($customer);

Limiting debug access to private/protected properties

If you define a class that contains private or protected properties, you will notice an odd behavior if you var_dump an object of that class: the dump reveals all variables, be they protected, private, or public. PHP treats var_dump as an internal debugging function, meaning all data becomes visible.

Fortunately, there is a workaround for this. PHP 5.6 introduced the __debugInfo magic method. Functions in classes preceded by a double underscore represent magic methods and have special functionality associated with them. Every time you var_dump an object that has the __debugInfo magic method defined, the var_dump output is overridden with the result of that function call instead.

Let me show you how this works in practice. Let's start by defining a class:

class Bear
{
    private $hasPaws = true;
}

Let's instantiate this class:

$richard = new Bear();

Now, if we were to try to access the private property hasPaws, we would get a fatal error; so this call:

echo $richard->hasPaws;

would result in the following fatal error being thrown:

Fatal error: Cannot access private property Bear::$hasPaws

That is the expected output; we don't want a private property to be visible outside its object. That being said, if we wrap the object in a var_dump as follows:

var_dump($richard);

we would then get the following output:

object(Bear)#1 (1) {
    ["hasPaws":"Bear":private]=> bool(true)
}

As you can see, our private property is marked as private, but it is nevertheless visible. So how would we go about preventing this? Let's redefine our class as follows:

class Bear
{
    private $hasPaws = true;

    public function __debugInfo()
    {
        return call_user_func('get_object_vars', $this);
    }
}

Now, after we instantiate our class and var_dump the resulting object, we get the following output:

object(Bear)#1 (0) {
}

The script, all put together, looks like this now; you will notice I've added an extra public property called growls, which I have set to true:

<?php
class Bear
{
    private $hasPaws = true;
    public $growls = true;

    public function __debugInfo()
    {
        return call_user_func('get_object_vars', $this);
    }
}

$richard = new Bear();
var_dump($richard);

If we were to run this script (with both a public and a private property to play with), we would get the following output:

object(Bear)#1 (1) {
    ["growls"]=> bool(true)
}

As you can see, only the public property is visible. So what is the moral of the story from this little experiment? Firstly, that var_dump exposes private and protected properties inside objects, and secondly, that this behavior can be overridden.

Summary

In this article, we revised some PHP principles, including OOP principles.
We also revised some PHP syntax basics.

Resources for Article:

Further resources on this subject:
Running Simpletest and PHPUnit [article]
Data Tables and DataTables Plugin in jQuery 1.3 with PHP [article]
Understanding PHP basics [article]


Hello TDD!

Packt
14 Sep 2016
6 min read
In this article by Gaurav Sood, the author of the book Scala Test-Driven Development, we cover the basics of Test-Driven Development. We will explore:

What is Test-Driven Development?
What is the need for Test-Driven Development?
A brief introduction to Scala and SBT

What is Test-Driven Development?

Test-Driven Development, or TDD (as it is commonly referred to), is the practice of writing your tests before writing any application code. This consists of the following iterative steps: write a failing test, write just enough application code to make it pass, refactor, and then repeat. This process is also referred to as Red-Green-Refactor-Repeat. TDD became more prevalent with the rise of the agile software development process, though it can be used just as easily with any of Agile's predecessors, like Waterfall. Though TDD is not specifically mentioned in the agile manifesto (http://agilemanifesto.org), it has become a standard methodology used with agile. That said, you can still use agile without using TDD.

Why TDD?

The need for TDD arises from the fact that there can be constant changes to the application code. This becomes more of a problem when we are using an agile development process, as it is inherently an iterative development process. Here are some of the advantages which underpin the need for TDD:

Code quality: Tests on their own make the programmer more confident of their code. Programmers can be sure of the syntactic and semantic correctness of their code.

Evolving architecture: Purely test-driven application code gives way to an evolving architecture. This means that we do not have to predefine our architectural boundaries and design patterns. As the application grows, so does the architecture. This results in an application that is flexible towards future changes.

Avoids over-engineering: Tests that are written before the application code define and document the boundaries. These tests also document the requirements and application code. Agile purists normally regard comments inside the code as a smell; according to them, your tests should document your code. Since all the boundaries are predefined in the tests, it is hard to write application code that breaches these boundaries. This, however, assumes that TDD is followed religiously.

Paradigm shift: When I started with TDD, I noticed that the first question I asked myself after looking at a problem was: "How can I solve it?" This, however, is counterproductive. TDD forces the programmer to think about the testability of the solution before its implementation. Understanding how to test a problem means gaining a better understanding of the problem and its edge cases. This in turn can result in refinement of the requirements, or the discovery of some new requirements. Now it has become impossible for me not to think about the testability of a problem before the solution. Now the first question I ask myself is: "How can I test it?"

Maintainable code: I have always found it easier to work on an application that has historically been test-driven than on one that has not. Why? Only because when I make a change to the existing code, the existing tests make sure that I do not break any existing functionality. This results in highly maintainable code, where many programmers can collaborate simultaneously.

Brief introduction to Scala and SBT

Let us look at Scala and SBT briefly. It is assumed that the reader is familiar with Scala, and therefore we will not go into depth on it.

What is Scala

Scala is a general-purpose programming language. Scala is an acronym for Scalable Language. This reflects the vision of its creators of making Scala a language that grows with the programmer's experience of it. The fact that Scala and Java objects can be freely mixed makes the transition from Java to Scala quite easy. Scala is also a full-blown functional language. Unlike Haskell, which is a pure functional language, Scala allows interoperability with Java and support for object-oriented programming. Scala also allows the use of both pure and impure functions. Impure functions have side effects like mutation, I/O, and exceptions. A purist approach to Scala programming encourages the use of pure functions only. Scala is a type-safe JVM language that incorporates both object-oriented and functional programming into an extremely concise, logical, and extraordinarily powerful language.

Why Scala?

Here are some advantages of using Scala:

A functional solution to a problem is always better: This is my personal view and open to contention. Elimination of mutation from application code allows the application to be run in parallel across hosts and cores without any deadlocks.
Better concurrency model: Scala has an actor model that is better than Java's model of locks on threads.
Concise code: Scala code is more concise than its more verbose cousin, Java.
Type safety / static typing: Scala does type checking at compile time.
Pattern matching: Case statements in Scala are super powerful.
Inheritance: Mixin traits are great, and they definitely reduce code repetition.

There are other features of Scala, like closures and monads, which need more understanding of functional language concepts to learn.

Scala Build Tool

Scala Build Tool (SBT) is a build tool that allows compiling, running, testing, packaging, and deployment of your code. SBT is mostly used with Scala projects, but it can just as easily be used for projects in other languages. Here, we will be using SBT as a build tool for managing our project and running our tests. SBT is written in Scala and can use many of the features of the Scala language. Build definitions for SBT are also written in Scala. These definitions are both flexible and powerful. SBT also allows the use of plugins and dependency management. If you have used a build tool like Maven or Gradle in any of your previous incarnations, you will find SBT a breeze.

Why SBT?

Better dependency management
Ivy-based dependency management
Only-update-on-request model
Can launch REPL in project context
Continuous command execution
Scala language support for creating tasks

Resources for learning Scala

Here are a few of the resources for learning Scala:

http://www.scala-lang.org/
https://www.coursera.org/course/progfun
https://www.manning.com/books/functional-programming-in-scala
http://www.tutorialspoint.com/scala/index.htm

Resources for SBT

Here are a few of the resources for learning SBT:

http://www.scala-sbt.org/
https://twitter.github.io/scala_school/sbt.html

Summary

In this article we learned what TDD is and why to use it. We also learned about Scala and SBT.

Resources for Article:

Further resources on this subject:
Overview of TDD [article]
Understanding TDD [article]
Android Application Testing: TDD and the Temperature Converter [article]


Getting Started with a Cloud-Only Scenario

Packt
13 Sep 2016
23 min read
In this article by Jochen Nickel, from the book Mastering Identity and Access Management with Microsoft Azure, we will first start with a business view to identify the important business needs and challenges of a cloud-only environment and scenario. Throughout this article, we will also discuss the main features of and licensing information for such an approach. Finally, we will round up with the challenges surrounding security and legal requirements. The topics we will cover in this article are as follows:

Identifying the business needs and challenges
An overview of feature and licensing decisions
Defining the benefits and costs
Principles of security and legal requirements

Identifying business needs and challenges

Oh! Don't worry, we don't have the intention of boring you with a lesson of typical IAM stories; we're sure you've been in touch with a lot of information in this area. However, you do need to have an independent view of the actual business needs and challenges in the cloud area, so that you can get the most out of your own situation.

Common Identity and Access Management needs

Identity and Access Management (IAM) is the discipline that plays an important role in the current cloud era of your organization. It is also of value to small and medium-sized companies, enabling the right individuals to access the right resources from any location and device, at the right time and for the right reasons, to empower and enable the desired business outcomes. IAM addresses the mission-critical need of ensuring appropriate and secure access to resources inside and across company borders, such as cloud or partner applications. The old security strategy of only securing your environment with an intelligent firewall concept and access control lists will take on a more and more subordinate role. Reviewing and reworking this strategy is recommended in order to meet higher compliance, operational, and business requirements.

To adopt a mature security and risk management practice, it's very important that your IAM strategy is business-aligned and that the required business skills and stakeholders are committed to this topic. Without clearly defined business processes you can't implement a successful IAM functionality in the planned timeframe. Companies that follow this strategy can become more agile in supporting new business initiatives and reduce their IAM costs. The following three groups show the typical indicators of missing IAM capabilities on premises and in cloud services:

Your employees/partners:
The same password used across multiple applications without periodic changes (also in social media accounts)
Multiple identities and logins
Passwords written down in sticky notes, Excel, and so on
Application and data access still allowed after termination
Forgotten usernames and passwords
Poor usability of application access inside and outside the company (multiple logins, VPN connection required, incompatible devices, and so on)
Your IT department:
High workload on password reset support
Missing automated identity lifecycles with integrity (data duplication and data quality problems)
No insight into application usage and security
Missing reporting tools for compliance management
Complex integration of central access to Software as a Service (SaaS), partner, and on-premises applications (missing central access/authentication/authorization platform)
No policy enforcement in cloud services usage
Accumulation of access rights (missing processes)

Your developers:
Limited knowledge of all the different security standards, protocols, and APIs
Constantly changing requirements and rapid developments
Complex changes of the identity provider

Implications of Shadow IT

On top of that, the IT department will often hear the following question: When can we expect the new application for our business unit? Sorry, but the answer will always take too long. Why should I wait? All I need is a valid credit card that allows me to buy my required business application. And suddenly another popular phenomenon pops up: shadow IT! Most of the time, this introduces another problem: uncontrolled information leakage. What you don't know can hurt!

This should not give you the impression that cloud services are inherently dangerous, rather that before using them you should first be aware of what is being used, and in which manner. Simply migrating or ordering a new service in the cloud won't solve common IAM needs. If not planned, the introduction of a new or migrated service brings with it a new identity and credential set for the users, and therefore multiple credentials and logins to remember! You should also be sure which information can be stored and processed in a regulatory area other than your own organization's. The responsibilities involved when using the different cloud service models are as follows; in particular, note that you are responsible for data classification, IAM, and endpoint security in every model:

Data classification: customer responsibility in IaaS, PaaS, and SaaS
Endpoint security: customer responsibility in IaaS, PaaS, and SaaS
Identity and Access Management: customer responsibility in IaaS; shared between customer and provider in PaaS and SaaS
Application security: customer responsibility in IaaS; shared in PaaS; provider responsibility in SaaS
Network controls: customer responsibility in IaaS; shared in PaaS; provider responsibility in SaaS
Host security: provider responsibility in IaaS, PaaS, and SaaS
Physical security: provider responsibility in IaaS, PaaS, and SaaS

The mobile workforce and cloud-first strategy

Many organizations are facing the challenge of meeting the expectations of a mobile workforce, all with their own device preferences, a mix of private and professional commitments, and the request to use social media as an additional channel of business communication. Let's dive into a short, practical, but extreme example. The AzureID company employs approximately 80 employees. They work with a SaaS landscape of eight services to drive all their business processes. On premises, they use Network-Attached Storage (NAS) to store some corporate data and provide network printers to all of the employees. Some of the printers are directly attached to the C-level of the company. The main issues today are that the employees need to remember all their usernames and passwords for all the business applications, and if they want to share some information with partners they cannot give them partial access to the necessary information in a secure and flexible way.
Another point is that if they want to access corporate data from their mobile devices, it's always a burden to provide every single login for the applications necessary to fulfil their job. The small IT department, with one Full-Time Equivalent (FTE), is overloaded with having to create and manage every identity in each different service. In addition, users forget their passwords periodically, and most of the time outside normal business hours.

Let's analyze this extreme example to reveal some typical problems, so that you can match some ideas to your own IT infrastructure:

Provisioning, managing, and de-provisioning identities can be a time-consuming task
There is no single identity and credential set for the employees
There is no collaboration support for partner and consumer communication
There is no self-service password reset functionality
Sensitive information leaves the corporation over email
There are no usage or security reports about the accessed applications/services
There is no central way to enable multi-factor authentication for sensitive applications
There is no secure strategy for accessing social media
There is no usable, secure, and central remote access portal

Remember, shifting applications and services to the cloud just introduces more implications and challenges, not solutions. First of all, you need your IAM functionality accurately in place. You also always need to handle on-premises resources, such as minimal printer management.

An overview of feature and licensing decisions

With the cloud-first strategy of Microsoft, the Azure platform and its number of services grow constantly, and we have seen a lot of customers lost in a paradise of wonderful services and functionality. This brings us to the point of how to figure out the services relevant to IAM for you, and how to give them the space for explanation. Obviously, there are more services available that touch this field with a small subset of functionality, but due to the limited page count of this book we will focus on the most important ones and reference any other interesting content.

The primary service for IAM is the Azure Active Directory service, which has also been the core directory service for Office 365 since 2011. Every other SaaS offering from Microsoft is also based on this core service, including Microsoft Intune, Dynamics, and Visual Studio Online. So, if you are already an Office 365 customer, you will have your own instance of Azure Active Directory in place. For sustained access management and the protection of your information assets, the Azure Rights Management services are in place. There is also an option for Office 365 customers to use the included Azure Rights Management services. You can find further information about this by visiting the following link: http://bit.ly/1KrXUxz.

Let's get started with the feature sets that can provide a solution. Including Azure Active Directory and Rights Management helps you to provide a secure solution with a central access portal for all of your applications, with just one identity and login for your employees, partners, and the customers you want to share your information with. With a few clicks you can also add multi-factor authentication to your sensitive applications and administrative accounts. Furthermore, you can directly add a self-service password reset functionality that your users can use to reset their passwords themselves.
As the administrator, you will receive predefined security and usage reports to control your complete application ecosystem. To protect your sensitive content, you will receive digital rights management capabilities with the Azure Rights Management services, giving you granular access rights on every device on which your information is used. Doesn't that sound great? Let's take a deeper look into the functionality and usage of the different Microsoft Azure IAM services.

Azure Active Directory

Azure Active Directory is a fully managed multi-tenant service that provides IAM capabilities as a service. This service is not just an instance of the Windows Server Domain Controller you already know from your existing Active Directory infrastructure, and Azure AD is not a replacement for Windows Server Active Directory either. If you already use a local Active Directory infrastructure, you can extend it to the cloud by integrating Azure AD, so that your users authenticate in the same way against on-premises and cloud services.

Staying with the business view, we want to discuss some of the main features of Azure Active Directory. Firstly, we want to start with the Access panel, which gives users a central place to access all their applications from any device and any location with single sign-on. The combination of the Azure Active Directory Access panel and the Windows Server 2012 R2/2016 Web Application Proxy / ADFS capabilities provides an efficient way to securely publish web applications and services to your employees, partners, and customers. It is a good replacement for your retired Forefront TMG/UAG infrastructure. Over this portal, your users can do the following:

User and group management
Access their business-relevant applications (on-premises, partner, and SaaS) with single sign-on or single logon
Delegation of access control to the data, process, or project owner
Self-service profile editing for correcting or adding information
Self-service password change and reset
Management of registered devices

With the self-service password reset functionality, a user gets a straightforward way to reset their password and to prove their identity, for example through a phone call, an email, or by answering security questions. The different portals can be customized with your own corporate identity branding. To try the different portals, just use the following links: https://myapps.microsoft.com and https://passwordreset.microsoftonline.com.

To complete our short introduction to the main features of Azure Active Directory, we will take a look at the reporting capabilities. With this feature you get predefined reports providing the following information; by viewing and acting on these reports, you are able to control your whole application ecosystem published over the Azure AD Access panel:

Anomaly reports
Integrated application reports
Error reports
User-specific reports
Activity logs

From our discussions with customers we recognize that, a lot of the time, the differences between the different Azure Active Directory editions are unclear. For that reason, we will include and explain the feature tables provided by Microsoft. We will start with the common features and then go through the premium features of Azure Active Directory.

Common features

First of all, we want to discuss the Access panel portal so we can clear up some open questions. With the Azure AD Free and Basic editions, you can provide a maximum of 10 applications to every user. However, this doesn't mean that you are limited to 10 applications in total.
Next, the portal link: right now it cannot be changed to your own company-owned domain, such as https://myapps.inovit.ch. The only thing you can do is provide an alias in your DNS configuration; the accessed link remains https://myapps.microsoft.com. Company branding leads us on to the next discussion point, where we are often asked how much corporate identity branding is possible. The following link provides you with all the necessary information for branding your solution: http://bit.ly/1Jjf2nw. Rounding up this short Q&A on the different feature sets is Application Proxy usage, one of the important differentiators between the Azure AD Free and Basic editions. The short answer is that with Azure AD Free, you cannot publish on-premises applications and services over the Azure AD Access panel portal.

The editions compare as follows:

Directory as a service (objects): 500k in Free; unlimited in Basic and Premium
User/group management (UI or PowerShell): Free, Basic, and Premium
Access panel portal for SSO: 10 apps per user in Free and Basic; unlimited in Premium
User-based application access management/provisioning: Free, Basic, and Premium
Self-service password change (cloud users): Free, Basic, and Premium
Directory synchronization tool: Free, Basic, and Premium
Standard security reports: Free, Basic, and Premium
High availability SLA (99.9%): Basic and Premium
Group-based application access management and provisioning: Basic and Premium
Company branding: Basic and Premium
Self-service password reset for cloud users: Basic and Premium
Application Proxy: Basic and Premium
Self-service group management for cloud users: Basic and Premium

Premium features

The Azure Active Directory Premium edition provides you with the entire set of IAM capabilities, including the usage licenses of the on-premises Microsoft Identity Manager. From a technical perspective, you need to use the Azure AD Connect utility to connect your on-premises Active Directory with the cloud, and Microsoft Identity Manager to manage your on-premises identities and prepare them for your cloud integration. To acquire Azure AD Premium, you can also use the Enterprise Mobility Suite (EMS) bundle, which contains Azure AD Premium, Azure Rights Management, Microsoft Intune, and Advanced Threat Analytics (ATA) licensing. You can find more information about EMS by visiting http://bit.ly/1cJLPcM and http://bit.ly/29rupF4.

The following features are exclusive to Azure AD Premium:

Self-service password reset with on-premises write-back
Microsoft Identity Manager server licenses
Advanced anomaly security reports
Advanced usage reporting
Multi-Factor Authentication (cloud users)
Multi-Factor Authentication (on-premises users)

Azure AD Premium reference: http://bit.ly/1gyDRoN

Multi-Factor Authentication for cloud users is also included in Office 365. The main difference is that you cannot use it for on-premises users and services such as VPN or web servers.

Azure Active Directory Business to Business

One of the newest features based on Azure Active Directory is the Business to Business (B2B) capability. It solves the problem of collaboration between business partners, allowing users to share business applications between partners without going through inter-company federation relationships and internally managed partner identities. With Azure AD B2B, you can create cross-company relationships by inviting and authorizing users from partner companies to access your resources. With this process, each company federates once with Azure AD and each user is then represented by a single Azure AD account. This option also provides a higher security level, because if a user leaves the partner organization, access is automatically disallowed.
Inside Azure AD, the user is handled as a guest and is not able to traverse other users in the directory. The actual permissions are provided through the correct associated group memberships.

Azure Active Directory Business to Consumer

Azure Active Directory Business to Consumer (B2C) is another brand new feature based on Azure Active Directory. This functionality supports signing in to your application using social networks like Facebook, Google, or LinkedIn, as well as creating accounts with usernames and passwords specifically for your company-owned application. Self-service password management and profile management are also provided in this scenario. Additionally, Multi-Factor Authentication introduces a higher grade of security to the solution. Principally, this feature allows small and medium companies to hold their customers in a separate Azure Active Directory with all the capabilities, and more, of the corporate-managed Azure Active Directory. With different verification options, you are also able to provide the necessary identity assurance required for more sensitive transactions.

Azure Active Directory Privileged Identity Management

Azure AD Privileged Identity Management provides you with the functionality to manage, control, and monitor your privileged identities. With this option, you can build up an RBAC solution over your Azure AD and other Microsoft online services, such as Office 365 or Microsoft Intune. The following activities can be achieved with this functionality:

You can discover the currently configured Azure AD administrators
You can provide just-in-time administrative access
You can get reports about administrator access history and assignment changes
You can receive alerts about access to a privileged role

The following built-in roles can be managed with the current version:

Global Administrator
Billing Administrator
Service Administrator
User Administrator
Password Administrator

Azure Multi-Factor Authentication

Protecting sensitive information or application access with additional authentication is an important task, and not just in the on-premises world. In particular, it needs to be extended to every sensitive cloud service in use. There are many ways of providing this level of security and additional authentication, such as certificates, smart cards, or biometric options. For example, smart cards depend on special hardware used to read them and cannot be used in every scenario without limiting access to a special device or piece of hardware. The following overview lists different attacks and the security solutions that mitigate them in a well-designed and well-implemented solution: Password brute force, Strong password policies, Shoulder surfing, Key or screen logging, One-time password solution, Phishing or pharming, Server authentication (HTTPS), Man-in-the-Middle, Whaling (Social engineering), Two-factor authentication, Certificate or one-time password solution, Certificate authority corruption, Cross Channel Attacks (CSRF), Transaction signature and verification, Non repudiation, Man-in-the-Browser, Key loggers, Secure PIN entry, Secure messaging, Browser (read only), Push button (token), Three-factor authentication.
With a one-time password solution, you can build a very capable security solution for accessing information or applications from devices that cannot use smart cards as the additional authentication method. Otherwise, for small or medium-sized organizations, a smart card deployment, including the appropriate management solution, will be too cost-intensive, and the Azure MFA solution can be a good alternative for reaching the expected higher security level.

In discussions with our customers, we recognized that many don't realize that Azure MFA is already included in different Office 365 plans. They would be able to protect their Office 365 with multi-factor authentication out of the box, but they don't know it! This brings us to the following comparison of the functionality in Office 365 and Azure MFA.

Features available in both Office 365 MFA and Azure MFA:

Administrators can enable/enforce MFA for end users
Use mobile app (online and OTP) as second authentication factor
Use phone call as second authentication factor
Use SMS as second authentication factor
App passwords for non-browser clients (for example, Outlook, Lync)
Default Microsoft greetings during authentication phone calls
Remember Me

Features available in Azure MFA only:

IP whitelist
Custom greetings during authentication phone calls
Fraud alert
Event confirmation
Security reports
Block/unblock users
One-time bypass
Customizable caller ID for authentication phone calls
MFA Server - MFA for on-premises applications
MFA SDK - MFA for custom apps

With the Office 365 capabilities of MFA, administrators are able to use basic functionality to protect their sensitive information. If you need to integrate on-premises users and services in particular, the Azure MFA solution is needed. Azure MFA and the on-premises installation of the MFA server cannot be used to protect your Windows Server DirectAccess implementation, and you will find the customizable caller ID limited to specific regions.

Azure Rights Management

More and more organizations are in the position of needing to provide a continuous and integrated information protection solution to protect sensitive assets and information. On one side stands the department, which carries out its business activities, generates the data, and processes it. Furthermore, it uses the data inside and outside the functional areas, passes it on, and runs a lively exchange of information. On the other side, auditing is required by legal regulations that prescribe measures to ensure that information is handled properly and that dangers such as industrial espionage and data loss are avoided. So this is a big concern when safeguarding sensitive information. While the staff appreciate the many ways of communication and data exchange, this development starts stressing the IT security officers and makes managers worried. The fear is that critical corporate data leaves the company in an uncontrolled manner or moves to competitors. The routes are varied, but data is often lost through inadvertent delivery via email. In addition, sensitive data can leave the company on a USB stick or smartphone, or IT media can be lost or stolen. On top of that, new risks are added, such as employees posting information on social media platforms. IT must ensure the protection of data in all phases, and traditional IT security solutions are not always sufficient. This situation leads us to the Azure Rights Management services.
As with the other additional features, the base functionality is included in different Office 365 plans. The main difference is that only the Azure RMS edition can be integrated into an on-premises file server environment in order to use the File Classification Infrastructure (FCI) feature of the Windows Server file server role. The Azure RMS capability allows you to protect your sensitive information with a granular access control system based on classification information. The following comparison, provided by Microsoft, shows the differences between the Office 365 and Azure RMS functionality. Azure RMS is included with the E3, E4, A3, and A4 plans.

Features available in both RMS for Office 365 and Azure RMS:

Users can create and consume protected content by using Windows clients and Office applications
Users can create and consume protected content by using mobile devices
Integrates with Exchange Online, SharePoint Online, and OneDrive for Business
Integrates with Exchange Server 2013/Exchange Server 2010 and SharePoint Server 2013/SharePoint Server 2010 on-premises via the RMS connector
Administrators can create departmental templates
Organizations can create and manage their own RMS tenant key in a hardware security module (the Bring Your Own Key solution)
Supports non-Office file formats: text and image files are natively protected; other files are generically protected
RMS SDK for all platforms: Windows, Windows Phone, iOS, Mac OS X, and Android

Features available in Azure RMS only:

Integrates with Windows file servers for automatic protection with FCI via the RMS connector
Users can track usage of their documents
Users can revoke access to their documents

In particular, the tracking feature helps users to find out where their documents are distributed, and allows them to revoke access to a single protected document.

Microsoft Azure security services in combination

Now that we have discussed the relevant Microsoft Azure IAM capabilities, you can see that Microsoft provides more than just single features or subsets of functionality. Rather, it brings a whole solution to the market, which provides functionality for every facet of IAM. Microsoft Azure also combines clear service management with IAM, making it a rich solution for your organization. You can work with that toolset in a native cloud-first scenario, a hybrid scenario, or a complex hybrid scenario, and you can extend your solution to every possible use case or environment.

Defining the benefits and costs

The Microsoft Azure IAM capabilities help you to empower your users with a flexible and rich solution that enables better business outcomes in a more productive way. You help your organization to improve regulatory compliance overall and to reduce the information security risk. Additionally, it can be possible to reduce IT operating and development costs by providing higher operating efficiency and transparency. Last but not least, it will lead to improved user satisfaction and better support from the business for further investments. The following tools give you very good instruments for calculating the costs of your specific environment:
Azure Active Directory Pricing Calculator: http://bit.ly/1fspdhz
Enterprise Mobility Suite Pricing: http://bit.ly/1V42RSk
Microsoft Azure Pricing Calculator: http://bit.ly/1JojUfA

Principles of security and legal requirements

The classification of data, such as business information or personal data, is not only necessary for an on-premises infrastructure. It is a basis for the assurance of business-related information and is represented by compliance with official regulations. These requirements are of greater significance when using cloud services or solutions outside your own company and regulatory borders. They are clearly needed for a controlled shift of data into an area in which responsibilities must be contractually regulated. Security responsibilities do not stop at the private cloud boundary; you remain responsible for the technical and organizational implementation and control of security settings. The subsequent objectives are as follows:

- Construction, extension, or adaptation of the data classification to the cloud integration
- Data classification as a basis for encryption or isolated security silos
- Data classification as a basis for authentication and authorization

Microsoft itself has strict controls that restrict access to Azure to Microsoft employees. Microsoft also enables customers to control access to their Azure environments, data, and applications, as well as allowing them to penetrate and audit services with special auditors and regulations on request. A statement from Microsoft:

Customers will only use cloud providers in which they have great trust. They must trust that the privacy of their information will be protected, and that their data will be used in a way that is consistent with their expectations. We build privacy protections into Azure through Privacy by Design.

You can get all the necessary information about security, compliance, and privacy by visiting the following link: http://bit.ly/1uJTLAT.

Summary

Now that you are fully clued up with information about typical needs and challenges, and feature and licensing information, you should be able to apply the right technology and licensing model to your cloud-only scenario. You should also be aware of the benefits and cost calculators that will help you calculate a basic price model for your required services. Furthermore, you can also decide which security and legal requirements are relevant for your cloud-only environments.

Resources for Article:

Further resources on this subject:
Creating Multitenant Applications in Azure [article]
Building A Recommendation System with Azure [article]
Installing and Configuring Windows Azure Pack [article]


Computer Vision with Keras, Part 2

Sasank Chilamkurthy
13 Sep 2016
6 min read
If you were following along in Part 1, you will have seen how we used Keras to create our model for tackling The German Traffic Sign Recognition Benchmark (GTSRB). Now in Part 2 you will see how we achieve performance close to human-level performance. You will also see how to improve the accuracy of the model using augmentation of the training data.

Training

Now, our model is ready to train. During the training, our model will iterate over batches of the training set, each of size batch_size. For each batch, gradients will be computed and updates will be made to the weights of the network automatically. One iteration over all of the training set is referred to as an epoch. Training is usually run until the loss converges to a constant. We will add a couple of features to our training:

- Learning rate scheduler: Decaying the learning rate over the epochs usually helps the model learn better.
- Model checkpoint: We will save the model with the best validation accuracy. This is useful because our network might start overfitting after a certain number of epochs, but we want to keep the best model.

These are not strictly necessary, but they improve the model accuracy. They are implemented via the callback feature of Keras. Callbacks are a set of functions that will be applied at given stages of the training procedure, such as the end of an epoch. Keras provides built-in functions for both learning rate scheduling and model checkpointing.

from keras.callbacks import LearningRateScheduler, ModelCheckpoint

def lr_schedule(epoch):
    # lr is the base learning rate used for the SGD optimizer in Part 1
    return lr * (0.1 ** int(epoch / 10))

batch_size = 32
nb_epoch = 30

model.fit(X, Y,
          batch_size=batch_size,
          nb_epoch=nb_epoch,
          validation_split=0.2,
          callbacks=[LearningRateScheduler(lr_schedule),
                     ModelCheckpoint('model.h5', save_best_only=True)]
          )

You'll see that the model starts training and logs the losses and accuracies:

Train on 31367 samples, validate on 7842 samples
Epoch 1/30
31367/31367 [==============================] - 30s - loss: 1.1502 - acc: 0.6723 - val_loss: 0.1262 - val_acc: 0.9616
Epoch 2/30
31367/31367 [==============================] - 32s - loss: 0.2143 - acc: 0.9359 - val_loss: 0.0653 - val_acc: 0.9809
Epoch 3/30
31367/31367 [==============================] - 31s - loss: 0.1342 - acc: 0.9604 - val_loss: 0.0590 - val_acc: 0.9825
...

Now this might take a bit of time, especially if you are running on a CPU. If you have an Nvidia GPU, you should install CUDA. It speeds up the training dramatically. For example, on my MacBook Air it takes 10 minutes per epoch, while on a machine with an Nvidia Titan X GPU it takes 30 seconds. Even modest GPUs offer an impressive speedup because of the inherent parallelizability of neural networks. This makes GPUs necessary for deep learning if anything big has to be done. Grab a coffee while you wait for training to complete ;).

Congratulations! You have just trained your first deep learning model.
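Before moving on, it is worth a quick look at what the step decay in lr_schedule actually does to the learning rate. The snippet below is a small standalone sketch; the base learning rate lr = 0.01 is an assumed example value, so substitute whatever you passed to your SGD optimizer in Part 1.

lr = 0.01  # assumed example value; use the learning rate from your Part 1 optimizer

def lr_schedule(epoch):
    # divide the learning rate by 10 every 10 epochs (same rule as above)
    return lr * (0.1 ** int(epoch / 10))

for epoch in [0, 9, 10, 19, 20, 29]:
    print("epoch {:2d} -> learning rate {:g}".format(epoch, lr_schedule(epoch)))

# prints 0.01 for epochs 0-9, 0.001 for epochs 10-19, and 0.0001 for epochs 20-29

ModelCheckpoint is independent of this schedule; with save_best_only=True it simply keeps overwriting model.h5 whenever the monitored validation metric improves.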
Evaluation

Let's quickly load the test data and evaluate our model on it:

import pandas as pd

test = pd.read_csv('GT-final_test.csv', sep=';')

# Load test dataset
X_test = []
y_test = []
for file_name, class_id in zip(list(test['Filename']), list(test['ClassId'])):
    img_path = os.path.join('GTSRB/Final_Test/Images/', file_name)
    X_test.append(preprocess_img(io.imread(img_path)))
    y_test.append(class_id)

X_test = np.array(X_test)
y_test = np.array(y_test)

# predict and evaluate
y_pred = model.predict_classes(X_test)
acc = np.sum(y_pred == y_test) / np.size(y_pred)
print("Test accuracy = {}".format(acc))

Which outputs on my system (results may change a bit because the weights of the neural network are randomly initialized):

12630/12630 [==============================] - 2s
Test accuracy = 0.9792557403008709

97.92%! That's great! It's not far from average human performance (98.84%) [1]. A lot of things can be done to squeeze out extra performance from the neural net. I'll implement one such improvement in the next section.

Data Augmentation

You might think 40000 images is a lot of images. Think about it again. Our model has 1358155 parameters (try model.count_params() or model.summary()). That's 4X the number of training images. If we can generate new images for training from the existing images, that will be a great way to increase the size of the dataset. This can be done by slightly:

- Translating the image
- Rotating the image
- Shearing the image
- Zooming in/out of the image

Rather than generating and saving such images to the hard disk, we will generate them on the fly during training. This can be done directly using the built-in functionality of Keras.

from keras.preprocessing.image import ImageDataGenerator
from sklearn.cross_validation import train_test_split

X_train, X_val, Y_train, Y_val = train_test_split(X, Y, test_size=0.2, random_state=42)

datagen = ImageDataGenerator(featurewise_center=False,
                             featurewise_std_normalization=False,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             zoom_range=0.2,
                             shear_range=0.1,
                             rotation_range=10.)
datagen.fit(X_train)

# Reinitialize model and compile
model = cnn_model()
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])

# Train again
nb_epoch = 30
model.fit_generator(datagen.flow(X_train, Y_train, batch_size=batch_size),
                    samples_per_epoch=X_train.shape[0],
                    nb_epoch=nb_epoch,
                    validation_data=(X_val, Y_val),
                    callbacks=[LearningRateScheduler(lr_schedule),
                               ModelCheckpoint('model.h5', save_best_only=True)]
                    )

With this model, I get 98.29% accuracy on the test set. Frankly, I haven't done much parameter tuning. Here is a small list of things which can be tried to improve the model:

- Try different network architectures. Try deeper and shallower networks.
- Try adding BatchNormalization layers to the network (a short sketch follows below).
- Experiment with different weight initializations.
- Try different learning rates and schedules.
- Make an ensemble of models.
- Try normalization of input images.
- More aggressive data augmentation.

This is but a model for beginners. For state-of-the-art solutions to the problem, you can have a look at this, where the authors achieve 99.61% accuracy with a specialized layer called a Spatial Transformer layer.
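As an illustration of the second suggestion in the list above, here is a minimal sketch of how a BatchNormalization layer can be slotted into a convolutional block. This is not the cnn_model() defined in Part 1; the layer sizes, the channels-first input shape, and the Keras 1.x import paths are assumptions made for the example, so adapt them to your own model.

from keras.models import Sequential
from keras.layers import Convolution2D, Activation, Flatten, Dense
from keras.layers.normalization import BatchNormalization

NUM_CLASSES = 43  # GTSRB has 43 traffic sign classes

def small_bn_block(input_shape=(3, 48, 48)):
    # toy example: a single conv block with batch normalization placed
    # between the convolution and its non-linearity
    model = Sequential()
    model.add(Convolution2D(32, 3, 3, border_mode='same', input_shape=input_shape))
    model.add(BatchNormalization(axis=1))  # axis=1 assumes channels-first ('th') image ordering
    model.add(Activation('relu'))
    model.add(Flatten())
    model.add(Dense(NUM_CLASSES, activation='softmax'))
    return model

bn_model = small_bn_block()
bn_model.summary()  # compare the parameter count with your original cnn_model()

The same pattern (convolution, then BatchNormalization, then activation) can be repeated for each block of a deeper network before retraining it as shown earlier.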
Conclusion

In this two-part post, you have learned how to use convolutional networks to solve a computer vision problem. We used the Keras deep learning framework to implement CNNs in Python, and we have achieved performance close to human-level performance. We have also seen a way to improve the accuracy of the model using augmentation of the training data.

References:

[1] Stallkamp, Johannes, et al. "Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition." Neural Networks 32 (2012): 323-332.

About the author

Sasank Chilamkurthy works at Qure.ai. His work involves deep learning on medical images obtained from radiology and pathology. He completed his undergraduate degree in Mumbai at the Indian Institute of Technology, Bombay. He can be found on GitHub here.

article-image-how-build-your-own-futuristic-robot
Packt
13 Sep 2016
5 min read
Save for later

How to Build your own Futuristic Robot

Packt
13 Sep 2016
5 min read
In this article by Richard Grimmett, author of the book Raspberry Pi Robotic Projects - Third Edition, we will start with a simple but impressive project where you'll take a toy robot and give it much more functionality. You'll start with an R2D2 toy robot and modify it to add a webcam, voice recognition, and motors so that it can get around. Creating your own R2D2 will require a bit of mechanical work. You'll need a drill and perhaps a Dremel tool, but most of the mechanical work will be removing the parts you don't need so you can add some exciting new capabilities.

(For more resources related to this topic, see here.)

Modifying the R2D2

There are several R2D2 toys that can provide the basis for this project, all available from online retailers. This project will use one that is inexpensive but also provides such interesting features as a top that turns and a wonderful place to put a webcam. It is the Imperial Toy R2D2 bubble machine. Here is a picture of the unit:

The unit can be purchased at amazon.com, toyrus.com, and a number of other retailers. It is normally used as a bubble machine that uses a canister of soap bubbles to produce bubbles, but you'll take all of that capability out to make your R2D2 much more like the original robot.

Adding wheels and motors

In order to make your R2D2 a reality, the first thing you'll want to do is add wheels to the robot. To do this you'll need to take the robot apart, separating the two main plastic pieces that make up the lower body of the robot. Once you have done this, both the right and left arms can be removed from the body. You'll need to add two wheels, controlled by DC motors, to these arms. Perhaps the best way to do this is to purchase a simple, two-wheeled car that is available at many online electronics stores like amazon.com, ebay.com, or bandgood.com. Here is a picture of the parts that come with the car:

You'll be using these pieces to add mobility to your robot. The two yellow pieces are DC motors, so let's start with those. To add these to the two arms on either side of the robot, you'll need to separate the two halves of each arm, and then remove material from one of the halves, like this:

You can use a Dremel tool to do this, or any kind of device that can cut plastic. This will leave a place for your wheel. Now you'll want to cut up the plastic kit of your car to provide a platform to connect to your R2D2 arm. You'll cut your plastic car up using this as a pattern; you'll want to end up with the two pieces that have the + sign cutouts, and this is where you'll mount your wheels and also the piece you'll attach to the R2D2 arm. The image below will help you understand this better.

On the cut-out side that has not been removed, mark and drill two holes to fix the clear plastic to the bottom of the arm. Then fix the wheel to the plastic, then the plastic to the bottom of the arm as shown in the picture. You'll connect two wires, one to each of the polarities on the motor, and then run the wires up to the top of the arm and out the small holes. These wires will eventually go into the body of the robot through small holes that you will drill where the arms connect to the body, like this:

You'll repeat this process for the other arm. For the third, center arm, you'll want to connect the small, spinning white wheel to the bottom of the arm. Here is a picture:

Now that you have motors and wheels connected to the bottom of the arms, you'll need to connect these to the Raspberry Pi.
There are several different ways to connect and drive these two DC motors, but perhaps the easiest is to add a shield that can directly drive a DC motor. This motor shield is an additional piece of hardware that installs on top of the Raspberry Pi and can source the voltage and current to power both motors. The RaspiRobot Board V3 is available online and can provide these signals. The specifics on the board can be found at http://www.monkmakes.com/rrb3/. Here is a picture of the board:

The board will provide the drive signals for the motors on each of the wheels. The following are the steps to connect the Raspberry Pi to the board:

1. Connect the battery power connector to the power connector on the side of the board.
2. Connect the two wires from one of the motors to the L motor connectors on the board.
3. Connect the other two wires from the other motor to the R motor connectors on the board.

Once completed, your connections should look like this:

The red and black wires go to the battery, the green and yellow to the left motor, and the blue and white to the right motor. Now you will be able to control both the speed and the direction of the motors through the motor control board.
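With the wiring in place, a few lines of Python are enough for a first movement test. The sketch below is only a minimal, hedged example: it assumes the RaspiRobot Board V3 Python library (rrb3) from Monk Makes is installed, and the constructor values and exact method signatures should be checked against that library's documentation and your own battery and motor ratings.

from time import sleep
from rrb3 import RRB3  # assumed: the RaspiRobot Board V3 library from Monk Makes

# Assumed hardware values: a 9 V battery pack driving 6 V motors.
robot = RRB3(9, 6)

try:
    # set_motors(left_speed, left_direction, right_speed, right_direction):
    # speeds range from 0 to 1 and directions are 0 or 1; confirm the parameter
    # order against the rrb3 documentation for your library version.
    robot.set_motors(0.5, 0, 0.5, 0)   # both wheels forward at half speed
    sleep(2)
    robot.set_motors(0.5, 0, 0.5, 1)   # reverse one wheel to spin in place
    sleep(1)
finally:
    robot.stop()                       # always stop the motors when finished

From here, the same calls can be wrapped in functions that respond to voice commands or to requests from a remote computer, tablet, or phone, which is how the robot will eventually be driven.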
Summary

Thus we have covered some aspects of building your first project, your own R2D2. You can now move it around, program it to respond to voice commands, or run it remotely from a computer, tablet, or phone. Following on in this theme, your next robot will look and act like WALL-E.

Resources for Article:

Further resources on this subject:
The Raspberry Pi and Raspbian [article]
Building Our First Poky Image for the Raspberry Pi [article]
Raspberry Pi LED Blueprints [article]


Creating a Slack Progress Bar

Bradley Cicenas
13 Sep 2016
4 min read
There are many ways you can customize your Slack program, but with a little bit of programming you can create things that provide you with all sorts of productivity-enhancing options. In this post, you are going to learn how to use Python to create a custom progress bar that will run in your very own Slack channel. You can use this progress bar in so many different ways. We won't get into the specifics beyond the progress bar, but one idea would be to use it to broadcast the progress of a to-do list in Trello. That is something that is easy to set up in Slack. Let's get started.

Real-time notifications are a great way to follow along with the events of the day. For ever-changing and long-running events, though, a stream of update messages quickly becomes spammy and overwhelming. If you make a typo in a sent message, or want to append additional details, you know that you can go back and edit it without sending a new chat (and subsequently a new notification for those in the channel); so why can't your bot friends do the same? As you may have guessed by now, they can! This program will make use of Slack's chat.update method to create a dynamic progress bar with just one single-line message.

Save the script below to a file named progress.py, updating the SLACK_TOKEN and SLACK_CHANNEL variables at the top with your configured token and desired channel name.

from time import sleep
from random import randint

from slacker import Slacker

SLACK_TOKEN = '<token>'
SLACK_CHANNEL = '#general'


class SlackSimpleProgress(object):
    def __init__(self, slack_token, slack_channel):
        self.slack = Slacker(slack_token)
        self.channel = slack_channel
        res = self.slack.chat.post_message(self.channel, self.make_bar(0))
        self.msg_ts = res.body['ts']
        self.channel_id = res.body['channel']

    def update(self, percent_done):
        self.slack.chat.update(self.channel_id, self.msg_ts, self.make_bar(percent_done))

    @staticmethod
    def make_bar(percent_done):
        return '%s%s%%' % (round(percent_done / 5) * chr(9608), percent_done)


if __name__ == '__main__':
    progress_bar = SlackSimpleProgress(SLACK_TOKEN, SLACK_CHANNEL)  # initialize the progress bar
    percent_complete = 1
    while percent_complete < 100:
        progress_bar.update(percent_complete)
        percent_complete += randint(1, 10)
        sleep(1)
    progress_bar.update(100)

Run the script with a simple python progress.py and, like magic, your progress bar will appear in the channel, updating at regular intervals until completion:

How It Works

In the last six lines of the script:

- A progress bar is initially created with SlackSimpleProgress.
- The update method is used to refresh the current position of the bar once a second.
- percent_complete is slowly incremented by a random amount using Python's random.randint.
- We set the progress bar to 100% when completed.

Extending

With the basic usage down, the same script can be updated to serve a real purpose.
Say we're copying a large number of files between directories for a backup. We'll replace the __main__ part of the script with:

import os
import shutil
import sys

src_path = sys.argv[1]
dest_path = sys.argv[2]

all_files = next(os.walk(src_path))[2]  # list the files in the source directory

progress_bar = SlackSimpleProgress(SLACK_TOKEN, SLACK_CHANNEL)  # initialize the progress bar

files_copied = 0
for file_name in all_files:
    src_file = '%s/%s' % (src_path, file_name)
    dest_file = '%s/%s' % (dest_path, file_name)
    print('copying %s to %s' % (src_file, dest_file))  # print the file being copied
    shutil.copy2(src_file, dest_file)
    files_copied += 1  # increment files copied
    percent_done = files_copied / len(all_files) * 100  # calculate percent done
    progress_bar.update(percent_done)

The script can now be run with two arguments: python progress.py /path/to/files /path/to/destination. You'll see the name of each file copied in the output of the script, and your teammates will be able to follow along with the status of the copy as it progresses in your Slack channel!
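One optional refinement, which is not part of the original script: every call to update() issues a Slack API request, so a copy loop over thousands of small files could flood the API. The hypothetical ThrottledProgress subclass below limits updates to at most one per interval and reuses the SlackSimpleProgress class defined above.

import time

class ThrottledProgress(SlackSimpleProgress):
    """Push an update to Slack at most once per min_interval seconds."""

    def __init__(self, slack_token, slack_channel, min_interval=1.0):
        super(ThrottledProgress, self).__init__(slack_token, slack_channel)
        self.min_interval = min_interval
        self._last_update = 0.0

    def update(self, percent_done):
        now = time.time()
        # always let 100% through so the finished bar is shown
        if percent_done >= 100 or now - self._last_update >= self.min_interval:
            super(ThrottledProgress, self).update(percent_done)
            self._last_update = now

Swapping SlackSimpleProgress for ThrottledProgress in the copy script requires no other changes.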
I hope you find many uses for this progress bar in Slack!

About the author

Bradley Cicenas is a New York City-based infrastructure engineer with an affinity for microservices, systems design, data science, and stoops.

Gearing Up for Bootstrap 4

Packt
12 Sep 2016
28 min read
In this article by Benjamin Jakobus and Jason Marah, the authors of the book Mastering Bootstrap 4, we will be discussing the key points about Bootstrap as a web development framework that helps developers build web interfaces. Originally conceived at Twitter in 2011 by Mark Otto and Jacob Thornton, the framework is now open source and has grown to be one of the most popular web development frameworks to date. Being freely available for private, educational, and commercial use meant that Bootstrap quickly grew in popularity. Today, thousands of organizations rely on Bootstrap, including NASA, Walmart, and Bloomberg. According to BuiltWith.com, over 10% of the world's top 1 million websites are built using Bootstrap (http://trends.builtwith.com/docinfo/Twitter-Bootstrap). As such, knowing how to use Bootstrap will be an important skill and serve as a powerful addition to any web developer's tool belt.

(For more resources related to this topic, see here.)

The framework itself consists of a mixture of JavaScript and CSS, and provides developers with all the essential components required to develop a fully functioning web user interface. Over the course of the book, we will be introducing you to all of the most essential features that Bootstrap has to offer by teaching you how to use the framework to build a complete website from scratch. As CSS and HTML alone are already the subject of entire books in themselves, we assume that you, the reader, have at least a basic knowledge of HTML, CSS, and JavaScript.

We begin this article by introducing you to our demo website—MyPhoto. This website will accompany us throughout the book, and serve as a practical point of reference. Therefore, all lessons learned will be taught within the context of MyPhoto. We will then discuss the Bootstrap framework, listing its features and contrasting the current release to the last major release (Bootstrap 3). Last but not least, this article will help you set up your development environment. To ensure equal footing, we will guide you towards installing the right build tools, and precisely detail the various ways in which you can integrate Bootstrap into a project. To summarize, this article will do the following:

- Introduce you to what exactly we will be doing
- Explain what is new in the latest version of Bootstrap, and how the latest version differs from the previous major release
- Show you how to include Bootstrap in our web project

Introducing our demo project

The book will teach you how to build a complete Bootstrap website from scratch. We will build and improve the website's various sections as we progress through the book. The concept behind our website is simple: to develop a landing page for photographers. Using this landing page, (hypothetical) users will be able to exhibit their wares and services. While building our website, we will be making use of the same third-party tools and libraries that you would if you were working as a professional software developer. We chose these tools and plugins specifically because of their widespread use. Learning how to use and integrate them will save you a lot of work when developing websites in the future. Specifically, the tools that we will use to assist us throughout the development of MyPhoto are Bower, node package manager (npm), and Grunt. From a development perspective, the construction of MyPhoto will teach you how to use and apply all of the essential user interface concepts and components required to build a fully functioning website.
Among other things, you will learn how to do the following:

- Use the Bootstrap grid system to structure the information presented on your website.
- Create a fixed, branded, navigation bar with animated scroll effects.
- Use an image carousel for displaying different photographs, implemented using Bootstrap's carousel.js and jumbotron (jumbotron is a design principle for displaying important content). It should be noted that carousels are becoming an increasingly unpopular design choice; however, they are still heavily used and are an important feature of Bootstrap. As such, we do not argue for or against the use of carousels, as their effectiveness depends very much on how they are used, rather than on whether they are used.
- Build custom tabs that allow users to navigate across different contents.
- Use and apply Bootstrap's modal dialogs.
- Apply a fixed page footer.
- Create forms for data entry using Bootstrap's input controls (text fields, text areas, and buttons) and apply Bootstrap's input validation styles.
- Make best use of Bootstrap's context classes.
- Create alert messages and learn how to customize them.
- Rapidly develop interactive data tables for displaying product information.
- Use drop-down menus, custom fonts, and icons.

In addition to learning how to use Bootstrap 4, the development of MyPhoto will introduce you to a range of third-party libraries such as Scrollspy (for scroll animations), SalvattoreJS (a library for complementing our Bootstrap grid), Animate.css (for beautiful CSS animations, such as fade-in effects, at https://daneden.github.io/animate.css/) and Bootstrap DataTables (for rapidly displaying data in tabular form). The website itself will consist of different sections:

- A Welcome section
- An About section
- A Services section
- A Gallery section
- A Contact Us section

The development of each section is intended to teach you how to use a distinct set of features found in third-party libraries. For example, by developing the Welcome section, you will learn how to use Bootstrap's jumbotron and alert dialogs along with different font and text styles, while the About section will show you how to use cards. The Services section of our project introduces you to Bootstrap's custom tabs. That is, you will learn how to use Bootstrap's tabs to display a range of different services offered by our website. Following on from the Services section, you will need to use rich imagery to really show off the website's sample services. You will achieve this by mastering Bootstrap's responsive core along with Bootstrap's carousel and third-party jQuery plugins. Last but not least, the Contact Us section will demonstrate how to use Bootstrap's form elements and helper functions. That is, you will learn how to use Bootstrap to create stylish HTML forms, how to use form fields and input groups, and how to perform data validation. Finally, toward the end of the book, you will learn how to optimize your website, and integrate it with the popular JavaScript frameworks AngularJS (https://angularjs.org/) and React (http://facebook.github.io/react/). As entire books have been written on AngularJS alone, we will only cover the essentials required for the integration itself.

Now that you have glimpsed a brief overview of MyPhoto, let's examine Bootstrap 4 in more detail, and discuss what makes it so different to its predecessor. Take a look at the following screenshot:

Figure 1.1: A taste of what is to come: the MyPhoto landing page.
What Bootstrap 4 Alpha 4 has to offer

Much has changed since Twitter's Bootstrap was first released on August 19th, 2011. In essence, Bootstrap 1 was a collection of CSS rules offering developers the ability to lay out their website, create forms, buttons, and help with general appearance and site navigation. With respect to these core features, Bootstrap 4 Alpha 4 is still much the same as its predecessors. In other words, the framework's focus is still on allowing developers to create layouts, and helping to develop a consistent appearance by providing stylings for buttons, forms, and other user interface elements. How it helps developers achieve and use these features, however, has changed entirely. Bootstrap 4 is a complete rewrite of the entire project, and, as such, ships with many fundamental differences to its predecessors. Along with Bootstrap's major features, we will be discussing the most striking differences between Bootstrap 3 and Bootstrap 4 in the subsections below.

Layout

Possibly the most important and widely used feature is Bootstrap's ability to lay out and organize your page. Specifically, Bootstrap offers the following:

- Responsive containers.
- Responsive breakpoints for adjusting page layout in response to differing screen sizes.
- A 12-column grid layout for flexibly arranging various elements on your page.
- Media objects that act as building blocks and allow you to build your own structural components.
- Utility classes that allow you to manipulate elements in a responsive manner. For example, you can use the layout utility classes to hide elements, depending on screen size.

Content styling

Just like its predecessor, Bootstrap 4 overrides the default browser styles. This means that many elements, such as lists or headings, are padded and spaced differently. The majority of overridden styles only affect spacing and positioning; however, some elements may also have their border removed. The reason behind this is simple: to provide users with a clean slate upon which they can build their site. Building on this clean slate, Bootstrap 4 provides styles for almost every aspect of your webpage, such as buttons (Figure 1.2), input fields, headings, paragraphs, special inline texts such as keyboard input (Figure 1.3), figures, tables, and navigation controls. Aside from this, Bootstrap offers state styles for all input controls, for example, styles for disabled buttons or toggled buttons. Take a look at the following screenshot:

Figure 1.2: The six button styles that come with Bootstrap 4 are btn-primary, btn-secondary, btn-success, btn-danger, btn-link, btn-info, and btn-warning.

Take a look at the following screenshot:

Figure 1.3: Bootstrap's content styles. In the preceding example, we see inline styling for denoting keyboard input.

Components

Aside from layout and content styling, Bootstrap offers a large variety of reusable components that allow you to quickly construct your website's most fundamental features. Bootstrap's UI components encompass all of the fundamental building blocks that you would expect a web development toolkit to offer: modal dialogs, progress bars, navigation bars, tooltips, popovers, a carousel, alerts, drop-down menus, input groups, tabs, pagination, and components for emphasizing certain contents. Let's have a look at the following modal dialog screenshot:

Figure 1.4: Various Bootstrap 4 components in action. In the screenshot above we see a sample modal dialog, containing an info alert, some sample text, and an animated progress bar.
Mobile support

Similar to its predecessor, Bootstrap 4 allows you to create mobile-friendly websites without too much additional development work. By default, Bootstrap is designed to work across all resolutions and screen sizes, from mobile, to tablet, to desktop. In fact, Bootstrap's mobile-first design philosophy implies that its components must display and function correctly at the smallest screen size possible. The reasoning behind this is simple. Think about developing a website without consideration for small mobile screens. In this case, you are likely to pack your website full of buttons, labels, and tables. You will probably only discover any usability issues when a user attempts to visit your website using a mobile device, only to find a small webpage that is crowded with buttons and forms. At this stage, you will be required to rework the entire user interface to allow it to render on smaller screens. For precisely this reason, Bootstrap promotes a bottom-up approach, forcing developers to get the user interface to render correctly on the smallest possible screen size, before expanding upwards.

Utility classes

Aside from ready-to-go components, Bootstrap offers a large selection of utility classes that encapsulate the most commonly needed style rules, for example, rules for aligning text, hiding an element, or providing contextual colors for warning text.

Cross-browser compatibility

Bootstrap 4 supports the vast majority of modern browsers, including Chrome, Firefox, Opera, Safari, Internet Explorer (version 9 and onwards; Internet Explorer 8 and below are not supported), and Microsoft Edge.

Sass instead of Less

Both Less and Sass (Syntactically Awesome Stylesheets) are CSS extension languages. That is, they are languages that extend the CSS vocabulary with the objective of making the development of many, large, and complex style sheets easier. Although Less and Sass are fundamentally different languages, the general manner in which they extend CSS is the same—both rely on a preprocessor. As you produce your build, the preprocessor is run, parsing the Less/Sass script and turning your Less or Sass instructions into plain CSS. Less is the official Bootstrap 3 build, while Bootstrap 4 has been developed from scratch, and is written entirely in Sass. Both Less and Sass are compiled into CSS to produce a single file, bootstrap.css. It is this CSS file that we will be primarily referencing throughout this book (with the exception of Chapter 3, Building the Layout). Consequently, you will not be required to know Sass in order to follow this book. However, we do recommend that you take a 20-minute introductory course on Sass if you are completely new to the language. Rest assured, if you already know CSS, you will not need more time than this. The language's syntax is very close to normal CSS, and its elementary concepts are similar to those contained within any other programming language.

From pixel to root em

Unlike its predecessor, Bootstrap 4 no longer uses pixel (px) as its unit of typographic measurement. Instead, it primarily uses root em (rem). The reasoning behind choosing rem is based on a well-known problem with px; that is, websites using px may render incorrectly, or not as originally intended, as users change the size of the browser's base font. Using a unit of measurement that is relative to the page's root element helps address this problem, as the root element will be scaled relative to the browser's base font.
In turn, a page will be scaled relative to this root element.

Typographic units of measurement

Simply put, typographic units of measurement determine the size of your font and elements. The most commonly used units of measurement are px and em. The former is an abbreviation for pixel, and uses a reference pixel to determine a font's exact size. This means that, for displays of 96 dots per inch (dpi), 1 px will equal an actual pixel on the screen. For higher-resolution displays, the reference pixel will result in the px being scaled to match the display's resolution. For example, specifying a font size of 100 px will mean that the font is exactly 100 pixels in size (on a display with 96 dpi), irrespective of any other element on the page. Em is a unit of measurement that is relative to the parent of the element to which it is applied. So, for example, if we were to have two nested div elements, the outer element with a font size of 100 px and the inner element with a font size of 2 em, then the inner element's font size would translate to 200 px (as in this case 1 em = 100 px). The problem with using a unit of measurement that is relative to parent elements is that it increases your code's complexity, as the nesting of elements makes size calculations more difficult. The recently introduced rem measurement aims to address both em's and px's shortcomings by combining their two strengths—instead of being relative to a parent element, rem is relative to the page's root element.

No more support for Internet Explorer 8

As was already implicit in the feature summary above, the latest version of Bootstrap no longer supports Internet Explorer 8. The decision to only support newer versions of Internet Explorer was a reasonable one, as not even Microsoft itself provides technical support and updates for Internet Explorer 8 anymore (as of January 2016). Furthermore, Internet Explorer 8 does not support rem, meaning that Bootstrap 4 would have been required to provide a workaround. This in turn would most likely have implied a large amount of additional development work, with the potential for inconsistencies. Lastly, responsive website development for Internet Explorer 8 is difficult, as the browser does not support CSS media queries. Given these three factors, dropping support for this version of Internet Explorer was the most sensible path of action.

A new grid tier

Bootstrap's grid system consists of a series of CSS classes and media queries that help you lay out your page. Specifically, the grid system helps alleviate the pain points associated with horizontal and vertical positioning of a page's contents and the structure of the page across multiple displays. With Bootstrap 4, the grid system has been completely overhauled, and a new grid tier has been added with a breakpoint of 480 px and below. We will be talking about tiers, breakpoints, and Bootstrap's grid system extensively in this book.

Bye-bye GLYPHICONS

Bootstrap 3 shipped with a nice collection of over 250 font icons, free to use. In an effort to make the framework more lightweight (and because font icons are considered bad practice), the GLYPHICON set is no longer available in Bootstrap 4.

Bigger text – No more panels, wells, and thumbnails

The default font size in Bootstrap 4 is 2 px bigger than in its predecessor, increasing from 14 px to 16 px. Furthermore, Bootstrap 4 replaced panels, wells, and thumbnails with a new concept—cards.
To readers unfamiliar with the concept of wells, a well is a UI component that allows developers to highlight text content by applying an inset shadow effect to the element to which it is applied. A panel also serves to highlight information, but by applying padding and rounded borders. Cards serve the same purpose as their predecessors, but are less restrictive, as they are flexible enough to support different types of content, such as images, lists, or text. They can also be customized to use footers and headers. Take a look at the following screenshot:

Figure 1.5: The Bootstrap 4 card component replaces existing wells, thumbnails, and panels.

New and improved form input controls

Bootstrap 4 introduces new form input controls—a color chooser, a date picker, and a time picker. In addition, new classes have been introduced, improving the existing form input controls. For example, Bootstrap 4 now allows for input control sizing, as well as classes for denoting block and inline level input controls. However, one of the most anticipated new additions is Bootstrap's input validation styles, which used to require third-party libraries or a manual implementation, but are now shipped with Bootstrap 4 (see Figure 1.6 below). Take a look at the following screenshot:

Figure 1.6: The new Bootstrap 4 input validation styles, indicating the successful processing of input.

Last but not least, Bootstrap 4 also offers custom forms in order to provide even more cross-browser UI consistency across input elements (Figure 1.7). As noted in the Bootstrap 4 Alpha 4 documentation, the input controls are:

"built on top of semantic and accessible markup, so they're solid replacements for any default form control" – Source: http://v4-alpha.getbootstrap.com/components/forms/

Take a look at the following screenshot:

Figure 1.7: Custom Bootstrap input controls that replace the browser defaults in order to ensure cross-browser UI consistency.

Customization

The developers behind Bootstrap 4 have put specific emphasis on customization throughout the development of Bootstrap 4. As such, many new variables have been introduced that allow for the easy customization of Bootstrap. Using the $enabled-* Sass variables, one can now enable or disable specific global CSS preferences.

Setting up our project

Now that we know what Bootstrap has to offer, let us set up our project:

1. Create a new project directory named MyPhoto. This will become our project root directory.
2. Create a blank index.html file and insert the following HTML code:

<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="utf-8">
        <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
        <meta http-equiv="x-ua-compatible" content="ie=edge">
        <title>MyPhoto</title>
    </head>
    <body>
        <div class="alert alert-success">
            Hello World!
        </div>
    </body>
</html>

Note the three meta tags:

- The first tag tells the browser that the document in question is utf-8 encoded.
- Since Bootstrap optimizes its content for mobile devices, the subsequent meta tag is required to help with viewport scaling.
- The last meta tag forces the document to be rendered using the latest document rendering mode available if viewed in Internet Explorer.

3. Open the index.html in your browser. You should see just a blank page with the words Hello World.

Now it is time to include Bootstrap. At its core, Bootstrap is a glorified CSS style sheet. Within that style sheet, Bootstrap exposes very powerful features of CSS with an easy-to-use syntax.
It being a style sheet, you include it in your project as you would with any other style sheet that you might develop yourself. That is, open the index.html and directly link to the style sheet.

Viewport scaling

The term viewport refers to the available display size to render the contents of a page. The viewport meta tag allows you to define this available size. Viewport scaling using meta tags was first introduced by Apple and, at the time of writing, is supported by all major browsers. Using the width parameter, we can define the exact width of the user's viewport. For example, <meta name="viewport" content="width=320px"> will instruct the browser to set the viewport's width to 320 px. The ability to control the viewport's width is useful when developing mobile-friendly websites; by default, mobile browsers will attempt to fit the entire page onto their viewports by zooming out as far as possible. This allows users to view and interact with websites that have not been designed to be viewed on mobile devices. However, as Bootstrap embraces a mobile-first design philosophy, a zoom out will, in fact, result in undesired side effects. For example, breakpoints will no longer work as intended, as they now deal with the zoomed-out equivalent of the page in question. This is why explicitly setting the viewport width is so important. By writing content="width=device-width, initial-scale=1, shrink-to-fit=no", we are telling the browser the following:

- To set the viewport's width equal to whatever the actual device's screen width is.
- We do not want any zoom, initially.
- We do not wish to shrink the content to fit the viewport.

For now, we will use the Bootstrap builds hosted on Bootstrap's official Content Delivery Network (CDN). This is done by including the following HTML tag into the head of your HTML document (the head of your HTML document refers to the contents between the <head> opening tag and the </head> closing tag):

<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.4/css/bootstrap.min.css">

Bootstrap relies on jQuery, a JavaScript framework that provides a layer of abstraction in an effort to simplify the most common JavaScript operations (such as element selection and event handling). Therefore, before we include the Bootstrap JavaScript file, we must first include jQuery. Both inclusions should occur just before the </body> closing tag:

<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.4/js/bootstrap.min.js"></script>

Note that, while these scripts could, of course, be loaded at the top of the page, loading scripts at the end of the document is considered best practice to speed up page loading times and to avoid JavaScript issues preventing the page from being rendered. The reason behind this is that browsers do not download all dependencies in parallel (although a certain number of requests are made asynchronously, depending on the browser and the domain). Consequently, forcing the browser to download dependencies early on will block page rendering until these assets have been downloaded. Furthermore, ensuring that your scripts are loaded last will ensure that once you invoke Document Object Model (DOM) operations in your scripts, you can be sure that your page's elements have already been rendered. As a result, you can avoid checks that ensure the existence of given elements.

What is a Content Delivery Network?
The objective behind any Content Delivery Network (CDN) is to provide users with content that is highly available. This means that a CDN aims to provide you with content without this content ever (or rarely) becoming unavailable. To this end, the content is often hosted using a large, distributed set of servers. The BootstrapCDN basically allows you to link to the Bootstrap style sheet so that you do not have to host it yourself.

Save your changes and reload the index.html in your browser. The Hello World string should now contain a green background:

Figure 1.5: Our "Hello World" styled using Bootstrap 4.

Now that the Bootstrap framework has been included in our project, open your browser's developer console (if using Chrome on Microsoft Windows, press Ctrl + Shift + I. On Mac OS X you can press cmd + alt + I). As Bootstrap requires another third-party library, Tether, for displaying popovers and tooltips, the developer console will display an error (Figure 1.6). Take a look at the following screenshot:

Figure 1.6: Chrome's Developer Tools can be opened by going to View, selecting Developer and then clicking on Developer Tools. At the bottom of the page, a new view will appear. Under the Console tab, an error will indicate an unmet dependency.

Tether is available via the Cloudflare CDN, and consists of both a CSS file and a JavaScript file. As before, we should include the JavaScript file at the bottom of our document while we reference Tether's style sheet from inside our document head:

<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="utf-8">
        <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
        <meta http-equiv="x-ua-compatible" content="ie=edge">
        <title>MyPhoto</title>
        <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.4/css/bootstrap.min.css">
        <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/tether/1.3.1/css/tether.min.css">
    </head>
    <body>
        <div class="alert alert-success">
            Hello World!
        </div>
        <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script>
        <script src="https://cdnjs.cloudflare.com/ajax/libs/tether/1.3.1/js/tether.min.js"></script>
        <script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.4/js/bootstrap.min.js"></script>
    </body>
</html>

While CDNs are an important resource, there are several reasons why, at times, using a third-party CDN may not be desirable:

- CDNs introduce an additional point of failure, as you now rely on third-party servers.
- The privacy and security of users may be compromised, as there is no guarantee that the CDN provider does not inject malicious code into the libraries that are being hosted. Nor can one be certain that the CDN does not attempt to track its users.
- Certain CDNs may be blocked by the Internet Service Providers of users in different geographical locations.
- Offline development will not be possible when relying on a remote CDN.
- You will not be able to optimize the files hosted by your CDN. This loss of control may affect your website's performance (although typically you are more often than not offered an optimized version of the library through the CDN).

Instead of relying on a CDN, we could manually download the jQuery, Tether, and Bootstrap project files. We could then copy these builds into our project root and link to the distribution files.
The disadvantage of this approach is the fact that maintaining a manual collection of dependencies can quickly become very cumbersome, and next to impossible as your website grows in size and complexity. As such, we will not manually download the Bootstrap build. Instead, we will let Bower do it for us. Bower is a package management system, that is, a tool that you can use to manage your website's dependencies. It automatically downloads, organizes, and (upon command) updates your website's dependencies. To install Bower, head over to http://bower.io/.

How do I install Bower?

Before you can install Bower, you will need two other tools: Node.js and Git. The latter is a version control tool—in essence, it allows you to manage different versions of your software. To install Git, head over to http://git-scm.com/ and select the installer appropriate for your operating system. NodeJS is a JavaScript runtime environment needed for Bower to run. To install it, simply download the installer from the official NodeJS website: https://nodejs.org/

Once you have successfully installed Git and NodeJS, you are ready to install Bower. Simply type the following command into your terminal:

npm install -g bower

This will install Bower for you, using the JavaScript package manager npm, which happens to be used by, and is installed with, NodeJS. Once Bower has been installed, open up your terminal, navigate to the project root folder you created earlier, and fetch the Bootstrap build:

bower install bootstrap#v4.0.0-alpha.4

This will create a new folder structure in our project root:

bower_components
    bootstrap
        Gruntfile.js
        LICENSE
        README.md
        bower.json
        dist
        fonts
        grunt
        js
        less
        package.js
        package.json

We will explain all of these various files and directories later on in this book. For now, you can safely ignore everything except for the dist directory inside bower_components/bootstrap/. Go ahead and open the dist directory. You should see three subdirectories:

css
fonts
js

The name dist stands for distribution. Typically, the distribution directory contains the production-ready code that users can deploy. As its name implies, the css directory inside dist includes the ready-for-use style sheets. Likewise, the js directory contains the JavaScript files that compose Bootstrap. Lastly, the fonts directory holds the font assets that come with Bootstrap. To reference the local Bootstrap CSS file in our index.html, modify the href attribute of the link tag that points to the bootstrap.min.css:

<link rel="stylesheet" href="bower_components/bootstrap/dist/css/bootstrap.min.css">

Let's do the same for the Bootstrap JavaScript file:

<script src="bower_components/bootstrap/dist/js/bootstrap.min.js"></script>

Repeat this process for both jQuery and Tether. To install jQuery using Bower, use the following command:

bower install jquery

Just as before, a new directory will be created inside the bower_components directory:

bower_components
    jquery
        AUTHORS.txt
        LICENSE.txt
        bower.json
        dist
        sizzle
        src

Again, we are only interested in the contents of the dist directory, which, among other files, will contain the compressed jQuery build jquery.min.js.
Reference this file by modifying the src attribute of the script tag that currently points to Google's jquery.min.js, replacing the URL with the path to our local copy of jQuery:

<script src="bower_components/jquery/dist/jquery.min.js"></script>

Last but not least, repeat the steps already outlined above for Tether:

bower install tether

Once the installation completes, a folder structure similar to the ones for Bootstrap and jQuery will have been created. Verify the contents of bower_components/tether/dist and replace the CDN Tether references in our document with their local equivalents. The final index.html should now look as follows:

<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="utf-8">
        <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
        <meta http-equiv="x-ua-compatible" content="ie=edge">
        <title>MyPhoto</title>
        <link rel="stylesheet" href="bower_components/bootstrap/dist/css/bootstrap.min.css">
        <link rel="stylesheet" href="bower_components/tether/dist/css/tether.min.css">
    </head>
    <body>
        <div class="alert alert-success">
            Hello World!
        </div>
        <script src="bower_components/jquery/dist/jquery.min.js"></script>
        <script src="bower_components/tether/dist/js/tether.min.js"></script>
        <script src="bower_components/bootstrap/dist/js/bootstrap.min.js"></script>
    </body>
</html>

Refresh the index.html in your browser to make sure that everything works.

What IDE and browser should I be using when following the examples in this book?

While we recommend a JetBrains IDE or Sublime Text along with Google Chrome, you are free to use whatever tools and browser you like. Our taste in IDE and browser is subjective on this matter. However, keep in mind that Bootstrap 4 does not support Internet Explorer 8 or below. As such, if you do happen to use Internet Explorer 8, you should upgrade it to the latest version.

Summary

Aside from introducing you to our sample project MyPhoto, this article was concerned with outlining Bootstrap 4, highlighting its features, and discussing how this new version of Bootstrap differs from the last major release (Bootstrap 3). The article provided an overview of how Bootstrap can assist developers in the layout, structuring, and styling of pages. Furthermore, we noted how Bootstrap provides access to the most important and widely used user interface controls through the form of components that can be integrated into a page with minimal effort. By providing an outline of Bootstrap, we hope that the framework's intrinsic value in assisting in the development of modern websites has become apparent to the reader. Furthermore, during the course of the wider discussion, we highlighted and explained some important concepts in web development, such as typographic units of measurement or the definition, purpose, and justification of the use of Content Delivery Networks. Last but not least, we detailed how to include Bootstrap and its dependencies inside an HTML document.


Intro to the Swift REPL and Playgrounds

Dov Frankel
11 Sep 2016
6 min read
When Apple introduced Swift at WWDC (its annual Worldwide Developers Conference) in 2014, it had a few goals for the new language. Among them was being easy to learn, especially compared to other compiled languages. The following is quoted from Apple's The Swift Programming Language:

Swift is friendly to new programmers. It is the first industrial-quality systems programming language that is as expressive and enjoyable as a scripting language.

The REPL

Swift's playgrounds embody that friendliness by modernizing the concept of a REPL (Read-Eval-Print Loop, pronounced "repple"). Introduced by the LISP language, and now a common feature of many modern scripting languages, REPLs allow you to quickly and interactively build up your code, one line at a time. A post on the Swift Blog gives an overview of the Swift REPL's features, but this is what using it looks like (to launch it, enter swift in Terminal, if you have Xcode already installed):

Welcome to Apple Swift version 2.2 (swiftlang-703.0.18.1 clang-703.0.29). Type :help for assistance.
  1> "Hello world"
$R0: String = "Hello World"
  2> let a = 1
a: Int = 1
  3> let b = 2
b: Int = 2
  4> a + b
$R1: Int = 3
  5> func aPlusB() {
  6.     print("\(a + b)")
  7. }
  8> aPlusB()
3

If you look at what's there, each line containing input you give has a small indentation. A line that starts a new command begins with the line number, followed by > (1, 2, 3, 4, 5, and 8), and each subsequent line for a given command begins with the line number, followed by . (6 and 7). These help to keep you oriented as you enter your code one line at a time. While it's certainly possible to work on more complicated code this way, it requires the programmer to keep more state about the code in his head, and it limits the output to data types that can be represented in text only.

Playgrounds

Playgrounds take the concept of a REPL to the next level by allowing you to see all of your code in a single editable code window, and giving you richer options to visualize your data. To get started with a Swift playground, launch Xcode, and select File > New > Playground… (⌥⇧⌘N) to create a new playground. In the following, you can see a new playground with the same code entered into the previous REPL:

The results, shown in the gray area on the right-hand side of the window, update as you type, which allows for rapid iteration. You can write standalone functions, classes, or whatever level of abstraction you wish to work in for the task at hand, removing barriers that prevent the expression of your ideas, or experimentation with the language and APIs. So, what types of goals can you accomplish?

Experimentation with the language and APIs

Playgrounds are an excellent place to learn about Swift, whether you're new to the language, or new to programming. You don't need to worry about main(), app bundles, simulators, or any of the other things that go into making a full-blown app. Or, if you hear about a new framework and would like to try it out, you can import it into your playground and get your hands dirty with minimal effort. Crucially, it also blows away the typical code-build-run-quit-repeat cycle that can often take up so much development time.

Providing living documentation or code samples

Playgrounds provide a rich experience that allows users to try out concepts they're learning, whether they're reading a technical article, or learning to use a framework.
Aside from interactivity, playgrounds provide a whole other type of richness: built-in formatting via Markdown, which you can sprinkle in your playground as easily as writing comments. This allows some interesting options such as describing exercises for students to complete or providing sample code that runs without any action required of the user. Swift blog posts have included playgrounds to experiment with, as does the Swift Programming Language's A Swift Tour chapter. To author Markdown in your playground, start a comment block with /*:. Then, to see the comment rendered, click on Editor > Show Rendered Markup. There are some unique features available, such as marking page boundaries and adding fields that populate the built-in help Xcode shows. You can learn more at Apple's Markup Formatting Reference page. Designing code or UI You can also use playgrounds to interactively visualize how your code functions. There are a few ways to see what your code is doing: Individual results are shown in the gray side panel and can be Quick-Looked (including your own custom objects that implement debugQuickLookObject()). Individual or looped values can be pinned to show inline with your code. A line inside a loop will read "(10 times)," with a little circle you can toggle to pin it. For instance, you can show how a value changes with each iteration, or how a view looks: Using some custom APIs provided in the XCPlayground module, you can show live UIViews and captured values. Just import XCPlayground and set the XCPlayground.currentPage.liveView to a UIView instance, or call XCPlayground.currentPage.captureValue(someValue, withIdentifier: "My label") to fill the Assistant view. You also still have Xcode's console available to you for when debugging is best served by printing values and keeping scrolled to the bottom. As with any Swift code, you can write to the console with NSLog and print. Working with resources Playgrounds can also include resources for your code to use, such as images: A .playground file is an OS X package (a folder presented to the user as a file), which contains two subfolders: Sources and Resources. To view these folders in Xcode, show the Project Navigator in Xcode's left sidebar, the same as for a regular project. You can then drag in any resources to the Resources folder, and they'll be exposed to your playground code the same as resources are in an app bundle. You can refer to them like so: let spriteImage = UIImage(named:"sprite.png") Xcode versions starting with 7.1 even support image literals, meaning you can drag a Resources image into your source code and treat it as a UIImage instance. It's a neat idea, but makes for some strange-looking code. It's more useful for UIColors, which allow you to use a color-picker. The Swift blog post goes into more detail on how image, color, and file literals work in playgrounds: Wrap-up Hopefully this has opened your eyes to the opportunities afforded by playgrounds. They can be useful in different ways to developers of various skill and experience levels (in Swift, or in general) when learning new things or working on solving a given problem. They allow you to iterate more quickly, and visualize how your code operates more easily than other debugging methods. About the author Dov Frankel (pronounced as in "he dove swiftly into the shallow pool") works on Windows in-house software at a Connecticut hedge fund by day, and independent Mac and iOS apps by night. 
Find his digital comic reader on the Mac App Store, and his iPhone app Afterglo in the iOS App Store. He blogs when the mood strikes him at dovfrankel.com; he's @DovFrankel on Twitter, and @abbeycode on GitHub.


Simple Slack Websocket Integrations in <10 lines of Python

Bradley Cicenas
09 Sep 2016
3 min read
If you use Slack, you've probably added a handful of integrations for your team from the ever-growing App Directory, and maybe even had an idea for your own Slack app. While the Slack API is featureful and robust, writing your own integration can be exceptionally easy. Through the Slack RTM (Real Time Messaging) API, you can write your own basic integrations in just a few lines of Python using the SlackSocket library. Want an accessible introduction to Python that's comprehensive enough to give you the confidence you need to dive deeper? This week, follow our Python Fundamentals course inside Mapt. It's completely free - so what have you got to lose?

Structure

Our integration will be structured with the following basic components:

Listener
Integration/bot logic
Response

The listener watches for one or more pre-defined "trigger" words, while the response posts the result of our intended task.

Basic Integration

We'll start by setting up SlackSocket with our API token:

from slacksocket import SlackSocket

slack = SlackSocket('<slack-token>', event_filter=['message'])

By default, SlackSocket will listen for all Slack events. There are a lot of different events sent via RTM, but we're only concerned with 'message' events for our integration, so we've set an event_filter for only this type. Using the SlackSocket events() generator, we'll read each 'message' event that comes in and can act on various conditions:

for e in slack.events():
    if e.event['text'] == '!hello':
        slack.send_msg('it works!', channel_name=e.event['channel'])

If our message text matches the string '!hello', we'll respond to the source channel of the event with a given message ('it works!'). At this point, we've created a complete integration that can connect to Slack as a bot user (or regular user), follow messages, and respond accordingly. Let's build something a bit more useful, like a password generator for throwaway accounts.

Expanding Functionality

For this integration command, we'll write a simple function to generate a random alphanumeric string 15 characters long:

import random
import string

def randomstr():
    chars = string.ascii_letters + string.digits
    return ''.join(random.choice(chars) for _ in range(15))

Now we're ready to provide our random string generator to the rest of the team using the same chat logic as before, responding to the source channel with our generated password:

for e in slack.events():
    if e.event['text'].startswith('!random'):
        slack.send_msg(randomstr(), channel_name=e.event['channel'])

Altogether:

import random
import string

from slacksocket import SlackSocket

slack = SlackSocket('<slack-token>', event_filter=['message'])

def randomstr():
    chars = string.ascii_letters + string.digits
    return ''.join(random.choice(chars) for _ in range(15))

for e in slack.events():
    if e.event['text'].startswith('!random'):
        slack.send_msg(randomstr(), channel_name=e.event['channel'])

And the results: a complete integration in 10 lines of Python. Not bad! Beyond simplicity, SlackSocket provides a great deal of flexibility for writing apps, bots, or integrations. In the case of massive Slack groups with several thousand users, messages are buffered locally to ensure that none are missed. Dropped websocket connections are automatically re-connected as well, making it an ideal base for a chat client. The code for SlackSocket is available on GitHub, and as always, we welcome any contributions or feature requests!
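If you want the bot to handle more than one trigger word without the event loop turning into a chain of conditionals, one natural extension is a small dispatch table. This is only a sketch, not part of the original example; the handler names are purely illustrative, and it reuses only the SlackSocket calls shown above:

import random
import string

from slacksocket import SlackSocket

slack = SlackSocket('<slack-token>', event_filter=['message'])

def randomstr():
    chars = string.ascii_letters + string.digits
    return ''.join(random.choice(chars) for _ in range(15))

def hello(text):
    return 'it works!'

def random_password(text):
    return randomstr()

# Map each trigger word to the function that builds its reply.
commands = {'!hello': hello, '!random': random_password}

for e in slack.events():
    text = e.event.get('text', '')
    parts = text.split()
    if not parts:
        continue
    handler = commands.get(parts[0])
    if handler:
        slack.send_msg(handler(text), channel_name=e.event['channel'])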
About the author Bradley Cicenas is a New York City-based infrastructure engineer with an affinity for microservices, systems design, data science, and stoops.


Using Web API to Extend Your Application

Packt
08 Sep 2016
14 min read
In this article by Shahed Chowdhuri author of book ASP.Net Core Essentials, we will work through a working sample of a web API project. During this lesson, we will cover the following: Web API Web API configuration Web API routes Consuming Web API applications (For more resources related to this topic, see here.) Understanding a web API Building web applications can be a rewarding experience. The satisfaction of reaching a broad set of potential users can trump the frustrating nights spent fine-tuning an application and fixing bugs. But some mobile users demand a more streamlined experience that only a native mobile app can provide. Mobile browsers may experience performance issues in low-bandwidth situations, where HTML5 applications can only go so far with a heavy server-side back-end. Enter web API, with its RESTful endpoints, built with mobile-friendly server-side code. The case for web APIs In order to create a piece of software, years of wisdom tell us that we should build software with users in mind. Without use cases, its features are literally useless. By designing features around user stories, it makes sense to reveal public endpoints that relate directly to user actions. As a result, you will end up with a leaner web application that works for more users. If you need more convincing, here's a recap of features and benefits: It lets you build modern lightweight web services, which are a great choice for your application, as long as you don't need SOAP It's easier to work with than any past work you may have done with ASP.NET Windows Communication Foundation (WCF) services It supports RESTful endpoints It's great for a variety of clients, both mobile and web It's unified with ASP.NET MVC and can be included with/without your web application Creating a new web API project from scratch Let's build a sample web application named Patient Records. In this application, we will create a web API from scratch to allow the following tasks: Add a new patient Edit an existing patient Delete an existing patient View a specific patient or a list of patients These four actions make up the so-called CRUD operations of our system: to Create, Read, Update or Delete patient records. Following the steps below, we will create a new project in Visual Studio 2015: Create a new web API project. Add an API controller. Add methods for CRUD operations. The preceding steps have been expanded into detailed instructions with the following screenshots: In Visual Studio 2015, click File | New | Project. You can also press Ctrl+Shift+N on your keyboard. On the left panel, locate the Web node below Visual C#, then select ASP.NET Core Web Application (.NET Core), as shown in the following screenshot: With this project template selected, type in a name for your project, for examplePatientRecordsApi, and choose a location on your computer, as shown in the following screenshot: Optionally, you may select the checkboxes on the lower right to create a directory for your solution file and/or add your new project to source control. Click OK to proceed. In the dialog that follows, select Empty from the list of the ASP.NET Core Templates, then click OK, as shown in the following screenshot: Optionally, you can check the checkbox for Microsoft Azure to host your project in the cloud. Click OK to proceed. Building your web API project In the Solution Explorer, you may observe that your References are being restored. 
This occurs every time you create a new project or add new references to your project that have to be restored through NuGet,as shown in the following screenshot: Follow these steps, to fix your references, and build your Web API project: Rightclickon your project, and click Add | New Folder to add a new folder, as shown in the following screenshot: Perform the preceding step three times to create new folders for your Controllers, Models, and Views,as shown in the following screenshot: Rightclick on your Controllers folder, then click Add | New Item to create a new API controller for patient records on your system, as shown in the following screenshot: In the dialog box that appears, choose Web API Controller Class from the list of options under .NET Core, as shown in the following screenshot: Name your new API controller, for examplePatientController.cs, then click Add to proceed. In your new PatientController, you will most likely have several areas highlighted with red squiggly lines due to a lack of necessary dependencies, as shown in the following screenshot. As a result, you won't be able to build your project/solution at this time: In the next section, we will learn about how to configure your web API so that it has the proper references and dependencies in its configuration files. Configuring the web API in your web application How does the web server know what to send to the browser when a specific URL is requested? The answer lies in the configuration of your web API project. Setting up dependencies In this section, we will learn how to set up your dependencies automatically using the IDE, or manually by editing your project's configuration file. To pull in the necessary dependencies, you may right-click on the using statement for Microsoft.AspNet.Mvc and select Quick Actions and Refactorings…. This can also be triggered by pressing Ctrl +. (period) on your keyboard or simply by hovering over the underlined term, as shown in the following screenshot: Visual Studio should offer you several possible options, fromwhich you can select the one that adds the package Microsoft.AspNetCore.Mvc.Corefor the namespace Microsoft.AspNetCore.Mvc. For the Controller class, add a reference for the Microsoft.AspNetCore.Mvc.ViewFeaturespackage, as shown in the following screenshot: Fig12: Adding the Microsoft.AspNetCore.Mvc.Core 1.0.0 package If you select the latest version that's available, this should update your references and remove the red squiggly lines, as shown in the following screenshot: Fig13:Updating your references and removing the red squiggly lines The precedingstep should automatically update your project.json file with the correct dependencies for theMicrosoft.AspNetCore.Mvc.Core, and Microsoft.AspNetCore.Mvc.ViewFeatures, as shown in the following screenshot: The "frameworks" section of theproject.json file identifies the type and version of the .NET Framework that your web app is using, for examplenetcoreapp1.0 for the 1.0 version of .NET Core. You will see something similar in your project, as shown in the following screenshot: Click the Build Solution button from the top menu/toolbar. Depending on how you have your shortcuts set up, you may press Ctrl+Shift+B or press F6 on your keyboard to build the solution. 
You should now be able to build your project/solution without errors, as shown in the following screenshot: Before running the web API project, open the Startup.cs class file, and replace the app.Run() statement/block (along with its contents) with a call to app.UseMvc()in the Configure() method. To add the Mvc to the project, add a call to the services.AddMvcCore() in the ConfigureServices() method. To allow this code to compile, add a reference to Microsoft.AspNetCore.Mvc. Parts of a web API project Let's take a closer look at the PatientController class. The auto-generated class has the following methods: public IEnumerable<string> Get() public string Get(int id) public void Post([FromBody]string value) public void Put(int id, [FromBody]string value) public void Delete(int id) The Get() method simply returns a JSON object as an enumerable string of values, while the Get(int id) method is an overridden variant that gets a particular value for a specified ID. The Post() and Put() methods can be used for creating and updating entities. Note that the Put() method takes in an ID value as the first parameter so that it knows which entity to update. Finally, we have the Delete() method, which can be used to delete an entity using the specified ID. Running the web API project You may run the web API project in a web browser that can display JSON data. If you use Google Chrome, I would suggest using the JSONView Extension (or other similar extension) to properly display JSON data. The aforementioned extension is also available on GitHub at the following URL: https://github.com/gildas-lormeau/JSONView-for-Chrome If you use Microsoft Edge, you can view the raw JSON data directly in the browser.Once your browser is ready, you can select your browser of choice from the top toolbar of Visual Studio. Click on the tiny triangle icon next to the Debug button, then select a browser, as shown in the following screenshot: In the preceding screenshot, you can see that multiple installed browsers are available, including Firefox, Google Chrome, Internet Explorer,and Edge. To choose a different browser, simply click on Browse With…, in the menu to select a different one. Now, click the Debug button (that isthe green play button) to see the web API project in action in your web browser, as shown in the following screenshot. If you don't have a web application set up, you won't be able to browse the site from the root URL: Don’t worry if you see this error, you can update the URL to include a path to your API controller, for an example seehttp://localhost:12345/api/Patient. Note that your port number may vary. Now, you should be able to see a list of views that are being spat out by your API controller, as shown in the following screenshot: Adding routes to handle anticipated URL paths Back in the days of classic ASP, application URL paths typically reflected physical file paths. This continued with ASP.NET web forms, even though the concept of custom URL routing was introduced. With ASP.NET MVC, routes were designed to cater to functionality rather than physical paths. ASP.NET web API continues this newer tradition, with the ability to set up custom routes from within your code. You can create routes for your application using fluent configuration in your startup code or with declarative attributes surrounded by square brackets. Understanding routes To understand the purpose of having routes, let's focus on the features and benefits of routes in your application. 
This applies to both ASP.NET MVC and ASP.NET web API: By defining routes, you can introduce predictable patterns for URL access This gives you more control over how URLs are mapped to your controllers Human-readable route paths are also SEO-friendly, which is great for Search Engine Optimization It provides some level of obscurity when it comes to revealing the underlying web technology and physical file names in your system Setting up routes Let's start with this simple class-level attribute that specifies a route for your API controller, as follows: [Route("api/[controller]")] public class PatientController : Controller { // ... } Here, we can dissect the attribute (seen in square brackets, used to affect the class below it) and its parameter to understand what's going on: The Route attribute indicates that we are going to define a route for this controller. Within the parentheses that follow, the route path is defined in double quotes. The first part of this path is thestring literal api/, which declares that the path to an API method call will begin with the term api followed by a forward slash. The rest of the path is the word controller in square brackets, which refers to the controller name. By convention, the controller's name is part of the controller's class name that precedes the term Controller. For a class PatientController, the controller name is just the word Patient. This means that all API methods for this controller can be accessed using the following syntax, where MyApplicationServer should be replaced with your own server or domain name:http://MyApplicationServer/api/Patient For method calls, you can define a route with or without parameters. The following two examples illustrate both types of route definitions: [HttpGet] public IEnumerable<string> Get() {     return new string[] { "value1", "value2" }; } In this example, the Get() method performs an action related to the HTTP verb HttpGet, which is declared in the attribute directly above the method. This identifies the default method for accessing the controller through a browser without any parameters, which means that this API method can be accessed using the following syntax: http://MyApplicationServer/api/Patient To include parameters, we can use the following syntax: [HttpGet("{id}")] public string Get(int id) {     return "value"; } Here, the HttpGet attribute is coupled with an "{id}" parameter, enclosed in curly braces within double quotes. The overridden version of the Get() method also includes an integer value named id to correspond with the expected parameter. If no parameter is specified, the value of id is equal to default(int) which is zero. This can be called without any parameters with the following syntax: http://MyApplicationServer/api/Patient/Get In order to pass parameters, you can add any integer value right after the controller name, with the following syntax: http://MyApplicationServer/api/Patient/1 This will assign the number 1 to the integer variable id. Testing routes To test the aforementioned routes, simply run the application from Visual Studio and access the specified URLs without parameters. The preceding screenshot show the results of accessing the following path: http://MyApplicationServer/api/Patient/1 Consuming a web API from a client application If a web API exposes public endpoints, but there is no client application there to consume it, does it really exist? Without getting too philosophical, let's go over the possible ways you can consume a client application. 
You can do any of the following: Consume the Web API using external tools Consume the Web API with a mobile app Consume the Web API with a web client Testing with external tools If you don't have a client application set up, you can use an external tool such as Fiddler. Fiddler is a free tool that is now available from Telerik, available at http://www.telerik.com/download/fiddler, as shown in the following screenshot: You can use Fiddler to inspect URLs that are being retrieved and submitted on your machine. You can also use it to trigger any URL, and change the request type (Get, Post, and others). Consuming a web API from a mobile app Since this article is primarily about the ASP.NET core web API, we won't go into detail about mobile application development. However, it's important to note that a web API can provide a backend for your mobile app projects. Mobile apps may include Windows Mobile apps, iOS apps, Android apps, and any modern app that you can build for today's smartphones and tablets. You may consult the documentation for your particular platform of choice, to determine what is needed to call a RESTful API. Consuming a web API from a web client A web client, in this case, refers to any HTML/JavaScript application that has the ability to call a RESTful API. At the least, you can build a complete client-side solution with straight JavaScript to perform the necessary actions. For a better experience, you may use jQuery and also one of many popular JavaScript frameworks. A web client can also be a part of a larger ASP.NET MVC application or a Single-Page Application (SPA). As long as your application is spitting out JavaScript that is contained in HTML pages, you can build a frontend that works with your backend web API. Summary In this article, we've taken a look at the basic structure of an ASP.NET web API project, and observed the unification of web API with MVC in an ASP.NET core. We also learned how to use a web API as our backend to provide support for various frontend applications. Resources for Article:   Further resources on this subject: Introducing IoT with Particle's Photon and Electron [article] Schema Validation with Oracle JDeveloper - XDK 11g [article] Getting Started with Spring Security [article]
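As a closing addendum to the Patient Records example above: the generated controller in this article still returns placeholder strings. The following is a minimal, hedged sketch of how the CRUD actions and the attribute routes described earlier could fit together. The Patient type and the in-memory list are illustrative assumptions, not code from the article; a real application would use a repository or database instead.

using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Mvc;

namespace PatientRecordsApi.Controllers
{
    // Illustrative model type; the article does not define one.
    public class Patient
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    [Route("api/[controller]")]
    public class PatientController : Controller
    {
        // In-memory store, just for this sketch.
        private static readonly List<Patient> Patients = new List<Patient>();

        [HttpGet]
        public IEnumerable<Patient> Get() => Patients;

        [HttpGet("{id}")]
        public Patient Get(int id) => Patients.FirstOrDefault(p => p.Id == id);

        [HttpPost]
        public void Post([FromBody] Patient value) => Patients.Add(value);

        [HttpPut("{id}")]
        public void Put(int id, [FromBody] Patient value)
        {
            var existing = Patients.FirstOrDefault(p => p.Id == id);
            if (existing != null) existing.Name = value.Name;
        }

        [HttpDelete("{id}")]
        public void Delete(int id) => Patients.RemoveAll(p => p.Id == id);
    }
}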

Getting Ready to Fight

Packt
08 Sep 2016
15 min read
In this article by Ashley Godbold author of book Mastering Unity 2D Game Development, Second Edition, we will start out by laying the main foundation for the battle system of our game. We will create the Heads Up Display (HUD) as well as design the overall logic of the battle system. The following topics will be covered in this article: Creating a state manager to handle the logic behind a turn-based battle system Working with Mecanim in the code Exploring RPG UI Creating the game's HUD (For more resources related to this topic, see here.) Setting up our battle statemanager The most unique and important part of a turn-based battle system is the turns. Controlling the turns is incredibly important, and we will need something to handle the logic behind the actual turns for us. We'll accomplish this by creating a battle state machine. The battle state manager Starting back in our BattleScene, we need to create a state machine using all of Mecanim's handy features. Although we will still only be using a fraction of the functionality with the RPG sample, I advise you to investigate and read more about its capabilities. Navigate to AssetsAnimationControllers and create a new Animator Controller called BattleStateMachine, and then we can begin putting together the battle state machine. The following screenshot shows you the states, transitions, and properties that we will need: As shown in the preceding screenshot, we have created eight states to control the flow of a battle with two Boolean parameters to control its transition. The transitions are defined as follows: From Begin_Battle to Intro BattleReadyset to true Has Exit Timeset to false (deselected) Transition Durationset to 0 From Intro to Player_Move Has Exit Timeset totrue Exit Timeset to0.9 Transition Durationset to2 From Player_Move to Player_Attack PlayerReadyset totrue Has Exit Timeset tofalse Transition Durationset to0 From Player_Attack to Change_Control PlayerReadyset tofalse Has Exit Timeset tofalse Transition Durationset to2 From Change_Control to Enemy_Attack Has Exit Timeset totrue Exit Timeset to0.9 Transition Durationset to2 From Enemy_Attack to Player_Move BattleReadyset totrue Has Exit Timeset tofalse Transition Durationset to2 From Enemy_Attack to Battle_Result BattleReadyset tofalse Has Exit Timeset tofalse Transition Timeset to2 From Battle_Result to Battle_End Has Exit Timeset totrue Exit Timeset to0.9 Transition Timeset to5 Summing up, what we have built is a steady flow of battle, which can be summarized as follows: The battle begins and we show a little introductory clip to tell the player about the battle. Once the player has control, we wait for them to finish their move. We then perform the player's attack and switch the control over to the enemy AI. If there are any enemies left, they get to attack the player (if they are not too scared and have not run away). If the battle continues, we switch back to the player, otherwise we show the battle result. We show the result for five seconds (or until the player hits a key), and then finish the battle and return the player to the world together with whatever loot and experience gained. This is just a simple flow, which can be extended as much as you want, and as we continue, you will see all the points where you could expand it. With our animator state machine created, we now just need to attach it to our battle manager so that it will be available when the battle runs; the following are the ensuing steps to do this: Open up BattleScene. 
Select the BattleManager game object in the project Hierarchy and add an Animator component to it. Now drag the BattleStateMachine animator controller we just created into the Controller property of the Animator component. The preceding steps attached our new battle state machine to our battle engine. Now, we just need to be able to reference the BattleStateMachine Mecanim state machine from the BattleManager script. To do so, open up the BattleManager script in Assets/Scripts and add the following variable to the top of the class:

private Animator battleStateManager;

Then, to capture the configured Animator in our BattleManager script, we add the following to an Awake function placed before the Start function:

void Awake() {
    battleStateManager = GetComponent<Animator>();
    if (battleStateManager == null) {
        Debug.LogError("No battleStateMachine Animator found.");
    }
}

We have to assign it this way because all the functionality to integrate the Animator Controller is built into the Animator component. We cannot simply attach the controller directly to the BattleManager script and use it. Now that it's all wired up, let's start using it.

Getting to the state manager in the code

Now that we have our state manager running in Mecanim, we just need to be able to access it from the code. However, at first glance, there is a barrier to achieving this. The reason being that the Mecanim system uses hashes (integer ID keys for objects) not strings to identify states within its engine (still not clear why, but for performance reasons probably). To access the states in Mecanim, Unity provides a hashing algorithm to help you, which is fine for one-off checks but a bit of an overhead when you need per-frame access. You can check to see if a state's name is a specific string using the following:

GetCurrentAnimatorStateInfo(0).IsName("Thing you're checking")

But there is no way to store the name of the current state to a variable. A simple solution to this is to generate and cache all the state hashes when we start and then use the cache to talk to the Mecanim engine. First, let's remove the placeholder code for the old enum state machine. So, remove the following code from the top of the BattleManager script:

enum BattlePhase {
    PlayerAttack,
    EnemyAttack
}
private BattlePhase phase;

Also, remove the following line from the Start method:

phase = BattlePhase.PlayerAttack;

There is still a reference in the Update method for our buttons, but we will update that shortly; feel free to comment it out now if you wish, but don't delete it. Now, to begin working with our new state machine, we need a replica of the available states we have defined in our Mecanim state machine. For this, we just need an enumeration using the same names (you can create this either as a new C# script or simply place it in the BattleManager class) as follows:

public enum BattleState {
    Begin_Battle,
    Intro,
    Player_Move,
    Player_Attack,
    Change_Control,
    Enemy_Attack,
    Battle_Result,
    Battle_End
}

It may seem strange to have a duplicate of your states in the state machine and in the code; however, at the time of writing, it is necessary. Mecanim does not expose the names of the states outside of the engine other than through using hashes. You can either use this approach and make it dynamic, or extract the state hashes and store them in a dictionary for use. Mecanim makes the managing of state machines very simple under the hood and is extremely powerful, much better than trawling through code every time you want to update the state machine.
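Before we build the hash cache in the next step, here is a quick hedged sketch of the one-off approach mentioned above: comparing the current state against a single pre-computed hash. The class and method names are purely illustrative (in the real project this logic lives inside BattleManager), but Animator.StringToHash and shortNameHash are the same Unity APIs this article uses later, and Player_Move is one of our state names.

using UnityEngine;

// Sketch only: a one-off hash comparison against a single state.
public class StateCheckExample : MonoBehaviour {
    static readonly int playerMoveHash = Animator.StringToHash("Player_Move");
    Animator battleStateManager;

    void Awake() {
        battleStateManager = GetComponent<Animator>();
    }

    bool IsPlayerMove() {
        // True while the state machine sits in the Player_Move state.
        return battleStateManager.GetCurrentAnimatorStateInfo(0).shortNameHash == playerMoveHash;
    }
}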
Next, we need a location to cache the hashes the state machine needs and a property to keep the current state so that we don't constantly query the engine for a hash. So, add a new using statement to the beginning of the BattleManager class as follows: using System.Collections; using System.Collections.Generic; using UnityEngine; Then, add the following variables to the top of the BattleManager class: private Dictionary<int, BattleState> battleStateHash = new Dictionary<int, BattleState>(); private BattleState currentBattleState; Finally, we just need to integrate the animator state machine we have created. So, create a new GetAnimationStates method in the BattleManager class as follows: void GetAnimationStates() {   foreach (BattleState state in (BattleState[])System.Enum.     GetValues(typeof(BattleState)))   {     battleStateHash.Add(Animator.StringToHash       (state.ToString()), state);   } } This simply generates a hash for the corresponding animation state in Mecanim and stores the resultant hashes in a dictionary that we can use without having to calculate them at runtime when we need to talk to the state machine. Sadly, there is no way at runtime to gather the information from Mecanim as this information is only available in the editor. You could gather the hashes from the animator and store them in a file to avoid this, but it won't save you much. To complete this, we just need to call the new method in the Start function of the BattleManager script by adding the following: GetAnimationStates(); Now that we have our states, we can use them in our running game to control both the logic that is applied and the GUI elements that are drawn to the screen. Now add the Update function to the BattleManager class as follows: voidUpdate() {   currentBattleState = battleStateHash[battleStateManager.     GetCurrentAnimatorStateInfo(0).shortNameHash];     switch (currentBattleState)   {     case BattleState.Intro:       break;     case BattleState.Player_Move:       break;     case BattleState.Player_Attack:       break;     case BattleState.Change_Control:       break;     case BattleState.Enemy_Attack:       break;     case BattleState.Battle_Result:       break;     case BattleState.Battle_End:       break;     default:       break;   } } The preceding code gets the current state from the animator state machine once per frame and then sets up a choice (switch statement) for what can happen based on the current state. (Remember, it is the state machine that decides which state follows which in the Mecanim engine, not nasty nested if statements everywhere in code.) Now we are going to update the functionality that turns our GUI button on and off. Update the line of code in the Update method we wrote as follows: if(phase==BattlePhase.PlayerAttack){ so that it now reads: if(currentBattleState==BattleState.Player_Move){ This will make it so that the buttons are now only visible when it is time for the player to perform his/her move. With these in place, we are ready to start adding in some battle logic. Starting the battle As it stands, the state machine is waiting at the Begin_Battle state for us to kick things off. Obviously, we want to do this when we are ready and all the pieces on the board are in place. When the current Battle scene we added, starts, we load up the player and randomly spawn in a number of enemies into the fray using a co-routine function called SpawnEnemies. So, only when all the dragons are ready and waiting to be chopped down do we want to kick things off. 
To tell the state machine to start the battle, we simple add the following line just after the end of the forloop in the SpawnEnemies IEnumerator co-routine function: battleStateManager.SetBool("BattleReady", true); Now when everything is in place, the battle will finally begin. Introductory animation When the battle starts, we are going to display a little battle introductory image that states who the player is going to be fighting against. We'll have it slide into the scene and then slide out. You can do all sorts of interesting stuff with this introductory animation, like animating the individual images, but I'll leave that up to you to play with. Can't have all the fun now, can I? Start by creating a new Canvas and renaming it IntroCanvas so that we can distinguish it from the canvas that will hold our buttons. At this point, since we are adding a second canvas into the scene, we should probably rename ours to something that is easier for you to identify. It's a matter of preference, but I like to use different canvases for different UI elements. For example, one for the HUD, one for pause menus, one for animations, and so on. You can put them all on a single canvas and use Panels and CanvasGroup components to distinguish between them; it's really up to you. As a child of the new IntroCanvas, create a Panel with the properties shown in the following screenshot. Notice that the Imageoblect's Color property is set to black with the alpha set to about half: Now add as a child of the Panel two UI Images and a UI Text. Name the first image PlayerImage and set its properties as shown in the following screenshot. Be sure to set Preserve Aspect to true: Name the second image EnemyImage and set the properties as shown in the following screenshot: For the text, set the properties as shown in the following screenshot: Your Panel should now appear as mine did in the image at the beginning of this section. Now let's give this Panel its animation. With the Panel selected, select the Animation tab. Now hit the Create button. Save the animation as IntroSlideAnimation in the Assets/Animation/Clipsfolder. At the 0:00 frame, set the Panel's X position to 600, as shown in the following screenshot: Now, at the 0:45 frame, set the Panel's X position to 0. Place the playhead at the 1:20 frame and set the Panel's X position to 0, there as well, by selecting Add Key, as shown in the following screenshot: Create the last frame at 2:00 by setting the Panel's X position to -600. When the Panel slides in, it does this annoying bounce thing instead of staying put. We need to fix this by adjusting the animation curve. Select the Curves tab: When you select the Curves tab, you should see something like the following: The reason for the bounce is the wiggle that occurs between the two center keyframes. To fix this, right-click on the two center points on the curve represented by red dots and select Flat,as shown in the following screenshot: After you do so, the curve should be constant (flat) in the center, as shown in the following screenshot: The last thing we need to do to connect this to our BattleStateMananger isto adjust the properties of the Panel's Animator. With the Panel selected, select the Animator tab. You should see something like the following: Right now, the animation immediately plays when the scene is entered. However, since we want this to tie in with our BattleStateManager and only begin playing in the Intro state, we do not want this to be the default animation. 
Create an empty state within the Animator and set it as the default state. Name this state OutOfFrame. Now make a Trigger Parameter called Intro. Set the transition between the two states so that it has the following properties: The last things we want to do before we move on is make it so this animation does not loop, rename this new Animator, and place our Animator in the correct subfolder. In the project view, select IntroSlideAnimation from the Assets/Animation/Clips folder and deselect Loop Time. Rename the Panel Animator to VsAnimator and move it to the Assets/Animation/Controllersfolder. Currently, the Panel is appearing right in the middle of the screen at all times, so go ahead and set the Panel's X Position to600, to get it out of the way. Now we can access this in our BattleStateManager script. Currently, the state machine pauses at the Intro state for a few seconds; let's have our Panel animation pop in. Add the following variable declarations to our BattleStateManager script: public GameObjectintroPanel; Animator introPanelAnim; And add the following to the Awake function: introPanel Anim=introPanel.GetComponent<Animator>(); Now add the following to the case line of the Intro state in the Updatefunction: case BattleState.Intro: introPanelAnim.SetTrigger("Intro"); break; For this to work, we have to drag and drop the Panel into the Intro Panel slot in the BattleManager Inspector. As the battle is now in progress and the control is being passed to the player, we need some interaction from the user. Currently, the player can run away, but that's not at all interesting. We want our player to be able to fight! So, let's design a graphic user interface that will allow her to attack those adorable, but super mean, dragons. Summary Getting the battle right based on the style of your game is very important as it is where the player will spend the majority of their time. Keep the player engaged and try to make each battle different in some way, as receptiveness is a tricky problem to solve and you don't want to bore the player. Think about different attacks your player can perform that possibly strengthen as the player strengthens. In this article, you covered the following: Setting up the logic of our turn-based battle system Working with state machines in the code Different RPG UI overlays Setting up the HUD of our game so that our player can do more than just run away Resources for Article: Further resources on this subject: Customizing an Avatar in Flash Multiplayer Virtual Worlds [article] Looking Good – The Graphical Interface [article] The Vertex Functions [article]


Customizing Xtext Components

Packt
08 Sep 2016
30 min read
In this article written by Lorenzo Bettini, author of the book Implementing Domain Specific Languages Using Xtend and Xtext, Second Edition, the author describes the main mechanism for customizing Xtext components: Google Guice, a Dependency Injection framework. With Google Guice, we can easily and consistently inject custom implementations of specific components into Xtext. In the first section, we will briefly show some Java examples that use Google Guice. Then, we will show how Xtext uses this dependency injection framework. In particular, you will learn how to customize both the runtime and the UI aspects. This article will cover the following topics:

An introduction to the Google Guice dependency injection framework
How Xtext uses Google Guice
How to customize several aspects of an Xtext DSL

(For more resources related to this topic, see here.)

Dependency injection

The Dependency Injection pattern (see the article Fowler, 2004) allows you to inject implementation objects into a class hierarchy in a consistent way. This is useful when classes delegate specific tasks to objects referenced in fields. These fields have abstract types (that is, interfaces or abstract classes) so that the dependency on actual implementation classes is removed. In this first section, we will briefly show some Java examples that use Google Guice. Of course, all the injection principles naturally apply to Xtend as well. If you want to try the following examples yourself, you need to create a new Plug-in Project, for example, org.example.guice, and add com.google.inject and javax.inject as dependencies in the MANIFEST.MF. Let's consider a possible scenario: a Service class that abstracts from the actual implementation of a Processor class and a Logger class. The following is a possible implementation:

public class Service {
  private Logger logger;
  private Processor processor;

  public void execute(String command) {
    logger.log("executing " + command);
    processor.process(command);
    logger.log("executed " + command);
  }
}

public class Logger {
  public void log(String message) {
    System.out.println("LOG: " + message);
  }
}

public interface Processor {
  public void process(Object o);
}

public class ProcessorImpl implements Processor {
  private Logger logger;

  public void process(Object o) {
    logger.log("processing");
    System.out.println("processing " + o + "...");
  }
}

These classes correctly abstract from the implementation details, but the problem of initializing the fields correctly still persists. If we initialize the fields in the constructor, then the user still needs to hardcode the actual implementation class names. Also, note that Logger is used in two independent classes; thus, if we have a custom logger, we must make sure that all the instances use the correct one. These issues can be dealt with using dependency injection. With dependency injection, hardcoded dependencies will be removed. Moreover, we will be able to easily and consistently switch the implementation classes throughout the code. Although the same goal can be achieved manually by implementing factory method or abstract factory patterns (see the book Gamma et al, 1995), with a dependency injection framework it is easier to keep the desired consistency and the programmer needs to write less code. Xtext uses the dependency injection framework Google Guice, https://github.com/google/guice. We refer to the Google Guice documentation for all the features provided by this framework. In this section, we just briefly describe its main features.
You annotate the fields you want Guice to inject with the @Inject annotation (com.google.inject.Inject): public class Service {   @Inject private Logger logger;   @Inject private Processor processor;     public void execute(String command) { logger.log("executing " + command); processor.process(command); logger.log("executed " + command);   } }   public class ProcessorImpl implements Processor {   @Inject private Logger logger;     public void process(Object o) { logger.log("processing"); out.println("processing " + o + "...");   } } The mapping from injection requests to instances is specified in a Guice Module, a class that is derived from com.google.inject.AbstractModule. The method configure is implemented to specify the bindings using a simple and intuitive API. You only need to specify the bindings for interfaces, abstract classes, and for custom classes. This means that you do not need to specify a binding for Logger since it is a concrete class. On the contrary, you need to specify a binding for the interface Processor. The following is an example of a Guice module for our scenario: public class StandardModule extends AbstractModule {   @Override   protected void configure() {     bind(Processor.class).to(ProcessorImpl.class);   } } You create an Injector using the static method Guice.createInjector by passing a module. You then use the injector to create instances: Injector injector = Guice.createInjector(newStandardModule()); Service service = injector.getInstance(Service.class); service.execute("First command"); The initialization of injected fields will be done automatically by Google Guice. It is worth noting that the framework is also able to initialize (inject) private fields, like in our example. Instances of classes that use dependency injection must be created only through an injector. Creating instances with new will not trigger injection, thus all the fields annotated with @Inject will be null. When implementing a DSL with Xtext you will never have to create a new injector manually. In fact, Xtext generates utility classes to easily obtain an injector, for example, when testing your DSL with JUnit. We also refer to the article Köhnlein, 2012 for more details. The example shown in this section only aims at presenting the main features of Google Guice. If we need a different configuration of the bindings, all we need to do is define another module. For example, let's assume that we defined additional derived implementations for logging and processing. Here is an example where Logger and Processor are bound to custom implementations: public class CustomModule extends AbstractModule {   @Override   protected void configure() {     bind(Logger.class).to(CustomLogger.class);     bind(Processor.class).to(AdvancedProcessor.class);   } } Creating instances with an injector obtained using this module will ensure that the right classes are used consistently. For example, the CustomLogger class will be used both by Service and Processor. 
You can create instances from different injectors in the same application, for example: executeService(Guice.createInjector(newStandardModule())); executeService(Guice.createInjector(newCustomModule()));   voidexecuteService(Injector injector) {   Service service = injector.getInstance(Service.class); service.execute("First command"); service.execute("Second command"); } It is possible to request injection in many different ways, such as injection of parameters to constructors, using named instances, specification of default implementation of an interface, setter methods, and much more. In this book, we will mainly use injected fields. Injected fields are instantiated only once when the class is instantiated. Each injection will create a new instance, unless the type to inject is marked as @Singleton(com.google.inject.Singleton). The annotation @Singleton indicates that only one instance per injector will be used. We will see an example of Singleton injection. If you want to decide when you need an element to be instantiated from within method bodies, you can use a provider. Instead of injecting an instance of the wanted type C, you inject a com.google.inject.Provider<C> instance, which has a get method that produces an instance of C. For example: public class Logger {   @Inject   private Provider<Utility>utilityProvider;     public void log(String message) { out.println("LOG: " + message + " - " + utilityProvider.get().m());    } } Each time we create a new instance of Utility using the injected Provider class. Even in this case, if the type of the created instance is annotated with @Singleton, then the same instance will always be returned for the same injector. The nice thing is that to inject a custom implementation of Utility, you do not need to provide a custom Provider: you just bind the Utility class in the Guice module and everything will work as expected: public classCustomModule extends AbstractModule {   @Override   protected void configure() {     bind(Logger.class).to(CustomLogger.class);     bind(Processor.class).to(AdvancedProcessor.class);     bind(Utility.class).to(CustomUtility.class);   } }   It is crucial to keep in mind that once classes rely on injection, their instances must be created only through an injector; otherwise, all the injected elements will be null. In general, once dependency injection is used in a framework, all classes of the framework must rely on injection. Google Guice in Xtext All Xtext components rely on Google Guice dependency injection, even the classes that Xtext generates for your DSL. This means that in your classes, if you need to use a class from Xtext, you just have to declare a field of such type with the @Inject annotation. The injection mechanism allows a DSL developer to customize basically every component of the Xtext framework. This boils down to another property of dependency injection, which, in fact, inverts dependencies. The Xtext runtime can use your classes without having a dependency to its implementer. Instead, the implementer has a dependency on the interface defined by the Xtext runtime. For this reason, dependency injection is said to implement inversion of control and the dependency inversion principle. When running the MWE2 workflow, Xtext generates both a fully configured module and an empty module that inherits from the generated one. This allows you to override generated or default bindings. Customizations are added to the empty stub module. The generated module should not be touched. 
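To make the warning above concrete, here is a small hedged sketch showing why instances must come from the injector. It reuses the Service and StandardModule classes from this example; the wrapper class and its main method are only illustrative.

import com.google.inject.Guice;
import com.google.inject.Injector;

public class InjectionPitfallExample {
  public static void main(String[] args) {
    // Created through the injector: the @Inject fields are populated.
    Injector injector = Guice.createInjector(new StandardModule());
    Service good = injector.getInstance(Service.class);
    good.execute("works fine");

    // Created with new: injection never runs, so logger and processor
    // stay null and this call fails with a NullPointerException.
    Service broken = new Service();
    broken.execute("fails at runtime");
  }
}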
Xtext generates one runtime module that defines the non-user interface-related parts of the configuration and one specific for usage in the Eclipse IDE. Guice provides a mechanism for composing modules that is used by Xtext—the module in the UI project uses the module in the runtime project and overrides some bindings. Let's consider the Entities DSL example. You can find in the src directory of the runtime project the Xtend class EntitiesRuntimeModule, which inherits from AbstractEntitiesRuntimeModule in the src-gen directory. Similarly, in the UI project, you can find in the src directory the Xtend class EntitiesUiModule, which inherits from AbstractEntitiesUiModule in the src-gen directory. The Guice modules in src-gen are already configured with the bindings for the stub classes generated during the MWE2 workflow. Thus, if you want to customize an aspect using a stub class, then you do not have to specify any specific binding. The generated stub classes concern typical aspects that the programmer usually wants to customize, for example, validation and generation in the runtime project, and labels, and outline in the UI project (as we will see in the next sections). If you need to customize an aspect which is not covered by any of the generated stub classes, then you will need to write a class yourself and then specify the binding for your class in the Guice module in the src folder. We will see an example of this scenario in the Other customizations section. Bindings in these Guice module classes can be specified as we saw in the previous section, by implementing the configure method. However, Xtext provides an enhanced API for defining bindings; Xtext reflectively searches for methods with a specific signature in order to find Guice bindings. Thus, assuming you want to bind a BaseClass class to your derived CustomClass, you can simply define a method in your module with a specific signature, as follows: def Class<? extendsBaseClass>bindBaseClass() {   returnCustomClass } Remember that in Xtend, you must explicitly specify that you are overriding a method of the base class; thus, in case the bind method is already defined in the  base class, you need to use override instead of def. These methods are invoked reflectively, thus their signature must follow the expected convention. We refer to the official Xtext documentation for the complete description of the module API. Typically, the binding methods that you will see in this book will have the preceding shape, in particular, the name of the method must start with bind followed by the name of the class or interface we want to provide a binding for. It is important to understand that these bind methods do not necessarily have to override a method in the module base class. You can also make your own classes, which are not related to Xtext framework classes at all, participants of this injection mechanism, as long as you follow the preceding convention on method signatures. In the rest of this article, we will show examples of customizations of both IDE and runtime concepts. For most of these customizations, we will modify the corresponding Xtend stub class that Xtext generated when running the MWE2 workflow. As hinted before, in these cases, we will not need to write a custom Guice binding. We will also show an example of a customization, which does not have an automatically generated stub class. Xtext uses injection to inject services and not to inject state (apart from EMF Singleton registries). 
Thus, the things that are injected are interfaces consisting of functions that take state as arguments (for example, the document, the resource, and so on.). This leads to a service-oriented architecture, which is different from an object-oriented architecture where state is encapsulated with operations. An advantage of this approach is that there are far less problems with synchronization of multiple threads. Customizations of IDE concepts In this section, we show typical concepts of the IDE for your DSL that you may want to customize. Xtext shows its usability in this context as well, since, as you will see, it reduces the customization effort. Labels Xtext UI classes make use of an ILabelProvider interface to obtain textual labels and icons through its methods getText and getImage, respectively. ILabelProvider is a standard component of Eclipse JFace-based viewers. You can see the label provider in action in the Outline view and in content assist proposal popups (as well as in various other places). Xtext provides a default implementation of a label provider for all DSLs, which does its best to produce a sensible representation of the EMF model objects using the name feature, if it is found in the corresponding object class, and a default image. You can see that in the Outline view when editing an entities file, refer to the following screenshot: However, you surely want to customize the representation of some elements of your DSL. The label provider Xtend stub class for your DSL can be found in the UI plug-in project in the subpackageui.labeling. This stub class extends the base class DefaultEObjectLabelProvider. In the Entities DSL, the class is called EntitiesLabelProvider. This class employs a Polymorphic Dispatcher mechanism, which is also used in many other places in Xtext. Thus, instead of implementing the getText and getImage methods, you can simply define several versions of methods text and image taking as parameter an EObject object of the type you want to provide a representation for. Xtext will then search for such methods according to the runtime type of the elements to represent. For example, for our Entities DSL, we can change the textual representation of attributes in order to show their names and a better representation of types (for example, name : type). We then define a method text taking Attribute as a parameter and returning a string: classEntitiesLabelProviderextends ... {     @Inject extensionTypeRepresentation   def text(Attribute a) { a.name +       if (a.type != null)          " : " + a.type.representation       else ""   } } To get a representation of the AttributeType element, we use an injected extension, TypeRepresentation, in particular its method representation: classTypeRepresentation { def representation(AttributeType t) { valelementType = t.elementType valelementTypeRepr =       switch (elementType) { BasicType : elementType.typeName EntityType : elementType?.entity.name       } elementTypeRepr + if (t.array) "[]"else""   } } Remember that the label provider is used, for example, for the Outline view, which is refreshed when the editor contents change, and its contents might contain errors. Thus, you must be ready to deal with an incomplete model, and some features might still be null. That is why you should always check that the features are not null before accessing them. Note that we inject an extension field of type TypeRepresentation instead of creating an instance with new in the field declaration. 
Although it is not necessary to use injection for this class, we decided to rely on it because in the future we might want to provide a different implementation for that class. Another reason for using injection instead of new is that the other class may itself rely on injection in the future. Using injection leaves the door open for future and unanticipated customizations. The Outline view now appears as in the following screenshot:

We can further enrich the labels for entities and attributes using images for them. To do this, we create a directory in the org.example.entities.ui project where we place the image files of the icons we want to use. In order to benefit from Xtext's default handling of images, we call the directory icons, and we place two gif images there, Entity.gif and Attribute.gif (for entities and attributes, respectively). You will find the icon files in the accompanying source code in the org.example.entities.ui/icons folder. We then define two image methods in EntitiesLabelProvider, where we only need to return the name of the image files and Xtext will do the rest for us:

class EntitiesLabelProvider extends DefaultEObjectLabelProvider {
  ... // as before
  def image(Entity e) { "Entity.gif" }
  def image(Attribute a) { "Attribute.gif" }
}

You can see the result by relaunching Eclipse, as seen in the following screenshot:

Now, the entities and attributes labels look nicer. If you plan to export the plugins for your DSL so that others can install them in their Eclipse, you must make sure that the icons directory is added to the build.properties file, otherwise that directory will not be exported. The bin.includes section of the build.properties file of your UI plugin should look like the following:

bin.includes = META-INF/, ., plugin.xml, icons/

The Outline view

The default Outline view comes with nice features. In particular, it provides toolbar buttons to keep the Outline view selection synchronized with the element currently selected in the editor. Moreover, it provides a button to sort the elements of the tree alphabetically. By default, the tree structure is built using the containment relations of the metamodel of the DSL. This strategy is not optimal in some cases. For example, an Attribute definition also contains the AttributeType element, which is a structured definition with children (for example, elementType, array, and length). This is reflected in the Outline view (refer to the previous screenshot) if you expand the Attribute elements. This shows unnecessary elements, such as BasicType names, which are now redundant since they are shown in the label of the attribute, and additional elements that are not representable with a name, such as the array feature. We can influence the structure of the Outline tree using the generated stub class EntitiesOutlineTreeProvider, found in the src folder in the package org.example.entities.ui.outline. Also in this class, customizations are specified in a declarative way using the polymorphic dispatch mechanism. The official documentation, https://www.eclipse.org/Xtext/documentation/, details all the features that can be customized. In our example, we just want to make sure that the nodes for attributes are leaf nodes, that is, they cannot be further expanded and they have no children. In order to achieve this, we just need to define a method named _isLeaf (note the underscore) with a parameter of the type of the element, returning true.
Thus, in our case we write the following code:

class EntitiesOutlineTreeProvider extends DefaultOutlineTreeProvider {
  def _isLeaf(Attribute a) { true }
}

Let's relaunch Eclipse and see that the attribute nodes do not expose children anymore.

Besides defining leaf nodes, you can also specify the children in the tree for a specific node by defining a _createChildren method taking as parameters the type of outline node and the type of the model element. This can be useful to define the actual root elements of the Outline tree. By default, the tree is rooted with a single node for the source file. In this example, it might be better to have a tree with many root nodes, each one representing an entity. The root of the Outline tree is always represented by a node of type DefaultRootNode. The root node is actually not visible; it is just the container of all nodes that will be displayed as roots in the tree. Thus, we define the following method (our Entities model is rooted by a Model element):

public class EntitiesOutlineTreeProvider ... {
  ... // as before
  def void _createChildren(DocumentRootNode outlineNode,
                           Model model) {
    model.entities.forEach[
      entity | createNode(outlineNode, entity);
    ]
  }
}

This way, when the Outline tree is built, we create a root node for each entity instead of having a single root for the source file. The createNode method is part of the Xtext base class. The result can be seen in the following screenshot:

Customizing other aspects

We could show how to customize the content assistant, but there is no need to do this for the simple Entities DSL, since the default implementation already does a fine job.

Custom formatting

An editor for a DSL should provide a mechanism for rearranging the text of the program in order to improve its readability, without changing its semantics. For example, nested regions inside blocks should be indented, and the user should be able to achieve that with a menu. Besides that, implementing a custom formatter has other benefits, since the formatter is automatically used by Xtext when you change the EMF model of the AST. If you tried to apply the quickfixes, you might have noticed that after the EMF model has changed, the editor immediately reflects this change. However, the resulting textual representation is not well formatted, especially for the quickfix that adds the missing referred entity. In fact, the EMF model representing the AST does not contain any information about the textual representation, that is, all white space characters are not part of the EMF model (after all, the AST is an abstraction of the actual program). Xtext keeps track of such information in another in-memory model called the node model. The node model carries the syntactical information, that is, offset and length in the textual document. However, when we manually change the EMF model, we do not provide any formatting directives, and Xtext uses the default formatter to get a textual representation of the modified or added model parts. Xtext already generates the menu for formatting your DSL source programs in the Eclipse editor. As is standard in Eclipse editors (for example, the JDT editor), you can access the Format menu from the context menu of the editor or using the Ctrl + Shift + F key combination. The default formatter is OneWhitespaceFormatter, and you can test this in the Entities DSL editor; this formatter simply separates all tokens of your program with a space. Typically, you will want to change this default behavior.
If you provide a custom formatter, it will be used not only when the Format menu is invoked, but also when Xtext needs to update the editor contents after a manual modification of the AST model, for example, a quickfix performing a semantic modification. The easiest way to customize the formatting is to have the Xtext generator create a stub class. To achieve this, you need to add the following formatter specification in the StandardLanguage block in the MWE2 workflow file, requesting the generation of an Xtend stub class:

language = StandardLanguage {
  name = "org.example.entities.Entities"
  fileExtensions = "entities"
  ...
  formatter = {
    generateStub = true
    generateXtendStub = true
  }
}

If you now run the workflow, you will find the formatter Xtend stub class in the main plugin project in the formatting2 package. For our Entities DSL, the class is org.example.entities.formatting2.EntitiesFormatter. This stub class extends the Xtext class AbstractFormatter2. Note that the name of the package ends with 2. That is because Xtext recently completely changed the formatter customization API. The old formatter is still available, though deprecated, so the new formatter classes have the 2 in the package name in order not to be mixed up with the old formatter classes. In the generated stub class, you will get lots of warnings of the shape Discouraged access: the type AbstractFormatter2 is not accessible due to restriction on required project org.example.entities. That is because the new formatting API is still provisional, and it may change in future releases in a non-backward compatible way. Once you are aware of that, you can decide to ignore the warnings. In order to make the warnings disappear from the Eclipse project, you can configure the specific project settings to ignore such warnings, as shown in the following screenshot:

The Xtend stub class already implements a few dispatch methods, taking as parameters the AST element to format and an IFormattableDocument object. The latter is used to specify the formatting requests. A formatting request will result in a textual replacement in the program text. Since it is an extension parameter, you can use its methods as extension methods (for more details on extension methods, refer to the Xtend documentation). The IFormattableDocument interface provides a Java API for specifying formatting requests. Xtend features such as extension methods and lambdas allow you to specify formatting requests in an easy and readable way. The typical formatting requests are line wraps, indentations, space addition and removal, and so on. These will be applied to the textual regions of AST elements. As we will show in this section, the textual regions can be specified by the EObject of the AST or by its keywords and features. For our Entities DSL, we decide to perform formatting as follows:

Insert two newlines after each entity so that entities are separated by an empty line; after the last entity, we want a single empty line.
Indent attributes between the entity's curly brackets.
Insert one line wrap after each attribute declaration.
Make sure that the entity name, the super entity, and the extends keyword are surrounded by a single space.
Remove possible white spaces around the ; of an attribute declaration.
To achieve the empty lines between entities, we modify the stub method for the Entities Model element:

def dispatch void format(Model model,
                         extension IFormattableDocument document) {
  val lastEntity = model.entities.last
  for (entity : model.entities) {
    entity.format
    if (entity === lastEntity)
      entity.append[setNewLines(1)]
    else
      entity.append[setNewLines(2)]
  }
}

We append two newlines after each entity. This way, each entity will be separated by an empty line, since each entity, except for the first one, will start on the second added newline. We append only one newline after the last entity. Now start a new Eclipse instance and manually test the formatter with some entities, by pressing Ctrl + Shift + F.

We modify the format stub method for the Entity elements. In order to separate each attribute, we follow a logic similar to the previous format method. For the sake of the example, we use a different version of setNewLines, that is, setNewLines(int minNewLines, int defaultNewLines, int maxNewLines), whose signature is self-explanatory:

for (attribute : entity.attributes) {
  attribute.append[setNewLines(1, 1, 2)]
}

Up to now, we referred to a textual region of the AST by specifying the EObject. Now, we need to specify the textual regions of keywords and features of a given AST element. In order to specify that the "extends" keyword is surrounded by one single space, we write the following:

entity.regionFor.keyword("extends").surround[oneSpace]

We also want to have no space around the terminating semicolon of attributes, so we write the following:

attribute.regionFor.keyword(";").surround[noSpace]

In order to specify that the entity's name and the super entity are surrounded by one single space, we write the following:

entity.regionFor.feature(ENTITY__NAME).surround[oneSpace]
entity.regionFor.feature(ENTITY__SUPER_TYPE).surround[oneSpace]

This requires having statically imported all the EntitiesPackage.Literals members, as follows:

import static org.example.entities.entities.EntitiesPackage.Literals.*

Finally, we want to handle the indentation inside the curly brackets of an entity and to have a newline after the opening curly bracket. This is achieved with the following lines:

val open = entity.regionFor.keyword("{")
val close = entity.regionFor.keyword("}")
open.append[newLine]
interior(open, close)[indent]

Summarizing, the format method for an Entity is the following one:

def dispatch void format(Entity entity,
                         extension IFormattableDocument document) {
  entity.regionFor.keyword("extends").surround[oneSpace]
  entity.regionFor.feature(ENTITY__NAME).surround[oneSpace]
  entity.regionFor.feature(ENTITY__SUPER_TYPE).surround[oneSpace]

  val open = entity.regionFor.keyword("{")
  val close = entity.regionFor.keyword("}")
  open.append[newLine]

  interior(open, close)[indent]

  for (attribute : entity.attributes) {
    attribute.regionFor.keyword(";").surround[noSpace]
    attribute.append[setNewLines(1, 1, 2)]
  }
}

Now, start a new Eclipse instance and manually test the formatter with some attributes and entities, by pressing Ctrl + Shift + F. In the generated Xtend stub class, you also find an injected extension for programmatically accessing the elements of your grammar.
In this DSL, it is the following:

@Inject extension EntitiesGrammarAccess

For example, to specify the left curly bracket of an entity, we could have written this alternative line:

val open = entity.regionFor.keyword(entityAccess.leftCurlyBracketKeyword_3)

Similarly, to specify the terminating semicolon of an attribute, we could have written this alternative line:

attribute.regionFor.keyword(attributeAccess.semicolonKeyword_2)
  .surround[noSpace]

Eclipse content assist will help you in selecting the right method to use. Note that the method names are suffixed with numbers that relate to the position of the keyword in the grammar's rule. Changing a rule in the DSL's grammar by adding elements or by removing some parts will make such method invocations invalid, since the method names will change. On the other hand, if you change a keyword in your grammar, for example, you use square brackets instead of curly brackets, then referring to keywords with string literals as we did in the original implementation of the format methods will issue no compilation errors, but the formatting will not work as expected anymore. Thus, you need to choose your preferred strategy according to the likelihood of your DSL's grammar evolution. You can also try and apply our quickfixes for missing entities, and you will see that the added entity is nicely formatted according to the logic we implemented. What is left to be done is to format the attribute type nicely, including the array specification. This is left as an exercise. The EntitiesFormatter you find in the accompanying sources of this example DSL also contains this formatting logic for attribute types. You should specify formatting requests so as to avoid conflicting requests on the same textual region. In case of conflicts, the formatter will throw an exception with the details of the conflict.

Other customizations

All the customizations you have seen so far were based on modifying a generated stub class with accompanying generated Guice bindings in the module under the src-gen directory. However, since Xtext relies on injection everywhere, it is possible to inject a custom implementation for any mechanism, even if no stub class has been generated. If you installed the Xtext SDK in your Eclipse, the sources of Xtext are available for you to inspect. You should learn to inspect these sources by navigating to them and seeing what gets injected and how it is used. Then, you are ready to provide a custom implementation and inject it. You can use the Eclipse Navigate menu. In particular, to quickly open a Java file (even from a library if it comes with sources), use Ctrl + Shift + T (Open Type…). This works both for Java classes and Xtend classes. If you want to quickly open another source file (for example, an Xtext grammar file), use Ctrl + Shift + R (Open Resource…). Both dialogs have a text field where, if you start typing, the available elements soon show up. Eclipse supports CamelCase everywhere, so you can just type the capital letters of a compound name to quickly get to the desired element. For example, to open the EntitiesRuntimeModule Java class, use the Open Type… menu and just type ERM to see the filtered results. As an example, we show how to customize the output directory where the generated files will be stored (the default is src-gen).
Of course, this output directory can be modified by the user using the Properties dialog that Xtext generated for your DSL, but we want to customize the default output directory for the Entities DSL so that it becomes entities-gen. The default output directory is retrieved internally by Xtext using an injected IOutputConfigurationProvider instance. If you take a look at this interface (see the preceding tip), you will see the following:

import com.google.inject.ImplementedBy;

@ImplementedBy(OutputConfigurationProvider.class)
public interface IOutputConfigurationProvider {

  Set<OutputConfiguration> getOutputConfigurations();

  ...

The @ImplementedBy Guice annotation tells the injection mechanism the default implementation of the interface. Thus, what we need to do is create a subclass of the default implementation (that is, OutputConfigurationProvider) and provide a custom binding for the IOutputConfigurationProvider interface. The method we need to override is getOutputConfigurations; if we take a look at its default implementation, we see the following:

public Set<OutputConfiguration> getOutputConfigurations() {
  OutputConfiguration defaultOutput =
    new OutputConfiguration(IFileSystemAccess.DEFAULT_OUTPUT);
  defaultOutput.setDescription("Output Folder");
  defaultOutput.setOutputDirectory("./src-gen");
  defaultOutput.setOverrideExistingResources(true);
  defaultOutput.setCreateOutputDirectory(true);
  defaultOutput.setCleanUpDerivedResources(true);
  defaultOutput.setSetDerivedProperty(true);
  defaultOutput.setKeepLocalHistory(true);

  return newHashSet(defaultOutput);
}

Of course, the interesting part is the call to setOutputDirectory. We define an Xtend subclass as follows:

class EntitiesOutputConfigurationProvider extends OutputConfigurationProvider {

  public static val ENTITIES_GEN = "./entities-gen"

  override getOutputConfigurations() {
    super.getOutputConfigurations() => [
      head.outputDirectory = ENTITIES_GEN
    ]
  }
}

Note that we use a public constant for the output directory since we might need it later in other classes. We use several Xtend features: the with operator, the implicit static extension method head, which returns the first element of a collection, and the syntactic sugar for setter methods. We create this class in the main plug-in project, since this concept is not just a UI concept and it is also used in other parts of the framework. Since it deals with generation, we create it in the generator subpackage. Now, we must bind our implementation in the EntitiesRuntimeModule class:

class EntitiesRuntimeModule extends AbstractEntitiesRuntimeModule {

  def Class<? extends IOutputConfigurationProvider>
      bindIOutputConfigurationProvider() {
    return EntitiesOutputConfigurationProvider
  }
}

If we now relaunch Eclipse, we can verify that the Java code is generated into entities-gen instead of src-gen. If you previously used the same project, the src-gen directory might still be there from previous generations; you need to remove it manually and set the new entities-gen as a source folder.

Summary

In this article, we introduced the Google Guice dependency injection framework on which Xtext relies. You should now be aware of how easy it is to inject custom implementations consistently throughout the framework. You also learned how to customize some basic runtime and IDE concepts for a DSL.

Resources for Article:

Further resources on this subject: Testing with Xtext and Xtend [article] Clojure for Domain-specific Languages - Design Concepts with Clojure [article] Java Development [article]
Introducing Mobile Forensics
Packt
07 Sep 2016
21 min read
In this article by Oleg Afonin and Vladimir Katalov, the authors of the book Mobile Forensics – Advanced Investigative Strategies, we will see that today's smartphones are used less for calling and more for socializing, this has resulted in the smartphones holding a lot of sensitive data about their users. Mobile devices keep the user's contacts from a variety of sources (including the phone, social networks, instant messaging, and communication applications), information about phone calls, sent and received text messages, and e-mails and attachments. There are also browser logs and cached geolocation information; pictures and videos taken with the phone's camera; passwords to cloud services, forums, social networks, online portals, and shopping websites; stored payment data; and a lot of other information that can be vital for an investigation. (For more resources related to this topic, see here.) Needless to say, this information is very important for corporate and forensic investigations. In this book, we'll discuss not only how to gain access to all this data, but also what type of data may be available in each particular case. Tablets are no longer used solely as entertainment devices. Equipped with powerful processors and plenty of storage, even the smallest tablets are capable of running full Windows, complete with the Office suite. While not as popular as smartphones, tablets are still widely used to socialize, communicate, plan events, and book trips. Some smartphones are equipped with screens as large as 6.4 inches, while many tablets come with the ability to make voice calls over cellular network. All this makes it difficult to draw a line between a phone (or phablet) and a tablet. Every smartphone on the market has a camera that, unlike a bigger (and possibly better) camera, is always accessible. As a result, an average smartphone contains more photos and videos than a dedicated camera; sometimes, it's gigabytes worth of images and video clips. Smartphones are also storage devices. They can be used (and are used) to keep, carry, and exchange information. Smartphones connected to a corporate network may have access to files and documents not meant to be exposed. Uncontrolled access to corporate networks from employees' smartphones can (and does) cause leaks of highly-sensitive information. Employees come and go. With many companies allowing or even encouraging bring your own device policies, controlling the data that is accessible to those connecting to a corporate network is essential. What You Get Depends on What You Have Unlike personal computers that basically present a single source of information (the device itself consisting of hard drive(s) and volatile memory), mobile forensics deals with multiple data sources. Depending on the sources that are available, investigators may use one or the other tool to acquire information. The mobile device If you have access to the mobile device, you can attempt to perform physical or logical acquisition. Depending on the device itself (hardware) and the operating system it is running, this may or may not be possible. However, physical acquisition still counts as the most complete and up-to-date source of evidence among all available. Generally speaking, physical acquisition is available for most Android smartphones and tablets, older Apple hardware (iPhones up to iPhone 4, the original iPad, iPad mini, and so on), and recent Apple hardware with known passcode. As a rule, Apple devices can only be physically acquired if jailbroken. 
Since a jailbreak obtains superuser privileges by exploiting a vulnerability in iOS, and Apple actively fixes such vulnerabilities, physical acquisition of iOS devices remains iffy. A physical acquisition technique has been recently developed for some Windows phone devices using Cellebrite Universal Forensic Extraction Device (UFED). Physical acquisition is also available for 64-bit Apple hardware (iPhone 5S and newer, iPad mini 2, and so on). It is worth noting that physical acquisition of 64-bit devices is even more restrictive compared to the older 32-bit hardware, as it requires not only jailbreaking the device and unlocking it with a passcode, but also removing the said passcode from the security settings. Interestingly, according to Apple, even Apple itself cannot extract information from 64-bit iOS devices running iOS 8 and newer, even if they are served a court order. Physical acquisition is available on a limited number of BlackBerry smartphones running BlackBerry OS 7 and earlier. For BlackBerry smartphones, physical acquisition is available for unlocked BlackBerry 7 and lower devices, where supported, using Cellebrite UFED Touch/4PC through the bootloader method. For BlackBerry 10 devices where device encryption is not enabled, a chip-off can successfully acquire the device memory by parsing the physical dump using Cellebrite UFED. Personal computer Notably, the user's personal computer can help acquiring mobile evidence. The PC may contain the phone's offline data backups (such as those produced by Apple iTunes) that contain most of the information stored in the phone and available (or unavailable) during physical acquisition. Lockdown records are created when an iOS device is physically connected to the computer and authorized through iTunes. Lockdown records may be used to gain access to an iOS device without entering the passcode. In addition, the computer may contain binary authentication tokens that can be used to access respective cloud accounts linked to user's mobile devices. Access to cloud storage Many smartphones and tablets, especially those produced by Apple, offer the ability to back up information into an online cloud. Apple smartphones, for example, will automatically back up their content to Apple iCloud every time they are connected to a charger within the reach of a known Wi-Fi network. Windows phone devices exhibit similar behavior. Google, while not featuring full cloud backups like Apple or Microsoft, collects and retains even more information through Google Mobile Services (GMS). This information can also be pulled from the cloud. Since cloud backups are transparent, non-intrusive and require no user interaction, they are left enabled by default by many smartphone users, which makes it possible for an investigator to either acquire the content of the cloud storage or request it from the respective company with a court order. In order to successfully access the phone's cloud storage, one needs to know the user's authentication credentials (login and password). It may be possible to access iCloud by using binary authentication tokens extracted from the user's computer. With manufacturers quickly advancing in their security implementations, cloud forensics is quickly gaining importance and recognition among digital forensic specialists. Stages of mobile forensics This section will briefly discuss the general stages of mobile forensics and is not intended to provide a detailed explanation of each stage. 
There is more-than-sufficient documentation that can be easily accessed on the Internet that provides an intimate level of detail regarding the stages of mobile forensics. The most important concept for the reader to understand is this: have the least level of impact on the mobile device during all the stages. In other words, an examiner should first work on the continuum of the least-intrusive method to the most-intrusive method, which can be dictated by the type of data needing to be obtained from the mobile device and complexity of the hardware/software of the mobile device. Stage one – device seizure This stage pertains to the physical seizure of the device so that it comes under the control and custody of the investigator/examiner. Consideration must also be given to the legal authority or written consent to seize, extract, and search this data. The physical condition of the device at the time of seizure should be noted, ideally through digital photographic documentation and written notes, such as: Is the device damaged? If, yes, then document the type of damage. Is the device on or off? What is the device date and time if the device is on? If the device is on, what apps are running or observable on the device desktop? If the device is on, is the device desktop accessible to check for passcode and security settings? Several other aspects of device seizure are described in the following as they will affect post-seizure analysis: radio isolation, turning the device off if it is on, remote wipe, and anti-forensics. Seizing – what and how to seize? When it comes to properly acquiring a mobile device, one must be aware of the many differences in how computers and mobile devices operate. Seizing, handling, storing, and extracting mobile devices must follow a different route compared to desktop and even laptop computers. Unlike PCs that can be either online or offline (which includes energy-saving states of sleep and hibernation), smartphones and tablets use a different, always-connected modus of operandi. Tremendous amounts of activities are carried out in the background, even while the device is apparently sleeping. Activities can be scheduled or triggered by a large number of events, including push events from online services and events that are initiated remotely by the user. Another thing to consider when acquiring a mobile device is security. Mobile devices are carried around a lot, and they are designed to be inherently more secure than desktop PCs. Non-removable storage and soldered RAM chips, optional or enforced data encryption, remote kill switches, secure lock screens, and locked bootloaders are just a few security measures to be mentioned. The use of Faraday bags Faraday bags are commonly used to temporarily store seized devices without powering them down. A Faraday bag blocks wireless connectivity to cellular networks, Wi-Fi, Bluetooth, satellite navigation, and any other radios used in mobile devices. Faraday bags are normally designed to shield the range of radio frequencies used by local cellular carriers and satellite navigation (typically the 700-2,600 MHz), as well as the 2.4-5 GHz range used by Wi-Fi networks and Bluetooth. Many Faraday bags are made of specially-coated metallic shielding material that blocks a wide range of radio frequencies. Keeping the power on When dealing with a seized device, it is essential to prevent the device from powering off. Never switching off a working device is one thing, preventing it from powering down is another. 
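Returning briefly to the written notes mentioned under device seizure, part of that documentation can be captured in a structured, repeatable form. The following Python sketch is only an illustration of the idea, not a field standard or an existing tool: the field names, the case number, and the evidence_photos directory are all hypothetical, and a real laboratory will have its own chain-of-custody forms. Hashing the seizure photographs at the time they are taken makes it possible to show later that they have not been altered.

import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path):
    # Hash the file in 1 MB chunks so large photographs do not need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def seizure_record(case_id, device, powered_on, damage, photo_dir):
    # Build a structured seizure note; every photograph is hashed for later verification.
    photos = sorted(Path(photo_dir).glob("*.jpg"))
    return {
        "case_id": case_id,
        "recorded_at_utc": datetime.now(timezone.utc).isoformat(),
        "device": device,
        "powered_on": powered_on,
        "observed_damage": damage,
        "photos": [{"file": p.name, "sha256": sha256_of(p)} for p in photos],
    }

if __name__ == "__main__":
    record = seizure_record(
        case_id="2016-042",                        # hypothetical case number
        device="Black iPhone 5S, no visible SIM",  # free-text device description
        powered_on=True,
        damage="Cracked screen, lower-left corner",
        photo_dir="evidence_photos",               # hypothetical folder of seizure photographs
    )
    print(json.dumps(record, indent=2))

The record can be printed, stored alongside the evidence, or merged into the case file; the important point is that the state of the device and the supporting photographs are documented before any acquisition step is attempted.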
Since mobile devices consume power even while the display is off, the standard practice is connecting the device to a charger and placing it into a wireless-blocking Faraday bag. This will prevent the mobile device from shutting down after reaching the low-power state. Why exactly do we need this procedure? The thing is, you may be able to extract more information from a device that was used (unlocked at least once) after the last boot cycle compared to a device that boots up in your lab and you don't know the passcode. To illustrate the potential outcome, let's say you seized an iPhone that is locked with an unknown passcode. The iPhone happens to be jailbroken, so you can attempt using Elcomsoft iOS Forensic Toolkit to extract information. If the device is locked and you don't know the passcode, you will have access to a very limited set of data: Recent geolocation information: Since the main location database remains encrypted, it is only possible to extract limited location data. This limited location data is only accessible if the device was unlocked at least once after the boot has completed. As a result, if you keep the device powered on, you may pull recent geolocation history from this device. If, however, the device shuts down and is only powered on in the lab, the geolocation data will remain inaccessible until the device is unlocked. Incoming calls (numbers only) and text messages: Incoming text messages are temporarily retained unencrypted before the first unlock after cold boot. Once the device is unlocked for the first time after cold boot, the messages will be transferred into the main encrypted database. This means that acquiring a device that was never unlocked after a cold start will only allow access to text messages received by the device during the time it remained locked after the boot. If the iPhone being acquired was unlocked at least once after it was booted (for example, if the device was seized in a turned-on state), you may be able to access significantly more information. The SMS database is decrypted on first unlock, allowing you pulling all text messages and not just those that were received while the device remained locked. App and system logs (installs and updates, net access logs, and so on). SQLite temp files, including write-ahead logs (WAL): These WAL may include messages received by applications such as Skype, Viber, Facebook Messenger, and so on. Once the device is unlocked, the data is merged with corresponding apps' main databases. When extracting a device after a cold boot (never unlocked), you will only have access to notifications received after the boot. If, however, you are extracting a device that was unlocked at least once after booting up, you may be able to extract the complete database with all messages (depending on the data protection class selected by the developer of a particular application). Dealing with the kill switch Mobile operating systems such as Apple iOS, recent versions of Google Android, all versions of BlackBerry OS, and Microsoft Windows phone 8/8.1 (Windows 10 mobile) have an important security feature designed to prevent unauthorized persons from accessing information stored in the device. The so-called kill switch enables the owner to lock or erase the device if the device is reported lost or stolen. While used by legitimate customers to safeguard their data, this feature is also used by suspects who may attempt to remotely destroy evidence if their mobile device is seized. 
In the recent Morristown man accused of remotely wiping nude photos of underage girlfriend on confiscated phone report (http://wate.com/2015/04/07/morristown-man-accused-of-remotely-wiping-nude-photos-of-underage-girlfriend-on-confiscated-phone/), the accused used the remote kill switch to wipe data stored on his iPhone. Using the Faraday bag is essential to prevent suspects from accessing the kill switch. However, even if the device in question has already been wiped remotely, it does not necessarily mean that all the data is completely lost. Apple iOS, Windows phone 8/8.1, Windows 10 mobile, and the latest version of Android (Android 6.0 Marshmallow) support cloud backups (albeit Android cloud backups contains limited amounts of data). When it comes to BlackBerry 10, the backups are strictly offline, yet the decryption key is tied to the user's BlackBerry ID and stored on BlackBerry servers. The ability to automatically upload backup copies of data into the cloud is a double-edged sword. While offering more convenience to the user, cloud backups make remote acquisition techniques possible. Depending on the platform, all or some information from the device can be retrieved from the cloud by either making use of a forensic tool (for example, Elcomsoft Phone Breaker, Oxygen Forensic Detective) or by serving a government request to the corresponding company (Apple, Google, Microsoft, or BlackBerry). Mobile device anti-forensics There are numerous anti-forensic methods that target evidence acquisition methods used by the law enforcement. It is common for the police to seize a device, connect it to a charger, and place into a Faraday bag. The anti-forensic method used by some technologically-advanced suspects on Android phones involves rooting the device and installing a tool that monitors wireless connectivity of the device. If the tool detects that the device has been idle, connected to a charger, and without wireless connectivity for a predefined period, it performs a factory reset. Since there is no practical way of determining whether such protection is active on the device prior to acquisition, simply following established guidelines presents a risk of evidence being destroyed. If there are reasonable grounds to suspect such a system may be in place, the device can be powered down (while realizing the risk of full-disk encryption preventing subsequent acquisition). While rooting or jailbreaking devices generally makes the device susceptible to advanced acquisition methods, we've seen users who unlocked their bootloader to install a custom recovery, protected access to this custom recovery with a password, and relocked the bootloader. Locked bootloader and password-protected access to custom recovery is an extremely tough combination to break. In several reports, we've become aware of the following anti-forensic technique used by a group of cyber criminals. The devices were configured to automatically wipe user data if certain predefined conditions were met. In this case, the predefined conditions triggering the wipe matched the typical acquisition scenario of placing the device inside a Faraday bag and connecting it to a charger. Once the device reported being charged without wireless connectivity (but not in airplane mode) for a certain amount of time, a special tool triggers a full factory reset of the device. Notably, this is only possible on rooted/jailbroken devices. So far, this anti-forensic technique did not receive a wide recognition. 
It's used by a small minority of smartphone users, mostly those into cybercrime. The low probability of a smartphone being configured that way is small enough to consider implementing changes to published guidelines. Stage two – data acquisition This stage refers to various methods of extracting data from the device. The methods of data extraction that can be employed are influenced by the following: Type of mobile device: The make, model, hardware, software, and vendor configuration. Availability of a diverse set of hardware and software extraction/analysis tools at the examiner's disposal: There is no tool that does it all, an examiner needs to have access to a number of tools that can assist with data extraction. Physical state of device: Has the device been exposed to damage, such as physical, water, biological fluids such as blood? Often the type of damage can dictate the types of data extraction measures that will be employed on the device. There are several different types of data extractions that determine how much data is obtained from the device: Physical: Binary image of the device has the most potential to recover deleted data and obtains the largest amount of data from the device. This can be the most challenging type of extraction to obtain. File system: This is a representation of the files and folders from the user area of the device, and can contain deleted data, specific to databases. This method will contain less data than a physical data extraction. Logical: This acquires the least amount of data from the device. Examples of this are call history, messages, contacts, pictures, movies, audio files, and so on. This is referred to as low-hanging fruit. No deleted data or source files are obtained. Often the resulting output will be a series of reports produced by the extraction tool. This is often the easiest and quickest type of extraction. Photographic documentation: This method is typically used when all other data extraction avenues are exhausted. In this procedure, the examiner uses a digital camera to photographically document the content being displayed by the device. This is a time-consuming method when there is an extensive amount of information to photograph. Specific data-extraction concepts are explained here: bootloader, jailbreak, rooting, adb, debug, and sim cloning. Root, jailbreak, and unlocked bootloader Rooting or jailbreaking the mobile devices in general makes them susceptible to a wide range of exploits. In the context of mobile forensics, rooted devices are easy to acquire since many forensic acquisition tools rely on root/jailbreak to perform physical acquisition. Devices with unlocked bootloaders allow booting unsigned code, effectively permitting full access to the device even if it's locked with a passcode. However, if the device is encrypted and the passcode is part of the encryption key, bypassing passcode protection may not automatically enable access to encrypted data. Rooting or jailbreaking enables unrestricted access to the filesystem, bypassing the operating system's security measures and allowing the acquisition tool to read information from protected areas. This is one of the reasons for banning rooted devices (as well as devices with unlocked bootloaders) from corporate premises. Installing a jailbreak on iOS devices always makes the phone less secure, enabling third-party code to be injected and run on a system level. 
This fact is well-known to forensic experts who make use of tools such as Cellebrite UFED or Elcomsoft iOS Forensic Toolkit to perform physical acquisition of jailbroken Apple smartphones. Some Android devices allow unlocking the bootloader, which enables easy and straightforward rooting of the device. While not all Android devices with unlocked bootloaders are rooted, installing root access during acquisition of a bootloader-unlocked device has a much higher chance of success compared to devices that are locked down. Tools such as Cellebrite UFED, Forensic Toolkit (FTK), Oxygen Forensic Suite, and many others can make use of the phone's root status in order to inject acquisition applets and image the device. Unlocked bootloaders can be exploited as well if you use UFED. A bootloader-level exploit exists and is used in UFED to perform acquisition of many Android and Windows phone devices based on the Qualcomm reference platform even if their bootloader is locked. Android ADB debugging Android has a hidden Developer Options menu. Accessing this menu requires a conscious effort of tapping on the OS build number multiple times. Some users enable Developer Options out of curiosity. Once enabled, the Developer Options menu may or may not be possible to hide. Among other things, the Developer Options menu lists an option called USB debugging or ADB debugging. If enabled, this option allows controlling the device via ADB command line, which in turn allows experts using Android debugging tools (adb.exe) to connect to the device from a PC even if it's locked with a passcode. Activated USB debugging exposes a lot of possibilities and can make acquisition possible even if the device is locked with a passcode. Memory card Most smartphone devices and tablets (except iOS devices) have the capability of increasing their storage capacity by using a microSD card. An examiner would remove the memory card from the mobile device/tablet and use either hardware or software write-protection methods, and create a bit stream forensic image of the memory card, which can then be analyzed using forensic software tools, such as X-Ways, Autopsy Sleuth Kit, Forensic Explorer (GetData), EnCase, or FTK (AccessData). Stage three – data analysis This stage of mobile device forensics entails analysis of the acquired data from the device and its components (SIM card and memory card if present). Most mobile forensic acquisition tools that acquire the data from the device memory can also parse the extracted data and provide the examiner functionality within the tool to perform analysis. This entails review of any non-deleted and deleted data. When reviewing non-deleted data, it would be prudent to also perform a manual review of the device to ensure that the extracted and parsed data matches what is displayed by the device. As mobile device storage capacities have increased, it is suggested that a limited subset of data records from the relevant areas be reviewed. So, for example, if a mobile device has over 200 call records, reviewing several call records from missed calls, incoming calls, and outgoing calls can be checked on the device in relation to the similar records in the extracted data. By doing this manual review, it is then possible to discover any discrepancies in the extracted data. Manual device review can only be completed when the device is still in the custody of the examiner. There are situations where, after the data extraction has been completed, the device is released back to the investigator or owner. 
In situations such as this, the examiner should document that very limited or no manual verification can be performed due to these circumstances. Finally, the reader should be keenly aware that more than one analysis tool can be used to analyze the acquired data. Multiple analysis tools should be considered, especially when a specific type of data cannot be parsed by one tool, but can be analyzed by another. Summary In this article, we've covered the basics of mobile forensics. We discussed the amount of evidence available in today's mobile devices and covered the general steps of mobile forensics. We also discussed how to seize, handle, and store mobile devices, and looked at how criminals can use technology to prevent forensic access. We provided a general overview of the acquisition and analysis steps. For more information on mobile forensics, you can refer to the following books by Packt: Practical Mobile Forensics - Second Edition: https://www.packtpub.com/networking-and-servers/practical-mobile-forensics-second-edition Mastering Mobile Forensics: https://www.packtpub.com/networking-and-servers/mastering-mobile-forensics Learning iOS Forensics: https://www.packtpub.com/networking-and-servers/learning-ios-forensics  Resources for Article: Further resources on this subject: Mobile Forensics [article] Mobile Forensics and Its Challanges [article] Introduction to Mobile Forensics [article]
Introducing Penetration Testing
Packt
07 Sep 2016
28 min read
In this article by Kevin Cardwell, the author of the book Building Virtual Pentesting Labs for Advanced Penetration Testing, we will discuss the role that pen testing plays in the professional security testing framework. We will discuss the following topics: Defining security testing An abstract security testing methodology Myths and misconceptions about pen testing (For more resources related to this topic, see here.) If you have been doing penetration testing for some time and are very familiar with the methodology and concept of professional security testing, you can skip this article or just skim it. But you might learn something new or at least a different approach to penetration testing. We will establish some fundamental concepts in this article. Security testing If you ask 10 consultants to define what security testing is today, you will more than likely get a variety of responses. Here is the Wikipedia definition: "Security testing is a process and methodology to determine that an information system protects and maintains functionality as intended." In my opinion, this is the most important aspect of penetration testing. Security is a process and not a product. I'd also like to add that it is a methodology and not a product. Another component to add to our discussion is the point that security testing takes into account the main areas of a security model. A sample of this is as follows: Authentication Authorization Confidentiality Integrity Availability Non-repudiation Each one of these components has to be considered when an organization is in the process of securing their environment. Each one of these areas in itself has many subareas that also have to be considered when it comes to building a secure architecture. The lesson is that when testing security, we have to address each of these areas. Authentication It is important to note that almost all systems and/or networks today have some form of authentication and, as such, it is usually the first area we secure. This could be something as simple as users selecting a complex password or us adding additional factors to authentication, such as a token, biometrics, or certificates. No single factor of authentication is considered to be secure by itself in today's networks. Authorization Authorization is often overlooked since it is assumed and not a component of some security models. That is one approach to take, but it's preferred to include it in most testing models, as the concept of authorization is essential since it is how we assign the rights and permissions to access a resource, and we would want to ensure it is secure. Authorization enables us to have different types of users with separate privilege levels coexist within a system. We do this when we have the concept of discretionary access, where a user can have administrator privileges on a machine or assume the role of an administrator to gain additional rights or permissions, whereas we might want to provide limited resource access to a contractor. Confidentiality Confidentiality is the assurance that something we want to be protected on the machine or network is safe and not at risk of being compromised. This is made harder by the fact that the protocol (TCP/IP) running the Internet today is a protocol that was developed in the early 1970s. At that time, the Internet consisted of just a few computers, and now, even though the Internet has grown to the size it is today, we are still running the same protocol from those early days. 
This makes it more difficult to preserve confidentiality. It is important to note that when the developers created the protocol and the network was very small, there was an inherent sense of trust on who you could potentially be communicating with. This sense of trust is what we continue to fight from a security standpoint today. The concept from that early creation was and still is that you can trust data received to be from a reliable source. We know now that the Internet is at this huge size and that is definitely not the case. Integrity Integrity is similar to confidentiality, in that we are concerned with the compromising of information. Here, we are concerned with the accuracy of data and the fact that it is not modified in transit or from its original form. A common way of doing this is to use a hashing algorithm to validate that the file is unaltered. Availability One of the most difficult things to secure is availability, that is, the right to have a service when required. The irony of availability is that a particular resource is available to one user, and it is later available to all. Everything seems perfect from the perspective of an honest/legitimate user. However, not all users are honest/legitimate, and due to the sheer fact that resources are finite, they can be flooded or exhausted; hence, is it is more difficult to protect this area. Non-repudiation Non-repudiation makes the claim that a sender cannot deny sending something after the fact. This is the one I usually have the most trouble with, because a computer system can be compromised and we cannot guarantee that, within the software application, the keys we are using for the validation are actually the ones being used. Furthermore, the art of spoofing is not a new concept. With these facts in our minds, the claim that we can guarantee the origin of a transmission by a particular person from a particular computer is not entirely accurate. Since we do not know the state of the machine with respect to its secureness, it would be very difficult to prove this concept in a court of law. All it takes is one compromised machine, and then the theory that you can guarantee the sender goes out the window. We won't cover each of the components of security testing in detail here, because that is beyond the scope of what we are trying to achieve. The point I want to get across in this article is that security testing is the concept of looking at each and every one of these and other components of security, addressing them by determining the amount of risk an organization has from them, and then mitigating that risk. An abstract testing methodology As mentioned previously, we concentrate on a process and apply that to our security components when we go about security testing. For this, I'll describe an abstract methodology here. We will define our testing methodology as consisting of the following steps: Planning Nonintrusive target search Intrusive target search Data analysis Reporting Planning Planning is a crucial step of professional testing. But, unfortunately, it is one of the steps that is rarely given the time that is essentially required. There are a number of reasons for this, but the most common one is the budget: clients do not want to provide consultants days and days to plan their testing. In fact, planning is usually given a very small portion of the time in the contract due to this reason. Another important point about planning is that a potential adversary is going to spend a lot of time on it. 
There are two things we should tell clients with respect to this step that as a professional tester we cannot do but an attacker could: 6 to 9 months of planning: The reality is that a hacker who targets someone is going to spend a lot of time before the actual attack. We cannot expect our clients to pay us for 6 to 9 months of work just to search around and read on the Internet. Break the law: We could break the law and go to jail, but it is not something that is appealing for most. Additionally, being a certified hacker and licensed penetration tester, you are bound to an oath of ethics, and you can be pretty sure that breaking the law while testing is a violation of this code of ethics. Nonintrusive target search There are many names that you will hear for nonintrusive target search. Some of these are open source intelligence, public information search, and cyber intelligence. Regardless of which name you use, they all come down to the same thing: using public resources to extract information about the target or company you are researching. There is a plethora of tools that are available for this. We will briefly discuss those tools to get an idea of the concept, and those who are not familiar with them can try them out on their own. Nslookup The nslookup tool can be found as a standard program in the majority of the operating systems we encounter. It is a method of querying DNS servers to determine information about a potential target. It is very simple to use and provides a great deal of information. Open a command prompt on your machine, and enter nslookup www.packtpub.com. This will result in output like the following screenshot:  As you can see, the response to our command is the IP address of the DNS server for the www.packtpub.com domain. If we were testing this site, we would have explored this further. Alternatively, we may also use another great DNS-lookup tool called dig. For now, we will leave it alone and move to the next resource. Central Ops The https://centralops.net/co/ website has a number of tools that we can use to gather information about a potential target. There are tools for IP, domains, name servers, e-mail, and so on. The landing page for the site is shown in the next screenshot: The first thing we will look at in the tool is the ability to extract information from a web server header page: click on TcpQuery, and in the window that opens, enter www.packtpub.com and click on Go. An example of the output from this is shown in the following screenshot: As the screenshot shows, the web server banner has been modified and says packt. If we do the query against the www.packtpub.com domain we have determined that the site is using the Apache web server, and the version that is running; however we have much more work to do in order to gather enough information to target this site. The next thing we will look at is the capability to review the domain server information. This is accomplished by using the domain dossier. Return to the main page, and in the Domain Dossier dialog box, enter yahoo.com and click on go. An example of the output from this is shown in the following screenshot:  There are many tools we could look at, but again, we just want to briefly acquaint ourselves with tools for each area of our security testing procedure. 
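The lookups that nslookup and the domain dossier perform can also be scripted when you need to run them against a list of targets. The following Python sketch uses only the standard library; www.packtpub.com is simply the example target used above, the script needs network access to run, and it covers only basic forward and reverse lookups (for record types such as MX or NS, a dedicated tool such as dig or a DNS library is the better choice).

import socket

def resolve(hostname):
    # Forward lookup: returns the canonical name, any aliases (CNAMEs), and the IPv4 addresses.
    canonical, aliases, addresses = socket.gethostbyname_ex(hostname)
    return canonical, aliases, addresses

def reverse_lookup(ip_address):
    # Reverse (PTR) lookup; many hosts simply do not publish one.
    try:
        return socket.gethostbyaddr(ip_address)[0]
    except socket.herror:
        return None

if __name__ == "__main__":
    target = "www.packtpub.com"
    canonical, aliases, addresses = resolve(target)
    print("Canonical name:", canonical)
    print("Aliases:", ", ".join(aliases) if aliases else "none")
    for ip in addresses:
        print(ip, "->", reverse_lookup(ip) or "no PTR record")

As with the manual lookups, every name and address discovered this way goes into the target database for the later stages of the test.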
If you are using Windows and you open a command prompt window and enter tracert www.microsoft.com, you will observe that it fails, as indicated in this screenshot:

The majority of you reading this article probably know why this is blocked; for those of you who do not, it is because Microsoft has blocked the ICMP protocol, which is what the tracert command uses by default. It is simple to get past this, because the server is running services and we can use those protocols to reach it; in this case, that protocol is TCP. If you go to http://www.websitepulse.com/help/testtools.tcptraceroute-test.html, enter www.microsoft.com in the IP address/domain field with the default location, and conduct the TCP Traceroute test, you will see that it is now successful, as shown in the following screenshot:

As you can see, we now have additional information about the path to the potential target; moreover, we have additional machines to add to our target database as we conduct our test within the limits of the rules of engagement.

The Wayback Machine
The Wayback Machine is proof that nothing that has ever been on the Internet really goes away! There have been many assessments in which a client informed the team that the web server being tested had not yet been placed into production, and when they were shown that the site had already been copied and stored, they were amazed that this actually does happen. I like to use the site to download some of my favorite presentations, tools, and so on, that have been removed from a site or, in some cases, whose site no longer exists. As an example, one of the tools used to show students the concept of steganography is infostego. This tool was released by Antiy Labs, and it provided students with an easy-to-use way to understand the concepts. Well, if you go to their site at http://www.antiy.net/, you will find no mention of the tool; in fact, it will not be found on any of their pages. They now concentrate more on the antivirus market. A portion of their page is shown in the following screenshot:

Now, let's try to use the power of the Wayback Machine to find our software. Open the browser of your choice and go to www.archive.org. The Wayback Machine is hosted there and can be seen in the following screenshot:

As indicated, there were 491 billion pages archived at the time of writing this article. In the URL section, enter www.antiy.net and hit Enter. This will result in the site searching its archives for the entered URL. After a few moments, the results of the search will be displayed. An example of this is shown in the following screenshot:

We know we don't want to access a page that has been archived recently, so to be safe, click on 2008. This will result in a calendar being displayed, showing all the dates in 2008 on which the site was archived. You can select any one that you want; an example of the archived site from December 18 is shown in the following screenshot. As you can see, the infostego tool is available, and you can even download it! Feel free to download and experiment with the tool if you like.

Shodan
The Shodan site is one of the most powerful cloud scanners available. You are required to register with the site to be able to perform the more advanced types of queries. To access the site, go to https://www.shodan.io/. It is highly recommended that you register, since the power of the scanner and the information you can discover is quite impressive, especially after registration.
The page that is presented once you log in is shown in the following screenshot:

The screenshot shows recently shared search queries as well as the most recent searches the logged-in user has conducted. This is another tool you should explore deeply if you do professional security testing. For now, we will look at one example and move on, since an entire article could be written just on this tool. If you are logged in as a registered user, you can enter iphone us into the search query window. This will return results matching iphone, located mostly in the United States, though as with any tool, there will be some hits from other places as well. An example of the results of this search is shown in the following screenshot:

Intrusive target search
This is the step that starts the true hacker-type activity. This is when you probe and explore the target network; consequently, ensure that you have explicit written permission to carry out this activity. Never perform an intrusive target search without permission, as this written authorization is the only thing that differentiates you from a malicious hacker. Without it, you are considered a criminal just like them. Within this step, there are a number of components that further define the methodology.

Find live systems
No matter how good our skills are, we need to find systems that we can attack. This is accomplished by probing the network and looking for a response. One of the most popular tools for this is the excellent open source tool nmap, written by Fyodor. You can download nmap from https://nmap.org/, or you can use any number of toolkit distributions that include it. We will use the exceptional penetration-testing framework Kali Linux. You can download the distribution from https://www.kali.org/. Regardless of which version of nmap you explore with, they all have similar, if not the same, command syntax. In a terminal window, or a command prompt window if you are running it on Windows, type nmap -sP <insert network IP address>. The network we are scanning is the 192.168.4.0/24 network; yours will more than likely be different. An example of this ping sweep command is shown in the following screenshot:

We now have live systems on the network that we can investigate further. For those of you who would like a GUI tool, you can use Zenmap.

Discover open ports
Now that we have live systems, we want to see what is open on these machines. A good analogy for a port is a door: if the door is open, I can approach it. There might be things I have to do once I get to the door to gain access, but if it is open, I know it is possible to get in, and if it is closed, I know I cannot go through it. Furthermore, we might need to know the type of lock that is on the door, because it might have weaknesses or additional protection that we need to know about. The same goes for ports: if they are closed, we cannot get into that machine using that port. We have a number of ways to check for open ports, and we will continue with the same theme and use nmap. We have machines that we have identified, so we do not have to scan the entire network as we did previously; we will only scan the machines that are up. Additionally, one of the machines found is our own, so we will not scan ourselves; we could, but it is not the best plan. The targets that are live on our network are 1, 2, 16, and 18. We can scan them by entering nmap -sS 192.168.4.1,2,16,18.
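nmap is the right tool for this job, but the underlying idea of a connect check is easy to illustrate in a few lines of code. The following PHP sketch is an assumption-laden illustration only: the address comes from the lab network above, the port list is arbitrary, and you should only run something like this against hosts you are authorized to test. It simply attempts a full TCP connection to a handful of common ports:

    <?php
    // Target host and ports to check; placeholders for a lab machine you are authorized to test.
    $target = '192.168.4.1';
    $ports  = [21, 22, 23, 25, 80, 443, 445, 3389];

    foreach ($ports as $port) {
        // Attempt a full TCP connection with a 1-second timeout.
        $socket = @fsockopen($target, $port, $errno, $errstr, 1.0);

        if ($socket !== false) {
            echo "Port $port is open" . PHP_EOL;
            fclose($socket);
        } else {
            echo "Port $port is closed or filtered" . PHP_EOL;
        }
    }

Note that this is a full connect check, not the stealth (half-open) SYN scan that nmap -sS performs; crafting raw SYN packets requires lower-level access than a simple script like this provides.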
Those of you who want to learn more about the different types of scans can refer to http://nmap.org/book/man-port-scanning-techniques.html. Alternatively, you can use the nmap -h option to display a list of options. The first portion of the stealth scan (one that does not complete the three-way handshake) result is shown in the following screenshot:

Discover services
We now have live systems and the openings that are on each machine. The next step is to determine what, if anything, is running on the ports we have discovered; it is imperative that we identify what is running on the machine so that we can use it as we progress deeper into our methodology. We once again turn to nmap. In most command and terminal windows, there is history available; hopefully, this is the case for you and you can browse through it with the up and down arrow keys on your keyboard. For our network, we will enter nmap -sV 192.168.4.1. From our previous scan, we determined that the other machines have all of their scanned ports closed, so to save time, we won't scan them again. An example of this is shown in the following screenshot:

From the results, you can now see that we have additional information about the ports that are open on the target. We could use this information to search the Internet with some of the tools we covered earlier, or we could let a tool do it for us.

Enumeration
Enumeration is the process of extracting more information about the potential target, including the OS, usernames, machine names, and other details that we can discover. The latest release of nmap has a scripting engine that will attempt to discover a number of details and, in fact, enumerate the system to some extent. To perform the enumeration with nmap, use the -A option. Enter nmap -A 192.168.4.1. Remember that you will have to enter your respective target address, which might be different from the one mentioned here. Also, this scan will take some time to complete and will generate a lot of traffic on the network. If you want an update, you can get one at any time by pressing the spacebar. This command's output is quite extensive, so a truncated version is shown in the following screenshot:

As you can see, you now have a great deal of information about the target, and you are quite ready to start the next phase of testing. Additionally, we now have the OS correctly identified; until this step, we did not have that.

Identify vulnerabilities
Once we have processed the steps up to this point, we have information about the services and the versions of the software that are running on the machine. We could take each version and search the Internet for vulnerabilities, or we could use a tool; for our purposes, we will choose the latter. There are numerous vulnerability scanners out there in the market, and the one you select is largely a matter of personal preference. The commercial tools for the most part have a lot more information and detail than the free and open source ones, so you will have to experiment and see which one you prefer. We will be using the Nexpose vulnerability scanner from Rapid7. There is a community version of the tool that will scan only a limited number of targets, but it is worth looking into. You can download Nexpose from http://www.rapid7.com/. Once you have downloaded it, you will have to register, and you'll receive a key by e-mail to activate it. I will leave out the details of this and let you experience them on your own. Nexpose has a web interface, so once you have installed and started the tool, you have to access it.
You can access it by entering https://localhost:3780 in your browser. It seems to take an extraordinary amount of time to initialize, but eventually it will present you with a login page, as shown in the following screenshot:

The credentials required for login will have been created during the installation. Setting up a scan is quite an involved process, and since we are just outlining the process here and there is an excellent quick start guide available, we will move straight on to the results of the scan. We will have plenty of time to explore this area as the article progresses. The result of a typical scan is shown in the following screenshot:

As you can see, the target machine is in bad shape. One nice thing about Nexpose is that, since Rapid7 also owns Metasploit, it will list the vulnerabilities that have a known exploit within Metasploit.

Exploitation
This is the step of security testing that gets all the press, and it is, in simple terms, the process of validating a discovered vulnerability. It is important to note that it is not a 100-percent successful process: some vulnerabilities will not have exploits, and some will have exploits for a certain patch level of the OS but not for others. As I like to say, it is not an exact science and in reality is an infinitesimal part of professional security testing, but it is fun, so we will briefly look at the process. We also like to say in security testing that we have to validate and verify everything a tool reports to our client, and that is what we try to do with exploitation. The point is that you are executing a piece of code on a client's machine, and this code could cause damage. The most popular free tool for exploitation is the Rapid7-owned tool Metasploit. There are entire articles written on using the tool, so we will just look at the results of running it and exploiting a machine here. As a reminder, you have to have written permission to do this on any network other than your own; if in doubt, do not attempt it. Let's look at the options:

There is quite a bit of information in the options. The one point we will cover is the fact that we are using the exploit for the MS08-067 vulnerability, which is a vulnerability in the Windows Server service. It is one of the better ones to use, as it almost always works and you can exploit it over and over again. If you want to know more about this vulnerability, you can check it out here: http://technet.microsoft.com/en-us/security/bulletin/ms08-067. Since the options are set, we are ready to attempt the exploit, and as indicated in the following screenshot, we are successful and have gained a shell on the target machine. We will cover this process in more detail as we progress through the article. For now, we will stop here.

From here onward, only your imagination can limit you. The shell you have opened is running with SYSTEM privileges; therefore, it is the same as running a command prompt on any Windows machine with administrator rights: whatever you can do in that shell, you can also do in this one. You can also do a number of other things, which you will learn as we progress through the article. Furthermore, with system access, we can plant code as malware, a backdoor, or really anything we want. While we might not do that as professional testers, a malicious hacker could, and this would require additional analysis on the client's end to discover.

Data analysis
Data analysis is often overlooked, and it can be a time-consuming process. It is also the skill that takes the longest to develop.
Most testers can run tools and perform manual testing and exploitation, but the real challenge is taking all of the results and analyzing them. We will look at one example of this in the next screenshot. Take a moment to review the protocol analysis captured with the tool Wireshark; as an analyst, you need to know what the protocol analyzer is showing you. Do you know exactly what is happening? Do not worry, I will tell you after we have a look at the following screenshot:

You can observe that the machine with the IP address 192.168.3.10 is replying with an ICMP packet of type 3, code 13; in other words, the packet is being rejected because the communication is administratively filtered. This tells us that there is a router in place and that it has an access control list (ACL) that is blocking the packet. Moreover, it tells us that the administrator is not following the best practice of silently absorbing packets rather than replying with error messages that can assist an attacker. This is just a small example of the data analysis step; there are many things you will encounter and many more that you will have to analyze to determine what is taking place in the tested environment. Remember: the smarter the administrator, the more challenging pen testing can become, which is actually a good thing for security!

Reporting
Reporting is another one of the areas in testing that is often overlooked in training classes. This is unfortunate, since it is one of the most important things for you to master. You have to be able to present a report of your findings to the client. These findings will assist them in improving their security practices, and if they like the report, it is what they will most often share with partners and other colleagues. This is your advertisement for what separates you from others. It is a showcase demonstrating that you not only know how to follow a systematic process and methodology of professional testing, but also know how to put it into an output form that can serve as a reference for the client going forward. At the end of the day, as professional security testers, we want to help our clients improve their security, and that is where reporting comes in. There are many references for reports, so the only thing we will cover here is the handling of findings. There are two components we use when it comes to findings. The first is a summary-of-findings table, so that the client can reference the findings early on in the report. The second is the detailed findings section, where we put all of the information about each finding. We rate the findings according to severity and include the following:

Description
This is where we provide the description of the vulnerability; specifically, what it is and what is affected.

Analysis and exposure
In this part of the report, you want to show the client that you have done your research and aren't just repeating what the scanning tool told you. It is very important that you research a number of resources and write a good analysis of what the vulnerability is, along with an explanation of the exposure it poses to the client site.

Recommendations
We want to provide the client with a reference to the patches and measures to apply in order to mitigate the risk of discovered vulnerabilities. We never tell the client not to use the service and/or protocol! We do not know what their policy is, and it might be something they have to have in order to support their business.
In these situations, it is our job as consultants to recommend and help the client determine the best way to either mitigate the risk or remove it. When a patch is not available, we should provide a reference to potential workarounds until one becomes available.

References
If there are references, such as a Microsoft bulletin number or a Common Vulnerabilities and Exposures (CVE) number, this is where we place them.

Myths and misconceptions about pen testing
After more than 20 years of performing professional security testing, it still amazes me how many people are confused as to what a penetration test is. On many occasions, I have gone into a meeting where the client is convinced they want a penetration test, and when I explain exactly what it is, they look at me in shock. So, what exactly is a penetration test? Remember that our abstract methodology had a step for intrusive target searching, and part of that step was another methodology for scanning? Well, the last item in the scanning methodology, exploitation, is the step that is indicative of a penetration test. That's right! That one step is the validation of vulnerabilities, and this is what defines penetration testing. Again, it is not what most clients think of when they bring a team in. In reality, the majority of them want a vulnerability assessment. When you start explaining that you are going to run exploit code and all these really cool things on their systems and/or networks, they are usually quite surprised. The majority of the time, the client will want you to stop at the validation step. On some occasions, they will ask you to prove what you have found, and then you might get to demonstrate validation. I was once in a meeting with the IT department of a foreign country's stock market, and when I explained what we were about to do to validate vulnerabilities, the IT director's reaction was, "Those are my stock broker records, and if we lose them, we lose a lot of money!" Hence, we did not perform the validation step in that test.

Summary
In this article, we defined security testing as it relates to this article, and we identified an abstract methodology that consists of the following steps: planning, nonintrusive target search, intrusive target search, data analysis, and reporting. More importantly, we expanded the abstract model when it came to intrusive target searching, and we defined within it a methodology for scanning. This consisted of identifying live systems, looking at open ports, discovering services, enumeration, identifying vulnerabilities, and finally, exploitation. Furthermore, we discussed what a penetration test is: a validation of vulnerabilities, associated with one step in our scanning methodology. Unfortunately, most clients do not understand that validating vulnerabilities requires you to run code that could potentially damage a machine or, even worse, damage their data. Because of this, once they discover it, most clients ask that it not be part of the test. We created a baseline for what penetration testing is in this article, and we will use this definition throughout the remainder of this article. In the next article, we will discuss the process of choosing your virtual environment.

Resources for Article:

Further resources on this subject:
CISSP: Vulnerability and Penetration Testing for Access Control [article]
Web app penetration testing in Kali [article]
BackTrack 4: Security with Penetration Testing Methodology [article]

Mapping Requirements for a Modular Web Shop App

Packt
07 Sep 2016
11 min read
In this article by Branko Ajzele, author of the book Modular Programming with PHP 7, we look at how building a software application from the ground up requires diverse skills, as it involves more than just writing code. Writing down functional requirements and sketching out a wireframe are often among the first steps in the process, especially if we are working on a client project. These steps are usually done by roles other than the developer, as they require a certain insight into the client's business case, user behavior, and the like.

Being part of a larger development team means that we, as developers, usually get requirements, designs, and wireframes, and then start coding against them. Delivering projects by oneself makes it tempting to skip these steps and get started with code alone. More often than not, this is an unproductive approach. Laying down functional requirements and a few wireframes is a skill worth knowing and practicing, even if one is just a developer.

(For more resources related to this topic, see here.)

Later in this article, we will go over high-level application requirements, alongside a rough wireframe. In this article, we will be covering the following topics:

Defining application requirements
Wireframing
Defining a technology stack

Defining application requirements
We need to build a simple, but responsive, web shop application. In order to do so, we need to lay out some basic requirements. The types of requirements we are interested in at the moment are those that touch upon interactions between a user and the system. The two most common techniques for specifying requirements around user interaction are use cases and user stories. User stories are a less formal, yet descriptive enough, way to outline these requirements. Using user stories, we encapsulate the customer and store manager actions as listed here.

A customer should be able to do the following:

Browse through static info pages (about us, customer service)
Reach out to the store owner via a contact form
Browse the shop categories
See product details (price, description)
See the product image with a large view (zoom)
See items on sale
See best sellers
Add the product to the shopping cart
Create a customer account
Update customer account info
Retrieve a lost password
Check out
See the total order cost
Choose among several payment methods
Choose among several shipment methods
Get an email notification after an order has been placed
Check order status
Cancel an order
See order history

A store manager should be able to do the following:

Create a product (with the minimum following attributes: title, price, sku, url-key, description, qty, category, and image)
Upload a picture to the product
Update and delete a product
Create a category (with the minimum following attributes: title, url-key, description, and image)
Upload a picture to a category
Update and delete a category
Be notified if a new sales order has been created
Be notified if a new sales order has been canceled
See existing sales orders by their statuses
Update the status of the order
Disable a customer account
Delete a customer account

User stories are a convenient, high-level way of writing down application requirements, and they are especially useful in an agile mode of development.

Wireframing
With user stories laid out, let's shift our focus to actual wireframing. For reasons we will get into later on, our wireframing efforts will be focused on the customer perspective. There are numerous wireframing tools out there, both free and commercial.
Some commercial tools, such as https://ninjamock.com, which we will use for our examples, still provide a free plan. This can be very handy for personal projects, as it saves us a lot of time.

The starting point of every web application is its home page. The following wireframe illustrates our web shop app's home page:

Here we can see a few sections determining the page structure. The header is comprised of a logo, a category menu, and a user menu. The requirements don't say anything about category structure, and we are building a simple web shop app, so we are going to stick to a flat category structure, without any sub-categories. The user menu will initially show Register and Login links, until the user is actually logged in, in which case the menu will change as shown in the following wireframes. The content area is filled with best sellers and on-sale items, each of which has an image, title, price, and Add to Cart button defined. The footer area contains links to mostly static content pages and a Contact Us page.

The following wireframe illustrates our web shop app's category page:

The header and footer areas remain conceptually the same across the entire site. The content area has now changed to list products within any given category. Individual product areas are rendered in the same manner as on the home page. Category names and images are rendered above the product list. The width of a category image gives some hints as to what type of images we should be preparing and uploading onto our categories.

The following wireframe illustrates our web shop app's product page:

The content area here now changes to list individual product information. We can see a large image placeholder, title, sku, stock status, price, quantity field, Add to Cart button, and product description being rendered. The IN STOCK message is to be displayed when an item is available for purchase, and OUT OF STOCK when an item is no longer available. This is tied to the product quantity attribute. We also need to keep in mind the "See the product image with a large view (zoom)" requirement, where clicking on an image would zoom into it.

The following wireframe illustrates our web shop app's register page:

The content area here now changes to render a registration form. There are many ways we could implement the registration system. More often than not, only a minimal amount of information is asked for on a registration screen, as we want to get the user in as quickly as possible. However, let's proceed as if we are trying to get more complete user information right here on the registration screen. We ask not just for an e-mail and password, but for the entire address information as well.

The following wireframe illustrates our web shop app's login page:

The content area here now changes to render a customer login and forgotten password form. We provide the user with Email and Password fields in case of login, or just an Email field in case of a password reset action.

The following wireframe illustrates our web shop app's customer account page:

The content area here now changes to render the customer account area, visible only to logged-in customers. Here we see a screen with two main pieces of information: the customer information being one, and the order history being the other. The customer can change their e-mail, password, and other address information from this screen. Furthermore, the customer can view, cancel, and print all of their previous orders. The My Orders table lists orders from top to bottom, newest to oldest.
Though not specified by the user stories, order cancelation should work only on pending orders. This is something that we will touch upon in more detail later on. This is also the first screen that shows the state of the user menu when the user is logged in. We can see a dropdown showing the user's full name, a My Account link, and a Sign Out link. Right next to it, we have the Cart (%s) link, which lists the exact quantity of items in the cart.

The following wireframe illustrates our web shop app's checkout cart page:

The content area here now changes to render the cart in its current state. If the customer has added any products to the cart, they are listed here. Each item should list the product title, individual price, quantity added, and subtotal. The customer should be able to change quantities and press the Update Cart button to update the state of the cart. If 0 is provided as the quantity, clicking the Update Cart button will remove that item from the cart. The header menu Cart (%s) link should at all times reflect the quantities in the cart. The right-hand side of the screen shows a quick summary of the current order total value, alongside a big, clear Go to Checkout button.

The following wireframe illustrates our web shop app's checkout cart shipping page:

The content area here now changes to render the first step of the checkout process, the shipping information collection. This screen should not be accessible to non-logged-in customers. The customer can provide their address details here, alongside a shipping method selection. The shipping method area lists several shipping methods. On the right-hand side, the collapsible order summary section is shown, listing the current items in the cart. Below it, we have the cart subtotal value and a big, clear Next button. The Next button should trigger only when all of the required information is provided, in which case it takes us to payment information on the checkout cart payment page.

The following wireframe illustrates our web shop app's checkout cart payment page:

The content area here now changes to render the second step of the checkout process, the payment information collection. This screen should not be accessible to non-logged-in customers. The customer is presented with a list of available payment methods. For the simplicity of the application, we will focus only on flat/fixed payments, nothing robust such as PayPal or Stripe. On the right-hand side of the screen, we can see a collapsible Order summary section, listing the current items in the cart. Below it, we have the order totals section, individually listing Cart Subtotal, Standard Delivery, Order Total, and a big, clear Place Order button. The Place Order button should trigger only when all of the required information is provided, in which case it takes us to the checkout success page.

The following wireframe illustrates our web shop app's checkout success page:

The content area here now changes to output the checkout successful message. Clearly, this page is only visible to logged-in customers that have just finished the checkout process. The order number is clickable and links to the My Account area, focusing on the exact order. Upon reaching this screen, both the customer and the store manager should receive a notification e-mail, as per the "Get an email notification after an order has been placed" and "Be notified if a new sales order has been created" requirements. With this, we conclude our customer-facing wireframes.
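Before turning to the store manager side of things, the cart behavior described above (per-item subtotals, quantity updates, removal of items whose quantity is set to 0, and the header counter) is simple enough to sketch in plain PHP. The class and method names below are purely illustrative assumptions, not part of any framework we will use later:

    <?php
    // A minimal, framework-agnostic sketch of the cart behavior described above.
    class Cart
    {
        /** @var array<string, array{title: string, price: float, qty: int}> */
        private $items = [];

        public function addItem(string $sku, string $title, float $price, int $qty): void
        {
            if (isset($this->items[$sku])) {
                $this->items[$sku]['qty'] += $qty;
            } else {
                $this->items[$sku] = ['title' => $title, 'price' => $price, 'qty' => $qty];
            }
        }

        // Mirrors the Update Cart button: a quantity of 0 removes the item.
        public function updateQty(string $sku, int $qty): void
        {
            if ($qty <= 0) {
                unset($this->items[$sku]);
            } elseif (isset($this->items[$sku])) {
                $this->items[$sku]['qty'] = $qty;
            }
        }

        // Per-item subtotal, as shown in each cart row.
        public function itemSubtotal(string $sku): float
        {
            if (!isset($this->items[$sku])) {
                return 0.0;
            }
            return $this->items[$sku]['price'] * $this->items[$sku]['qty'];
        }

        // Cart subtotal, as shown next to the Go to Checkout button.
        public function subtotal(): float
        {
            $total = 0.0;
            foreach (array_keys($this->items) as $sku) {
                $total += $this->itemSubtotal($sku);
            }
            return $total;
        }

        // Total item count, used for the header Cart (%s) link.
        public function totalQty(): int
        {
            return (int) array_sum(array_column($this->items, 'qty'));
        }
    }

    $cart = new Cart();
    $cart->addItem('SKU-001', 'Sample product', 9.99, 2);
    $cart->updateQty('SKU-001', 0); // removes the item, as the wireframe describes

In the actual application, this logic will live inside the framework's entities and services rather than a standalone class, but the expected behavior stays the same.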
In regards to the store manager user story requirements, we will simply define a landing administration interface for now, as shown in the following screenshot:

Using the framework later on, we will get a complete auto-generated CRUD interface for the various Add New and List & Manage links. Access to this interface and its links will be controlled by the framework's security component, since this user will not be a customer or any such user in the database.

Defining a technology stack
Once the requirements and wireframes are set, we can focus our attention on the selection of a technology stack. Choosing the right one in this case is more a matter of preference, as the application requirements can, for the most part, easily be met by any of the major PHP frameworks. Our choice, however, falls on Symfony. Aside from a PHP framework, we still need a CSS framework to deliver some structure, styling, and responsiveness within the browser on the client side. Since the focus of this book is on PHP technologies, let's just say we choose the Foundation CSS framework for that task.

Summary
Creating web applications can be a tedious and time-consuming task. Web shops are probably among the most robust and intensive types of application out there, as they encompass a great number of features. There are many components involved in delivering the final product, from the database and server-side (PHP) code to the client-side (HTML, CSS, and JavaScript) code. In this article, we started off by defining some basic user stories, which in turn defined the high-level application requirements for our small web shop. Adding wireframes to the mix helped us visualize the customer-facing interface, while the store manager interface is to be provided out of the box by the framework. We further glossed over two of the most popular frameworks that support modular application design, and turned our attention to Symfony as the server-side technology and Foundation as the client-side responsive framework.

Resources for Article:

Further resources on this subject:
Running Simpletest and PHPUnit [article]
Understanding PHP basics [article]
PHP Magic Features [article]