Active Directory Domain Services 2016

Packt
09 May 2017
23 min read
In this article by Dishan Francis, the author of the book Mastering Active Directory, we will look at AD DS 2016 features, privileged access management, and time-based group memberships. Microsoft released Active Directory Domain Services 2016 at a very interesting time in technology. Identity infrastructure requirements for enterprises are challenging today: most companies use cloud services (Software as a Service, or SaaS) for their operations, and many have moved infrastructure workloads to public clouds.

(For more resources related to this topic, see here.)

AD DS 2016 features

Active Directory Domain Services (AD DS) improvements are bound to the forest and domain functional levels. Upgrading the operating system or adding domain controllers that run Windows Server 2016 to an existing AD infrastructure does not upgrade the forest and domain functional levels. In order to use or test the new AD DS 2016 features, the forest and domain functional levels must be set to Windows Server 2016. The highest forest and domain functional levels you can run in your identity infrastructure depend on the lowest domain controller version still running. For example, if you have a Windows Server 2008 domain controller in your infrastructure, then even after you add a Windows Server 2016 domain controller, the domain and forest functional levels must remain at Windows Server 2008 until the last Windows Server 2008 domain controller is demoted.

Privileged access management

Privileged access management (PAM) has been one of the most discussed identity management topics in presentations, tech shows, IT forums, user groups, blogs, and meetings over the last few years (since 2014). It became a trending topic especially after the Windows Server 2016 previews were released. Over the last year, I have travelled to many countries and cities and been involved in many presentations and discussions about PAM.

First of all, PAM is not a feature you can enable with a few clicks. It is a combination of technologies and methodologies that come together to form a workflow, or, in other words, a way of living for administrators. AD DS 2016 includes features and capabilities that support PAM in an infrastructure, but it is not the only piece. This is one of the greatest challenges I see with this new way of thinking and working: replacing a product is easy, but changing a process is far more complicated and challenging.

I started my career with one of the largest North American hosting companies around 2003. I was a system administrator at the time, and one of my tasks was to identify hacking attempts and prevent workloads from being compromised. To do that, I had to review a lot of logs on different systems. Around that time, most attacks from individuals or groups were about defacing websites to prove that they could be hacked, and average hacking attempts per server were around 20 to 50 per day. Some colocation customers were even running their websites and workloads without any protection (even though that was not recommended). But as the years went by, the number of attempts increased dramatically, and we started to talk about hundreds of thousands of attempts per day. The following graph is taken from the latest Symantec Internet Security Threat Report (2016), and it confirms that the number of web-based attacks increased by more than 117% from 2014.

Web attacks blocked per month (Source: Symantec Internet Security Threat Report, 2016)

It has not only changed the numbers; it has also changed the purpose of attacks.
As I said, in the earlier days it was script kiddies who were after fame. Later, as users started to use more and more online services, the purpose of attacks shifted to financial gain. Attackers began to focus on websites that store credit card information. Over the last 10 years, I have had to change my credit card four times because my card details were exposed along with the websites I had used them on. These types of attacks are still happening in the industry.

When we consider the types of threats after the year 2012, most things changed again. Instead of fame or financial gain, attackers started to target identities. In earlier days, the data about a person was kept in different formats. For example, when I walked into my medical center 15 years ago, before I could see the doctor the administration staff had to go and find the file with my name on it. They had racks filled with files and papers containing patient records, treatment history, test reports, and so on. Now things have changed: when I walk in, nobody in administration needs to worry about the file, because the doctor can see all my records on his computer screen with a few clicks. The data has been transformed into digital format, and more and more data about people is going digital. In that health system, I become an identity, and my identity is attached to the data and also to certain privileges.

Think about your bank's online banking system. You have your own username and password to type in when you log in to the portal, so you have your own identity in the bank's system. Once you log in, you can access all your accounts, transfer money, and make payments: the bank has granted certain privileges to your identity. With your privileges, you cannot look into your neighbor's bank account, but your bank manager can view your account and your neighbor's account too. That means the privileges attached to the bank manager's identity are different. The amount of data that can be retrieved from a system depends on the identity's privileges.

Not only that, some of these identities are integrated with different systems. Businesses use different systems in their operations; it can be an email system, a CMS, or a billing system, and each of these systems holds data. To make operations smooth, these systems are integrated with one identity infrastructure and provide a single sign-on experience instead of requiring different identities for each and every application. This makes identities more and more powerful within any system. For an attacker, what is worth more: to focus on one system, or to target an identity that is attached to data and privileges across many different systems? Which one can do more damage? If the targeted identity has highly privileged access to those systems, it is a total disaster.

Is it all about usernames, passwords, or admin accounts? No, it's not; identities can do more damage than that. Usernames and passwords just make it easier. Think about the recent world-famous cyber attacks. Back in July 2015, a group called The Impact Team threatened to expose the user account information of the Ashley Madison dating site if its parent company, Avid Life Media, did not shut down the Ashley Madison and Established Men websites completely. In the Ashley Madison hack, was it the financial value that made it so dangerous? It was the identities that damaged people's lives; it was enough simply to expose the names to humiliate someone.
It ruined families, and children lost their parents' love and care. It proves that it is not only about the permissions attached to an identity; individual identities themselves matter in the modern big data phenomenon. It has only been a few months since the US presidential election, and by now we can see how much news a single tweet can make. No special privileges were needed to send that tweet; it was the identity behind it that made the tweet important. On the other hand, if that Twitter account were hacked and someone tweeted something fake on behalf of the actual owner, what kind of damage could it do to the whole world? To do that, would an attacker need to hack Jack Dorsey's account? The value of that individual identity is more powerful than Twitter's CEO.

According to the latest reports, the majority of information exposed by identity attacks consists of people's names, addresses, medical records, and government identity numbers. (Source: Symantec Internet Security Threat Report, 2016)

Attacks targeting identities are rising day by day. The following graph shows the number of identities exposed compared to the number of incidents. (Source: Symantec Internet Security Threat Report, 2016) In December 2015, there were only 11 incidents, yet 195 million identities were exposed. It shows how much damage these types of attacks can do.

Every time this kind of attack happens, the most common answers from engineers are "those attacks were so sophisticated", "it was too complex to identify", "they were so clever", or "it was a zero-day attack". Is that really true? Zero-day attacks are based on system bugs and errors that are unknown to the vendor. The latest reports show that the average exposure window is less than 7 days, with a patch released within 1 day. (Source: Symantec Internet Security Threat Report, 2016)

The Microsoft Security Intelligence Report Volume 21 (January through June 2016) contains a figure that explains the complexity of disclosed vulnerabilities. It clearly shows that the majority of vulnerabilities are of low complexity to exploit; high-complexity vulnerabilities still account for less than 5% of total vulnerability disclosures. It proves that attackers are still going after the low-hanging fruit. (Source: Microsoft Security Intelligence Report Volume 21, January through June 2016)

Microsoft Active Directory is the leading identity infrastructure solution. With all the constant news about identity breaches, the Active Directory name appears as well, and people start to ask why Microsoft can't fix it. But if you analyse these problems, it is obvious that simply providing a technology-rich product is not enough to solve them. With every new server operating system version, Microsoft releases a new Active Directory version, and every time it contains new features to improve identity infrastructure security. Yet when I go into Active Directory deployment projects, I see the majority of engineers not even following the security best practices defined for the Active Directory version released ten years earlier.

Think about a car race. Its categories are usually based on engine power: 1800 cc, 2000 cc, or more. In a race, most of the time it is the same models from the same manufacturers. If it is the same manufacturer and the same engine capacity, how does one car win and another lose? It is the tuning and the driving skills that decide the winner and the loser.
If Active Directory Domain Services 2016 could fix every identity threat, that would be wonderful, but simply shipping a product or technology has not worked so far. That is why we need to change the way we think about identity infrastructure security. We should not forget that we are fighting human adversaries; the tactics, methods, and approaches they use change every day. The products we use do not get such frequent updates, but we can reduce an adversary's ability to execute an attack on the infrastructure by understanding the fundamentals and using products, technologies, and workflows to prevent it.

Before we move on to identity theft prevention mechanisms, let's look at a typical identity infrastructure attack. The Microsoft tiered administration model is based on three tiers. These identity attacks all start by gaining some kind of access to the identity infrastructure and then moving laterally until the attacker has the keys to the kingdom: domain admin or enterprise admin credentials. At that point, they have full ownership of the entire identity infrastructure.

As the preceding diagram shows, the first step of an identity attack is to get some kind of access to the system. Attackers do not target a domain admin or enterprise admin account first; getting access to a typical user account is much easier. All they need is some kind of beachhead, and the most common technique for that is still a phishing email. Typically, someone will still fall for it and click.

Now the attacker has some sort of access to your identity infrastructure, and the next step is to move laterally to gain more privileges. How many of you have completely eliminated local administrator accounts in your infrastructure? I'm sure the answer is almost none. Users frequently ask for software installations and system-level modifications on their machines, and most of the time engineers end up assigning local administrator privileges. If the compromised account happens to be a local administrator, it becomes extremely easy to move to the next level. If not, the attacker will make the system misbehave. Then who comes to the rescue? The super-powered IT help desk. In many organizations, IT help desk engineers are domain administrators, or at least local administrators on the systems. Once they receive the call about a misbehaving computer, they RDP in or log in locally using a privileged account. Those privileged credentials are then exposed on the compromised machine, and if the attacker is running a password harvesting tool there, it is extremely easy to capture them. You may wonder how a compromised account that is just a typical user account can execute such programs, but Windows operating systems do not prevent users from running applications in their own user context. A standard user cannot change system-level settings, but they can still run scripts or user-level executables.

Once attackers gain access to some identity in the organization, the next level of privileges to own is Tier 1. This is where the application administrators, data administrators, and SaaS application administrators live. In today's infrastructures we have too many administrators: domain admins and enterprise admins, then local administrators, and then the different applications running on the infrastructure with their own administrators, such as Exchange administrators, SQL administrators, and SharePoint administrators.
Other third-party applications, such as a CMS or a billing portal, may have their own administrators, and if you are using cloud services and SaaS applications, they come with yet another set of administrators. Are we really aware of the activity happening on these accounts? Engineers mostly worry about protecting domain admin accounts while forgetting about the other kinds of administrators in the infrastructure, and some of those administrator roles can do more damage to a business than a domain admin. These applications and services decentralize management in the organization. In order to move laterally with privileges, attackers only need to get into a machine or server where these administrators log in.

The Local Security Authority Subsystem Service (LSASS) stores credentials in memory for active Windows sessions, so that users do not have to enter credentials for each and every service they access. It also stores Kerberos tickets. This allows attackers to perform a pass-the-hash attack and retrieve locally stored credentials, and decentralized management of admin accounts makes this process easier. There are features and security best practices that can be used to prevent pass-the-hash attacks in an identity infrastructure.

Another problem with these types of accounts is that once they become service admin accounts, they eventually become domain admin or enterprise admin accounts. I have seen engineers create service accounts and, when they cannot figure out the exact permissions required for the program, simply add the account to the Domain Admins group as an easy fix. It is not only an infrastructure attack that can expose such credentials: service admin accounts are attached to applications too, so a compromise of the application can also expose those identities. In such a scenario, it becomes easier for attackers to gain the keys to the kingdom.

Tier 0 is where the domain admins and enterprise admins operate. This is the ultimate goal of an identity infrastructure attack: once attackers obtain access to Tier 0, they own your entire identity infrastructure. The latest reports show that, once there is an initial breach, it takes less than 48 hours to gain Tier 0 privileges. According to the same reports, once attackers gain that access it takes 7 to 8 months, at a minimum, to identify the breach, because with the highest privileges they can create backdoors, clean up logs, and hide for as long as they need.

The systems we use always treat administrators as trustworthy people. That is no longer a valid assumption in the modern world. How many times do you check system logs to see what your domain admins are doing? Even though engineers look at the logs of other users, the majority rarely check the domain admin accounts. The same applies to internal security breaches: as I said, most people are good, but you never know, and most of the world-famous identity attacks have proved that already.

When I discuss identity infrastructure security with engineers and customers, the following are the comments I hear most often:

"We have too many administrator accounts."
"We do not know how many administrator accounts we have."
"We have fast-changing IT teams, so it is hard to manage permissions."
"We do not have visibility over administrator account activities."
"If there is an identity infrastructure breach or attempt, how do we identify it?"

The answer to all of these is PAM. As I said at the beginning, this is not one product; it is a workflow and a new way of working.
The main components of this process are as follows:

- Apply pass-the-hash prevention features to the existing identity infrastructure.
- Install Microsoft Advanced Threat Analytics to monitor domain controller traffic and identify potential identity infrastructure threats in real time.
- Install and configure Microsoft Identity Manager 2016; this product manages privileged access in the existing Active Directory forest by providing task-based, time-limited privileged access.

What does this have to do with AD DS 2016? AD DS 2016 now allows time-based group membership, which makes this whole process possible. Users are added to groups with a TTL value, and once it expires, the user is removed from the group automatically. For example, let's assume your CRM application has administrator rights assigned to the CRM Admin security group, and the users in this group only log into the system once a month to do some maintenance. The admin rights for the members of that group nevertheless remain in place for the other 29 days, 24x7, which gives attackers plenty of opportunity to try to gain access to those privileged accounts. If those admin rights can instead be limited to the day they are needed, isn't that more useful? Then we know that for the majority of days in the month, the CRM application is not at risk of being compromised through an account in the CRM Admin group.

What is the logic behind PAM? The PAM product is built on the Just-in-Time (JIT) administration concept. Back in 2014, Microsoft released a PowerShell toolkit that enables Just Enough Administration (JEA). Let's assume you are running a web server in your infrastructure. As part of operations, every month you need to collect some logs to produce a report, and you have already set up a PowerShell script for it. Someone on your team needs to log into the system and run it, and doing so requires administrative privileges. Using JEA, it is possible to assign the user just the permissions required to run that particular task. That way, the user does not need to be added to the Domain Admins group, is not allowed to run any other program with the assigned permissions, and the permissions do not apply to any other computer. JIT administration is bound to time: users hold the required privileges only when they need them, rather than holding privileged access rights all the time.

PAM operations can be divided into four major steps (Source: https://docs.microsoft.com/en-gb/microsoft-identity-manager/pam/privileged-identity-management-for-active-directory-domain-services):

Prepare: The first step is to identify the privileged access groups in your existing Active Directory forest and start removing users from them. You may also need to make certain changes in your application infrastructure to support this setup; for example, if you have assigned privileged access to user accounts instead of security groups (in applications or services), that will need to change. The next step is to set up equivalent groups, without any members, in a bastion forest. When you set up MIM, it uses a bastion forest to manage privileged access in the existing Active Directory forest. This is a special forest that cannot be used for other infrastructure operations, and it must run at a minimum of the Windows Server 2012 R2 Active Directory forest functional level. When an identity infrastructure is compromised and attackers gain access to Tier 0, they can hide their activities for months or years. How can we be sure our existing identity infrastructure is not already compromised?
If we implemented this in the same forest, it would not achieve its core goals. Also, domain upgrades are painful; they need time and budget. Because of the bastion forest, this solution can be applied to your existing identity infrastructure with minimal changes.

Protect: The next step is to set up a workflow for authentication and authorization: define how users can request privileged access when they need it. This can be done via the MIM portal or an existing support portal (integrated with the MIM REST API). It is possible to configure the system to require multi-factor authentication (MFA) during the request process to prevent unauthorized activity. It is also important to define how requests will be handled: it can be an automatic or a manual approval process.

Operate: Once a privileged access request is approved, the user account is added to the security group in the bastion forest. The group itself has a SID value, and in both forests the group has exactly the same SID, so the application or service sees no difference between the two groups in the two different forests. Once the permission is granted, it is only valid for the time defined by the authorization policy; when the time limit is reached, the user account is removed from the security group automatically.

Monitor: PAM provides visibility over privileged access requests. Every request and event is recorded, and it is possible to review them and generate reports for audit purposes. This helps to fine-tune the process and also to identify potential threats.

Let's see how it really works. REBELADMIN CORP. uses a CRM system for its operations. The application has an administrator role, with the REBELADMIN/CRMAdmins security group assigned to it; any member of that group has administrator privileges in the application. Recently, PAM was introduced to REBELADMIN CORP. As an engineer, I have identified REBELADMIN/CRMAdmins as a privileged group and am going to protect it using PAM. The first step is to remove the members of the REBELADMIN/CRMAdmins group. After that, I set up the same group in the bastion forest; not only is the name the same, but both groups have the same SID value, 1984.

The user Dennis used to be a member of the REBELADMIN/CRMAdmins group and was running a monthly report. At the end of the month he tries to run it and finds that he no longer has the required permissions. His next step is to request the required permission via the MIM portal. According to the policies, the system asks Dennis to use MFA as part of the request. Once Dennis verifies the PIN, the request is logged in the portal. As the administrator, I receive an alert about the request and log into the system to review it. It is a legitimate request, and I approve his access to the system for 8 hours. The system then automatically adds Dennis's user account to the BASTION/CRMAdmins group. This group has the same SID value as the production group, so members of BASTION/CRMAdmins are treated as administrators by the CRM application. The group membership also carries a TTL value: 8 hours after approval, Dennis's account is automatically removed from BASTION/CRMAdmins. In this process, we never added a member to the production security group, REBELADMIN/CRMAdmins, so the production forest stays untouched and protected.

The most important thing to understand here is that the legacy approach to identity protection is no longer valid. We are up against human adversaries.
Identity is our new perimeter in the infrastructure, and to protect it we need to understand how adversaries operate and stay a step ahead. The new PAM capabilities with AD DS 2016 are a new approach in the right direction.

Time-based group memberships

Time-based group membership is part of that broader topic. It allows administrators to assign temporary group memberships, expressed by a Time-to-Live (TTL) value that is added to the Kerberos ticket. This is also called the Expiring Links feature. When a user is assigned a temporary group membership, the lifetime of his Kerberos ticket-granting ticket (TGT) at login will be equal to the lowest TTL value he holds. For example, assume you grant user A a temporary membership of the Domain Admins group that is valid for only 60 minutes, but the user logs in 50 minutes after the original assignment, with only 10 minutes of membership left. Based on that, the domain controller will issue user A a TGT valid for only 10 minutes.

This feature is not enabled by default, because to use it the forest functional level must be Windows Server 2016. Also, once the feature is enabled, it cannot be disabled.

Let's see how it works in the real world. I have a domain controller installed and running with the Windows Server 2016 forest functional level. This can be verified using the following PowerShell command:

Get-ADForest | fl Name,ForestMode

Then we need to enable the Expiring Links feature, which can be done using the following command (replace rebeladmin.com with your own FQDN):

Enable-ADOptionalFeature 'Privileged Access Management Feature' -Scope ForestOrConfigurationSet -Target rebeladmin.com

I have a user called Adam Curtiss to whom I need to assign Domain Admins group membership for 60 minutes. The following command lists the current members of the Domain Admins group:

Get-ADGroupMember "Domain Admins"

The next step is to add the user Adam Curtiss to the Domain Admins group for 60 minutes:

Add-ADGroupMember -Identity 'Domain Admins' -Members 'acurtiss' -MemberTimeToLive (New-TimeSpan -Minutes 60)

Once it has run, we can verify the TTL value remaining for the group membership using the following command:

Get-ADGroup 'Domain Admins' -Property member -ShowMemberTimeToLive

When I log in as the user and list the Kerberos tickets, the renewal time shows less than 60 minutes, since I logged in a few minutes after the membership was granted. Once the TGT renewal comes around, the user will no longer be a member of the Domain Admins group.

Summary

In this article we looked at the new features and enhancements that come with AD DS 2016. One of the biggest improvements is Microsoft's new approach towards PAM. This is not just a feature that can be enabled in AD DS; it is part of a broader solution. It helps protect identity infrastructures from adversaries, because traditional techniques and technologies are no longer enough against rising threats.

Resources for Article: Further resources on this subject: Deploying and Synchronizing Azure Active Directory [article] How to Recover from an Active Directory Failure [article] Active Directory migration [article]
Introduction to Titanic Datasets

Packt
09 May 2017
11 min read
In this article by Alexis Perrier, author of the book Effective Amazon Machine Learning, we see that artificial intelligence and big data have become a ubiquitous part of our everyday lives, and that cloud-based machine learning services are part of a rising billion-dollar industry. Among the several such services currently available on the market, Amazon Machine Learning stands out for its simplicity. Amazon Machine Learning was launched in April 2015 with the clear goal of lowering the barrier to predictive analytics by offering a service accessible to companies without highly skilled technical resources.

(For more resources related to this topic, see here.)

Working with datasets

You cannot do predictive analytics without a dataset. Although we are surrounded by data, finding datasets that are adapted to predictive analytics is not always straightforward. In this section, we present some resources that are freely available. The Titanic dataset is a classic introductory dataset for predictive analytics.

Finding open datasets

There is a multitude of dataset repositories available online, from local and global public institutions to non-profits and data-focused start-ups. Here is a small list of open dataset resources that are well suited for predictive analytics. It is by no means exhaustive; this thread on Quora points to many other interesting data sources: https://www.quora.com/Where-can-I-find-large-datasets-open-to-the-public. You can also ask for specific datasets on Reddit at https://www.reddit.com/r/datasets/.

- The UCI Machine Learning Repository is a collection of datasets maintained by UC Irvine since 1987, hosting over 300 datasets related to classification, clustering, regression, and other ML tasks.
- Mldata.org from the University of Berlin, the Stanford Large Network Dataset Collection, and other major universities also offer great collections of open datasets.
- Kdnuggets.com has an extensive list of open datasets at http://www.kdnuggets.com/datasets.
- Data.gov and other US government agencies; data.UN.org and other UN agencies.
- AWS offers open datasets via partners at https://aws.amazon.com/government-education/open-data/.
- The following startups are data centered and give open access to rich data repositories: Quandl and Quantopian for financial datasets; Datahub.io, Enigma.com, and Data.world as dataset-sharing sites; Datamarket.com for time series datasets.
- Kaggle.com, the data science competition website, hosts over 100 very interesting datasets.
- AWS public datasets: AWS hosts a variety of public datasets, such as the Million Song Dataset, the mapping of the Human Genome, and the US Census data, as well as many others in astronomy, biology, math, economics, and so on. These datasets are mostly available via EBS snapshots, although some are directly accessible on S3. The datasets are large, from a few gigabytes to several terabytes, and are not meant to be downloaded to your local machine; they are meant to be accessed via an EC2 instance (take a look at http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-public-data-sets.html for further details). AWS public datasets are accessible at https://aws.amazon.com/public-datasets/.

Introducing the Titanic dataset

We will use the classic Titanic dataset. The data consists of demographic and traveling information for 1,309 of the Titanic passengers, and the goal is to predict the survival of these passengers.
The full Titanic dataset is available from the Department of Biostatistics at the Vanderbilt University School of Medicine (http://biostat.mc.vanderbilt.edu/wiki/pub/Main/DataSets/titanic3.csv) in several formats. The Encyclopedia Titanica website (https://www.encyclopedia-titanica.org/) is the reference website regarding the Titanic; it contains all the facts, history, and data surrounding the ship, including a full list of passengers and crew members. The Titanic dataset is also the subject of the introductory competition on Kaggle.com (https://www.kaggle.com/c/titanic, requires opening an account with Kaggle). You can also find a csv version in the GitHub repository at https://github.com/alexperrier/packt-aml/blob/master/ch4.

The Titanic data contains a mix of textual, Boolean, continuous, and categorical variables. It exhibits interesting characteristics such as missing values, outliers, and text variables ripe for text mining: a rich dataset that will allow us to demonstrate data transformations. Here is a brief summary of the 14 attributes:

- pclass: Passenger class (1 = 1st; 2 = 2nd; 3 = 3rd)
- survival: A Boolean indicating whether the passenger survived or not (0 = No; 1 = Yes); this is our target
- name: A field rich in information, as it contains title and family names
- sex: male/female
- age: Age; a significant portion of values are missing
- sibsp: Number of siblings/spouses aboard
- parch: Number of parents/children aboard
- ticket: Ticket number
- fare: Passenger fare (British Pounds)
- cabin: Does the location of the cabin influence chances of survival?
- embarked: Port of embarkation (C = Cherbourg; Q = Queenstown; S = Southampton)
- boat: Lifeboat; many missing values
- body: Body identification number
- home.dest: Home/destination

Take a look at http://campus.lakeforest.edu/frank/FILES/MLFfiles/Bio150/Titanic/TitanicMETA.pdf for more details on these variables.

We have 1,309 records and 14 attributes, three of which we will discard. The home.dest attribute has too few existing values, the boat attribute is only present for passengers who survived, and the body attribute is only present for passengers who did not survive. We will discard these three columns later on using the data schema.

Preparing the data

Now that we have the initial raw dataset, we are going to shuffle it, split it into a training and a held-out subset, and load it into an S3 bucket.

Splitting the data

In order to build and select the best model, we need to split the dataset into three parts: training, validation, and test, with the usual ratios being 60%, 20%, and 20%. The training and validation sets are used to build several models and select the best one, while the test or held-out set is used for the final performance evaluation on previously unseen data. Since Amazon ML itself splits the dataset used for model training and evaluation into a training and a validation subset, we only need to split our initial dataset into two parts: the global training/evaluation subset (80%) for model building and selection, and the held-out subset (20%) for predictions and the final model performance evaluation.

Shuffle before you split: if you download the original data from the Vanderbilt University website, you will notice that it is ordered by pclass, the class of the passenger, and by alphabetical order of the name column. The first 323 rows correspond to 1st class passengers, followed by 2nd (277) and 3rd (709) class passengers.
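As a quick sanity check, the following short snippet confirms this ordering and gives a feel for the missing values. It is an illustration rather than part of the book's workflow, and it assumes you have downloaded titanic3.csv locally and have pandas installed:

import pandas as pd

# Load the raw file downloaded from the Vanderbilt site
df = pd.read_csv("titanic3.csv")

# The file is sorted by passenger class: 1st, then 2nd, then 3rd
print(df["pclass"].value_counts(sort=False))

# age, cabin, boat, body, and home.dest all contain missing values
print(df.isnull().sum())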
It is important to shuffle the data before you split it, so that all the different variables have similar distributions in both the training and held-out subsets. You can shuffle the data directly in a spreadsheet by creating a new column, generating a random number for each row, and then sorting by that column.

On GitHub: you will find an already shuffled titanic.csv file at https://github.com/alexperrier/packt-aml/blob/master/ch4/titanic.csv. In addition to shuffling the data, we have removed punctuation from the name column (commas, quotes, and parentheses), which can add confusion when parsing a csv file. We end up with two files: titanic_train.csv with 1047 rows and titanic_heldout.csv with 263 rows. These files are also available in the GitHub repo (https://github.com/alexperrier/packt-aml/blob/master/ch4). The next step is to upload these files to S3 so that Amazon ML can access them.

Loading data on S3

AWS S3 is one of the main AWS services dedicated to hosting files and managing their access. Files in S3 can be public and open to the internet, or have access restricted to specific users, roles, or services. S3 is also used extensively by AWS itself for operations such as storing log files or results (predictions, scripts, queries, and so on).

Files in S3 are organized around the notion of buckets. Buckets are placeholders with unique names, similar to domain names for websites. A file in S3 has a unique locator URI: s3://bucket_name/{path_of_folders}/filename. The bucket name is unique across S3. In this section, we will create a bucket for our data, upload the Titanic training file, and open its access to Amazon ML. Go to https://console.aws.amazon.com/s3/home and open an S3 account if you don't have one yet.

S3 pricing: S3 charges for the total volume of files you host, and the cost of file transfers depends on the region where the files are hosted. At the time of writing, for less than 1 TB, AWS S3 charges $0.03/GB per month in the US East region. All S3 prices are available at https://aws.amazon.com/s3/pricing/. See also http://calculator.s3.amazonaws.com/index.html for the AWS cost calculator.

Creating a bucket

Once you have created your S3 account, the next step is to create a bucket for your files. Click on the Create bucket button, then:

1. Choose a name and a region. Since bucket names are unique across S3, you must choose a name that has not already been taken. We chose the name aml.packt for our bucket, and we will use it throughout. Regarding the region, you should always select the region closest to the person or application accessing the files, in order to reduce latency and prices.
2. Set versioning, logging, and tags. Versioning keeps a copy of every version of your files, which prevents accidental deletions. Since versioning and logging induce extra costs, we chose to disable them.
3. Set permissions.
4. Review and save.

Loading the data

To upload the data, simply click on the Upload button and select the titanic_train.csv file we created earlier. You should, at this point, have the training dataset uploaded to your AWS S3 bucket. We added a /data folder in our aml.packt bucket to compartmentalize our objects; this will be useful later on when the bucket also contains folders created by S3. At this point, only the owner of the bucket (you) is able to access and modify its contents.
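If you prefer to script the preparation and upload instead of working in a spreadsheet and the console, the following sketch reproduces the same steps with pandas and boto3. It is an illustration rather than the book's own code: the file names, the aml.packt bucket, and the /data prefix are taken from the text above, while the random seed and the use of boto3 (with credentials already configured) are assumptions.

import pandas as pd
import boto3

# Load the raw dataset; the three unused columns (home.dest, boat, body)
# are kept here and discarded later via the Amazon ML data schema
df = pd.read_csv("titanic3.csv")

# Strip commas, quotes, and parentheses from the name column
df["name"] = df["name"].str.replace(r'[,"()]', "", regex=True)

# Shuffle the rows, then split roughly 80% / 20%
df = df.sample(frac=1.0, random_state=42).reset_index(drop=True)
cut = int(len(df) * 0.8)
df.iloc[:cut].to_csv("titanic_train.csv", index=False)
df.iloc[cut:].to_csv("titanic_heldout.csv", index=False)

# Upload the training file into the /data folder of the bucket
s3 = boto3.client("s3")
s3.upload_file("titanic_train.csv", "aml.packt", "data/titanic_train.csv")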
We need to grant the Amazon ML service permissions to read the data and add other files to the bucket. When creating the Amazon ML datasource, we will be prompted to grant these permissions in the Amazon ML console. We can also modify the bucket's policy upfront.

Granting permissions

We need to edit the policy of the aml.packt bucket. To do so, we perform the following steps:

1. Click into your bucket.
2. Select the Permissions tab.
3. In the drop-down, select Bucket Policy, as shown in the following screenshot. This will open an editor.
4. Paste in the following JSON. Make sure to replace {YOUR_BUCKET_NAME} with the name of your bucket, and save:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AmazonML_s3:ListBucket",
      "Effect": "Allow",
      "Principal": { "Service": "machinelearning.amazonaws.com" },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::{YOUR_BUCKET_NAME}",
      "Condition": { "StringLike": { "s3:prefix": "*" } }
    },
    {
      "Sid": "AmazonML_s3:GetObject",
      "Effect": "Allow",
      "Principal": { "Service": "machinelearning.amazonaws.com" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::{YOUR_BUCKET_NAME}/*"
    },
    {
      "Sid": "AmazonML_s3:PutObject",
      "Effect": "Allow",
      "Principal": { "Service": "machinelearning.amazonaws.com" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::{YOUR_BUCKET_NAME}/*"
    }
  ]
}

Further details on this policy are available at http://docs.aws.amazon.com/machine-learning/latest/dg/granting-amazon-ml-permissions-to-read-your-data-from-amazon-s3.html. Once again, this step is optional, since Amazon ML will prompt you for access to the bucket when you create the datasource.

Formatting the data

Amazon ML works on comma-separated values files (.csv): a very simple format where each row is an observation and each column is a variable or attribute. There are, however, a few conditions that should be met:

- The data must be encoded in plain text using a character set such as ASCII, Unicode, or EBCDIC.
- All values must be separated by commas; if a value contains a comma, it should be enclosed in double quotes.
- Each observation (row) must be smaller than 100 KB.

There are also conditions regarding the end-of-line characters that separate rows. Special care must be taken when using Excel on OS X (Mac), as explained on this page: http://docs.aws.amazon.com/machine-learning/latest/dg/understanding-the-data-format-for-amazon-ml.html.

What about other data file formats? Unfortunately, Amazon ML datasources are only compatible with csv files and Redshift databases, and do not accept formats such as JSON, TSV, or XML. However, other services such as Athena, a serverless database service, do accept a wider range of formats.

Summary

In this article we learnt how to find and work with datasets, using the Titanic dataset and Amazon Web Services. We also learnt how to prepare the data and load it onto Amazon S3.

Resources for Article: Further resources on this subject: Processing Massive Datasets with Parallel Streams – the MapReduce Model [article] Processing Next-generation Sequencing Datasets Using Python [article] Combining Vector and Raster Datasets [article]
Building a Strong Foundation

Packt
04 May 2017
28 min read
In this article, by Mickey Macdonald, author of the book Mastering C++ Game Development, we will cover how these libraries can work together and build some of the libraries needed to round out the structure.

(For more resources related to this topic, see here.)

To get started, we will focus on arguably one of the most important aspects of any game project: the rendering system. A proper, performant implementation not only takes a significant amount of time, it also requires specialized knowledge of video driver implementations and of the mathematics used in computer graphics. Having said that, it is not, in fact, impossible to create a custom low-level graphics library yourself; it is just not recommended if your end goal is simply to make video games. So, instead of creating a low-level implementation themselves, most developers turn to a few libraries that provide abstracted access to the bare metal of the graphics device. We will be using a few different graphics APIs to help speed up the process and provide consistency across platforms. These APIs include the following:

- OpenGL (https://www.opengl.org/): The Open Graphics Library (OpenGL) is an open, cross-language, cross-platform application programming interface (API) used for rendering 2D and 3D graphics. The API provides low-level access to the graphics processing unit (GPU).
- SDL (https://www.libsdl.org/): Simple DirectMedia Layer (SDL) is a cross-platform software development library designed to deliver a low-level hardware abstraction layer to multimedia hardware components. While it provides its own mechanism for rendering, SDL can use OpenGL to provide full 3D rendering support.

While these APIs save us time and effort by providing some abstraction over the graphics hardware, it will quickly become apparent that this level of abstraction is still not high enough. You will need another layer of abstraction to create an efficient way of reusing these APIs across multiple projects. This is where the helper and manager classes come in. These classes provide the needed structure and abstraction for us and other coders. They wrap all the common code needed to set up and initialize the libraries and the hardware. The code that is required by any project, regardless of gameplay or genre, can be encapsulated in these classes and becomes part of the "engine".

In this article, we will cover the following topics:

- Building helper classes
- Encapsulation with managers
- Creating interfaces

Building helper classes

In object-oriented programming, a helper class is used to assist in providing some functionality that is not directly the main goal of the application in which it is used. Helper classes come in many forms and are often a catch-all term for classes that provide functionality outside of the current scope of a method or class. Many different programming patterns make use of helper classes, and in our examples we too will make heavy use of them. Here is just one example.

Let's take a look at the very common set of steps used to create a window. It is safe to say that most of the games you create will have some sort of display, and the setup is generally the same across different targets, in our case Windows and macOS. Having to retype the same instructions over and over for each new project seems like a waste. That sort of situation is perfect for abstracting away into a helper class that will eventually become part of the engine itself.
The code below is the header for the Window class included in the demo code examples. To start, we have a few necessary includes: SDL, GLEW (the OpenGL Extension Wrangler, a helper library for managing OpenGL extensions), and lastly the standard string class:

#pragma once
#include <SDL/SDL.h>
#include <GL/glew.h>
#include <string>

Next, we have an enum, WindowFlags. We use this for setting some bitwise operations that change the way the window is displayed: invisible, full screen, or borderless. You will notice that I have wrapped the code in the namespace BookEngine; this is essential for keeping naming conflicts from happening and will be very helpful once we start importing our engine into projects:

namespace BookEngine
{
  enum WindowFlags //Used for bitwise passing
  {
    INVISIBLE = 0x1,
    FULLSCREEN = 0x2,
    BORDERLESS = 0x4
  };

Now we have the Window class itself, with a few public methods: first, the default constructor and destructor. It is a good idea to include a default constructor and destructor even if they are empty, as shown here. Although the compiler will generate its own, explicitly specifying them is needed if you plan on creating smart or managed pointers, such as unique_ptr, to the class:

  class Window
  {
  public:
    Window();
    ~Window();

Next we have the Create function. This is the function that builds or creates the window. It takes a few arguments for the creation of the window, such as the name of the window, the screen width and height, and any flags we want to set (see the previously mentioned enum):

    void Create(std::string windowName, int screenWidth, int screenHeight, unsigned int currentFlags);

Then we have two getter functions, which simply return the width and height, respectively:

    int GetScreenWidth() { return m_screenWidth; }
    int GetScreenHeight() { return m_screenHeight; }

The last public function is the SwapBuffer function; this is an important function that we will look at in more depth shortly:

    void SwapBuffer();

To close out the class definition, we have a few private variables. The first is a pointer to an SDL_Window, appropriately named m_SDL_Window. Then we have two holder variables to store the width and height of our screen. This takes care of the definition of the new Window class, and as you can see it is pretty simple at face value. It provides easy access to the creation of the window without the calling developer having to know the exact details of the implementation, which is one aspect that makes object-oriented programming and this method so powerful:

  private:
    SDL_Window* m_SDL_Window;
    int m_screenWidth;
    int m_screenHeight;
  };
}

To get a real sense of the abstraction, let's walk through the implementation of the Window class and see all the pieces it takes to create the window itself:

#include "Window.h"
#include "Exception.h"
#include "Logger.h"

namespace BookEngine
{
  Window::Window()
  {
  }

  Window::~Window()
  {
  }

The Window.cpp file starts out with the needed includes. Of course we need to include Window.h, but you will also note that we include the Exception.h and Logger.h header files. These are two other helper files created to abstract their own processes: Exception.h is a helper class that provides an easy-to-use exception handling system, and Logger.h is a helper class that, as its name says, provides an easy-to-use logging system. After the includes, we again wrap the code in the BookEngine namespace and provide the empty constructor and destructor for the class.
The Create function is the first to be implemented. This function contains the steps needed to create the actual window. It starts out by setting the window display flags, using a series of if statements to build a bitwise representation of the options for the window. We use the enum we created before to make this easier for us humans to read:

void Window::Create(std::string windowName, int screenWidth, int screenHeight, unsigned int currentFlags)
{
  Uint32 flags = SDL_WINDOW_OPENGL;
  if (currentFlags & INVISIBLE)
  {
    flags |= SDL_WINDOW_HIDDEN;
  }
  if (currentFlags & FULLSCREEN)
  {
    flags |= SDL_WINDOW_FULLSCREEN_DESKTOP;
  }
  if (currentFlags & BORDERLESS)
  {
    flags |= SDL_WINDOW_BORDERLESS;
  }

After we set the window's display options, we move on to using the SDL library to create the window. As I mentioned before, we use libraries such as SDL to help ease the creation of such structures. We start by wrapping these function calls in a try statement; this will allow us to catch any issues and pass them along to our Exception class, as we will see soon:

  try {
    //Open an SDL window
    m_SDL_Window = SDL_CreateWindow(windowName.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, screenWidth, screenHeight, flags);

The first line sets the private member variable m_SDL_Window to a newly created window, using the passed-in variables for the name, width, height, and any flags. We also set the default window spawn point to the screen center by passing the SDL_WINDOWPOS_CENTERED define to the function:

    if (m_SDL_Window == nullptr)
      throw Exception("SDL Window could not be created!");

After we have attempted to create the window, it is a good idea to check whether the process succeeded. We do this with a simple if statement, checking whether the variable m_SDL_Window is set to nullptr; if it is, we throw an Exception. We pass the Exception the string "SDL Window could not be created!". This is the error message that we can then print out in a catch statement; we will see an example of this later on. Using this method, we provide ourselves some simple error checking.

Once we have created our window and done some error checking, we can move on to setting up a few other components. One of these components is the OpenGL library, which requires what is referred to as a context to be set. An OpenGL context can be thought of as a set of states that describes all the details related to the rendering of the application. The OpenGL context must be set before any drawing can be done. One problem is that creating a window and an OpenGL context is not part of the OpenGL specification itself, which means that every platform can handle this differently. Luckily for us, the SDL API again abstracts the heavy lifting and allows us to do this all in one line of code. We create an SDL_GLContext variable named glContext and assign it the return value of the SDL_GL_CreateContext function, which takes one argument, the SDL_Window we created earlier. After this we, of course, do a simple check to make sure everything worked as intended, just like we did earlier with the window creation:

    //Set up our OpenGL context
    SDL_GLContext glContext = SDL_GL_CreateContext(m_SDL_Window);
    if (glContext == nullptr)
      throw Exception("SDL_GL context could not be created!");

The next component we need to initialize is GLEW. Again, this is abstracted for us down to one simple command, glewInit(). This function takes no arguments but does return an error status code.
We can use this status code to perform a similar error check to the ones we did for the window and the OpenGL context. This time we check it against the defined value GLEW_OK; if it evaluates to anything other than GLEW_OK, we throw an Exception to be caught later on:

    //Set up GLEW (optional)
    GLenum error = glewInit();
    if (error != GLEW_OK)
      throw Exception("Could not initialize glew!");

Now that the needed components are initialized, this is a good time to log some information about the device running the application. You can log all kinds of data about the device, which can provide valuable insights when trying to track down obscure issues. In this case, I am polling the system for the version of OpenGL that is running the application and then using the Logger helper class to print it out to a "Runtime" text file:

    //print some log info
    std::string versionNumber = (const char*)glGetString(GL_VERSION);
    WriteLog(LogType::RUN, "*** OpenGL Version: " + versionNumber + "***");

Now we set the clear color, the color that will be used to refresh the graphics card; in this case, it will be the background color of our application. The glClearColor function takes four float values that represent the red, green, blue, and alpha values in a range of 0.0 to 1.0. Alpha is the transparency value, where 1.0f is opaque and 0.0f is completely transparent:

    //Set the background color to blue
    glClearColor(0.0f, 0.0f, 1.0f, 1.0f);

The next line sets the VSYNC value; VSYNC is a mechanism that attempts to match the application's frame rate to that of the physical display. The SDL_GL_SetSwapInterval function takes one argument, an integer that can be 1 for on or 0 for off:

    //Enable VSYNC
    SDL_GL_SetSwapInterval(1);

The last two lines that make up the try block enable blending and set the method used when performing alpha blending. For more information on these specific functions, check out the OpenGL development documents:

    //Enable alpha blend
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
  }

After our try block, we have to include the catch block or blocks. This is where we capture any thrown errors that have occurred. In our case, we are just going to catch all Exceptions. We use the WriteLog function from the Logger helper class to add the exception message, e.reason, to the error log text file. This is a very basic case, but of course we could do more here, possibly even recover from an error if possible:

  catch (Exception e)
  {
    //Write Log
    WriteLog(LogType::ERROR, e.reason);
  }
}

Finally, the last function in the Window.cpp file is the SwapBuffer function. Without going too deep into the implementation, what swapping buffers does is exchange the front and back buffers of the GPU, which in a nutshell allows smoother drawing to the screen. It is a complicated process that, again, has been abstracted by the SDL library. Our SwapBuffer function abstracts this process once more, so that when we want to swap the buffers we simply call SwapBuffer instead of having to call the SDL function and specify the window, which is exactly what is done inside the function:

void Window::SwapBuffer()
{
  SDL_GL_SwapWindow(m_SDL_Window);
}
}

So, as you can see, building up these helper functions can go a long way toward making the process of development and iteration quicker and simpler. Next, we will look at another programming method that again abstracts the heavy lifting from the developer's hands and provides a form of control over the process: a management system.
Encapsulation with managers

When working with complex systems such as input and audio systems, it can easily become tedious and unwieldy to control and check each state and the other internals of the system directly. This is where the idea of the "manager" programming pattern comes in. Using abstraction and polymorphism, we can create classes that allow us to modularize and simplify the interaction with these systems. Manager classes can be found in many different use cases; essentially, if you see a need for structured control over a certain system, it could be a candidate for a manager class.

Stepping away from the rendering system for a second, let's take a look at a very common task that any game will need to perform: handling input. Since every game needs some form of input, it only makes sense to move the code that handles it into a class that we can use over and over again. Let's take a look at the InputManager class, starting with the header file:

#pragma once
#include <unordered_map>
#include <glm/glm.hpp>

namespace BookEngine
{
  class InputManager
  {
  public:
    InputManager();
    ~InputManager();

The InputManager class starts just like the others: we have the includes we need, and again we wrap the class in the BookEngine namespace for convenience and safety. The standard constructor and destructor are also defined.

Next, we have a few more public functions. First, the Update function, which will, not surprisingly, update the input system. Then we have the KeyPress and KeyRelease functions; these both take an integer value corresponding to a keyboard key and fire when the key is pressed or released, respectively:

    void Update();
    void KeyPress(unsigned int keyID);
    void KeyRelease(unsigned int keyID);

After the KeyPress and KeyRelease functions, we have two more key-related functions: isKeyDown and isKeyPressed. Like KeyPress and KeyRelease, they take integer values that correspond to keyboard keys. The noticeable difference is that these functions return a Boolean value based on the status of the key. We will see more about this in the implementation file coming up:

    bool isKeyDown(unsigned int keyID); //Returns true if key is held
    bool isKeyPressed(unsigned int keyID); //Returns true if key was pressed this update

The last two public functions in the InputManager class are SetMouseCoords and GetMouseCoords, which do exactly what the names suggest and set or get the mouse coordinates, respectively:

    void SetMouseCoords(float x, float y);
    glm::vec2 GetMouseCoords() const { return m_mouseCoords; };

Moving on to the private members and functions, we declare a few items that store information about the keys and the mouse. First, we have a private helper function, WasKeyDown, which returns whether a key was down during the previous update. Next, we have two unordered maps that store the current and previous key maps. The last value we store is the mouse coordinates. We use a vec2 construct from another helper library, the OpenGL Mathematics (GLM) library. This vec2, which is just a two-dimensional vector, stores the x and y coordinate values of the mouse cursor, since it sits on a 2D plane, the screen. If you are looking for a refresher on vectors and the Cartesian coordinate system, I highly recommend the Beginning Math Concepts for Game Developers book by Dr. John P. Flynt:
John P Flynt: private: bool WasKeyDown(unsigned int keyID); std::unordered_map<unsigned int, bool> m_keyMap; std::unordered_map<unsigned int, bool> m_previousKeyMap; glm::vec2 m_mouseCoords; }; } Now let's look at the implementation, the InputManager.cpp file. Again, we start out with the includes and the namespace wrapper. Then we have the constructor and destructor. The highlight to note here is the setting of m_mouseCoords to 0.0f in the constructor: namespace BookEngine { InputManager::InputManager() : m_mouseCoords(0.0f) { } InputManager::~InputManager() { } Next is the Update function. This is a simple update where we step through each key in the key map and copy it over to the previous key map holder, m_previousKeyMap. void InputManager::Update() { for (auto& iter : m_keyMap) { m_previousKeyMap[iter.first] = iter.second; } } The next function is the KeyPress function. In this function, we use the trick of an associative array to test and insert the key pressed which matches the ID passed in. The trick is that if the item located at the keyID index does not exist, it will automatically be created: void InputManager::KeyPress(unsigned int keyID) { m_keyMap[keyID] = true; } The KeyRelease function is the same setup as the KeyPress function, except that we are setting the keyMap item at the keyID index to false. void InputManager::KeyRelease(unsigned int keyID) { m_keyMap[keyID] = false; } After the KeyPress and KeyRelease functions, we implement the isKeyDown and isKeyPressed functions. First, the isKeyDown function; here we want to test if a key is already pressed down. In this case, we take a different approach to testing the key than in the KeyPress and KeyRelease functions and avoid the associative array trick. This is because we don't want to create a key if it does not already exist, so instead, we do the lookup manually: bool InputManager::isKeyDown(unsigned int keyID) { auto key = m_keyMap.find(keyID); if (key != m_keyMap.end()) return key->second; // Found the key return false; } The isKeyPressed function is quite simple. Here we test to see if the key that matches the passed in ID is pressed down, by using the isKeyDown function, and that it was not already down during the previous update, by also passing the ID to WasKeyDown. If both of these conditions are met, we return true; otherwise, we return false. bool InputManager::isKeyPressed(unsigned int keyID) { if (isKeyDown(keyID) && !WasKeyDown(keyID)) { return true; } return false; } Next, we have the WasKeyDown function. Much like the isKeyDown function, we do a manual lookup to avoid accidentally creating an entry through the associative array trick: bool InputManager::WasKeyDown(unsigned int keyID) { auto key = m_previousKeyMap.find(keyID); if (key != m_previousKeyMap.end()) return key->second; // Found the key return false; } The final function in the InputManager is SetMouseCoords. This is a very simple "setter" function that takes the passed in floats and assigns them to the x and y members of the two-dimensional vector, m_mouseCoords. void InputManager::SetMouseCoords(float x, float y) { m_mouseCoords.x = x; m_mouseCoords.y = y; } } Creating interfaces Sometimes you are faced with a situation where you need to describe capabilities and provide access to general behaviors of a class without committing to a particular implementation. This is where the idea of interfaces or abstract classes comes into play.
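As a quick generic illustration of the idea (a hypothetical sketch, not code from the example engine), an interface in C++ is typically just a class made up of a virtual destructor and pure virtual functions that concrete classes must override:

// Hypothetical example of the interface pattern; the names here are for illustration only.
class IRenderer
{
public:
    virtual ~IRenderer() {}            // virtual destructor so derived objects clean up correctly
    virtual void Clear() = 0;          // pure virtual: every renderer must provide this
    virtual void Present() = 0;        // pure virtual: every renderer must provide this
};

// A concrete class commits to a particular implementation of those capabilities
class GLRenderer : public IRenderer
{
public:
    void Clear() override   { /* e.g. clear the color buffer */ }
    void Present() override { /* e.g. swap the buffers */ }
};

Code written against IRenderer does not need to know or care which concrete renderer it is actually talking to. The IGame interface we build next applies this same pattern to the game as a whole.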
Using interfaces provides a simple base class that other classes can then inherit from without having to worry about the intrinsic details. Building strong interfaces can enable rapid development by providing a standard class to interact with. While interfaces could, in theory, be created of any class, it is more common to see them used in situations where the code is commonly being reused. Let's take a look at an interface from the example code in the repository. This interface will provide access to the core components of the Game. I have named this class IGame, using the prefix I to identify this class as an interface. The following is the implementation beginning with the definition file IGame.h. To begin with, we have the needed includes and the namespace wrapper. You will notice that the files we are including are some of the ones we just created. This is a prime example of the continuation of the abstraction. We use these building blocks to continue to build the structure that will allow this seamless abstraction: #pragma once #include <memory> #include "BookEngine.h" #include "Window.h" #include "InputManager.h" #include "ScreenList.h" namespace BookEngine { Next, we have a forward declaration. This declaration is for another interface that has been created for screens. The full source code to this interface and its supporting helper classes are available in the code repository. class IScreen;using forward declarations like this is a common practice in C++. If the definition file only requires the simple definition of a class, not adding the header for that class will speed up compile times. Moving onto the public members and functions, we start off the constructor and destructor. You will notice that this destructor in this case is virtual. We are setting the destructor as virtual to allow us to call delete on the instance of the derived class through a pointer. This is handy when we want our interface to handle some of the cleanup directly as well. class IGame { public: IGame(); virtual ~IGame(); Next we have declarations for the Run function and the ExitGame function. void Run(); void ExitGame(); We then have some pure virtual functions, OnInit, OnExit, and AddScreens. Pure virtual functions are functions that must be overridden by the inheriting class. By adding the =0; to the end of the definition, we are telling the compiler that these functions are purely virtual. When designing your interfaces, it is important to be cautious when defining what functions must be overridden. It's also very important to note that having pure virtual function implicitly makes the class it is defined for abstract. Abstract classes cannot be instantiated directly because of this and any derived classes need to implement all inherited pure virtual functions. If they do not, they too will become abstract. virtual void OnInit() = 0; virtual void OnExit() = 0; virtual void AddScreens() = 0; After our pure virtual function declarations, we have a function OnSDLEvent which we use to hook into the SDL event system. This provides us support for our input and other event-driven systems: void OnSDLEvent(SDL_Event& event); The public function in the IGame interface class is a simple helper function GetFPS that returns the current FPS. Notice the const modifiers, they identify quickly that this function will not modify the variable's value in any way: declarations const float GetFPS() const { return m_fps; } In our protected space, we start with a few function declarations. 
First is the Init or initialization function. This will be the function that handles a good portion of the setup. Then we have two virtual functions Update and Draw. Like pure virtual functions, a virtual function is a function that can be overridden by a derived class's implementation. Unlike a pure virtual function, the virtual function does not make the class abstract by default and does not have to be overridden. Virtual and pure virtual functions are keystones of polymorphic design. You will quickly see their benefits as you continue your development journey: protected: bool Init(); virtual void Update(); virtual void Draw(); To close out the IGame definition file, we have a few members to house different objects and values. I am not going to go through these line by line since I feel they are pretty self-explanatory: declarations std::unique_ptr<ScreenList> m_screenList = nullptr; IGameScreen* m_currentScreen = nullptr; Window m_window; InputManager m_inputManager; bool m_isRunning = false; float m_fps = 0.0f; }; } Now that we have taken a look at the definition of our interface class, let's quickly walk through the implementation. The following is the IGame.cpp file. To save time and space, I am going to highlight the key points. For the most part, the code is self-explanatory, and the source located in the repository is well commented for more clarity: #include "IGame.h" #include "IScreen.h" #include "ScreenList.h" #include "Timing.h" namespace BookEngine { IGame::IGame() { m_screenList = std::make_unique<ScreenList>(this); } IGame::~IGame() { } Our implementation starts out with the constructor and destructor. The constructor is simple, its only job is to add a unique pointer of a new screen using this IGame object as the argument to pass in. See the IScreen class for more information on screen creation. Next, we have the implementation of the Run function. This function, when called will set the engine in motion. Inside the function, we do a quick check to make sure we have already initialized our object. We then use yet another helper class, FPSlimiter, to set the max fps that our game can run. After that, we set the isRunning boolean value to true, which we then use to control the game loop: void IGame::Run() { if (!Init()) return; FPSLimiter fpsLimiter; fpsLimiter.SetMaxFPS(60.0f); m_isRunning = true; Next is the game loop. In the game loop, we do a few simple calls. First, we start the fps limiter. We then call the update function on our input manager. It is a good idea always to check input before doing other updates or drawing since their calculations are sure to use the new input values. After we update the input manager, we recursively call our Update and Draw class, which we will see shortly. We close out the loop by ending the fpsLimiter function and calling SwapBuffer on the Window object. ///Game Loop while (m_isRunning) { fpsLimiter.Begin(); m_inputManager.Update(); Update(); Draw(); m_fps = fpsLimiter.End(); m_window.SwapBuffer(); } } The next function we implement is the ExitGame function. Ultimately, this will be the function that will be called on the final exit of the game. We close out, destroy, and free up any memory that the screen list has created and set the isRunning Boolean to false, which will put an end to the loop: void IGame::ExitGame() { m_currentScreen->OnExit(); if (m_screenList) { m_screenList->Destroy(); m_screenList.reset(); //Free memory } m_isRunning = false; } Next up is the Init function. 
This function will initialize all the internal object settings and call the initialization on the connected systems. Again, this is an excellent example of object-oriented programming (OOP) and polymorphism. Handling initialization in this manner creates a cascading effect, keeping the code modular and easier to modify: bool IGame::Init() { BookEngine::Init(); SDL_GL_SetAttribute(SDL_GL_ACCELERATED_VISUAL, 1); m_window.Create("BookEngine", 1024, 780, 0); OnInit(); AddScreens(); m_currentScreen = m_screenList->GetCurrentScreen(); m_currentScreen->OnEntry(); m_currentScreen->Run(); return true; } Next, we have the Update function. In this Update function, we create a structure that allows us to execute certain code based on the state that the current screen is in. We accomplish this using a simple switch-case statement with the enumerated elements of the ScreenState type as the cases. This setup is considered a simple finite state machine and is a very powerful design method used throughout game development: void IGame::Update() { if (m_currentScreen) { switch (m_currentScreen->GetScreenState()) { case ScreenState::RUNNING: m_currentScreen->Update(); break; case ScreenState::CHANGE_NEXT: m_currentScreen->OnExit(); m_currentScreen = m_screenList->MoveToNextScreen(); if (m_currentScreen) { m_currentScreen->Run(); m_currentScreen->OnEntry(); } break; case ScreenState::CHANGE_PREVIOUS: m_currentScreen->OnExit(); m_currentScreen = m_screenList->MoveToPreviousScreen(); if (m_currentScreen) { m_currentScreen->Run(); m_currentScreen->OnEntry(); } break; case ScreenState::EXIT_APP: ExitGame(); break; default: break; } } else { //we have no screen so exit ExitGame(); } } After our Update, we implement the Draw function. In our function, we only do a couple of things. First, we reset the viewport as a simple safety check; then, if the current screen's state matches the enumerated value RUNNING, we again use polymorphism to pass the Draw call down the object line: void IGame::Draw() { //For safety glViewport(0, 0, m_window.GetScreenWidth(), m_window.GetScreenHeight()); //Check if we have a screen and that the screen is running if (m_currentScreen && m_currentScreen->GetScreenState() == ScreenState::RUNNING) { m_currentScreen->Draw(); } } The last function we need to implement is the OnSDLEvent function. As mentioned in the definition section of this class, we use this function to connect our input manager system to SDL's built-in event system. Every key press or mouse movement is handled as an event. Based on the type of event that has occurred, we again use a switch-case statement to route it to the appropriate input manager call: void IGame::OnSDLEvent(SDL_Event & event) { switch (event.type) { case SDL_QUIT: m_isRunning = false; break; case SDL_MOUSEMOTION: m_inputManager.SetMouseCoords((float)event.motion.x, (float)event.motion.y); break; case SDL_KEYDOWN: m_inputManager.KeyPress(event.key.keysym.sym); break; case SDL_KEYUP: m_inputManager.KeyRelease(event.key.keysym.sym); break; case SDL_MOUSEBUTTONDOWN: m_inputManager.KeyPress(event.button.button); break; case SDL_MOUSEBUTTONUP: m_inputManager.KeyRelease(event.button.button); break; } } } Well, that takes care of the IGame interface.
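One detail worth calling out is the ScreenState enumeration that drives the switch statement in Update. Its definition lives with the screen classes in the repository and is not reproduced in this excerpt; a minimal declaration consistent with the cases handled above might look like the following (the NONE value is an assumption added for completeness):

// Hypothetical sketch of the screen state enumeration used by the simple finite state machine.
// Only the values referenced in IGame::Update are certain; NONE is assumed.
enum class ScreenState
{
    NONE,
    RUNNING,
    EXIT_APP,
    CHANGE_NEXT,
    CHANGE_PREVIOUS
};

Keeping the states in a scoped enumeration like this makes the finite state machine explicit and easy to extend with new screen transitions later.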
With the IGame interface created, we can now create a new project that utilizes this and the other interfaces in the example engine to create a game and initialize it all with just a few lines of code: #pragma once #include <BookEngine/IGame.h> #include "GamePlayScreen.h" class App : public BookEngine::IGame { public: App(); ~App(); virtual void OnInit() override; virtual void OnExit() override; virtual void AddScreens() override; private: std::unique_ptr<GameplayScreen> m_gameplayScreen = nullptr; }; The highlights to note here are that, first, the App class inherits from the BookEngine::IGame interface and, second, it provides all the necessary overrides that the inherited interface requires. Next, if we take a look at the main.cpp file, the entry point for our application, you will see the simple commands to set up and kick off all the amazing things our interfaces, managers, and helpers abstract for us: #include <BookEngine/IGame.h> #include "App.h" int main(int argc, char** argv) { App app; app.Run(); return 0; } As you can see, this is far simpler than having to recreate the framework from scratch every time we want to start a new project. To see the output of the framework, build the BookEngine project, then build and run the example project; the accompanying screenshots (not reproduced here) show the running example project on Windows and on macOS. Summary In this article, we covered quite a bit. We took a look at the different methods of using object-oriented programming and polymorphism to create a reusable structure for all your game projects. We walked through the differences between Helper, Manager, and Interface classes with examples from real code. Resources for Article: Further resources on this subject: Game Development Using C++ [article] C++, SFML, Visual Studio, and Starting the first game [article] Common Game Programming Patterns [article]

Deploying First Container

Packt
11 Apr 2017
11 min read
In this article by Srikant Machiraju, author of the book Learning Windows Server Containers, we will get acquainted with containers and containerization. Containerization helps you build software in layers, containers inspire distributed development, packaging, and publishing in the form of containers. Developers or IT administrators just have to choose a BaseOS Image, create customized layers as per their requirements, and distribute using Public or Private Repositories.Microsoft and Docker together have provided an amazing toolset that helps you build and deploy containers within no time. It is very easy to setup a dev/test environment as well. Microsoft Windows Server Operating System or Windows 10 Desktop OS comes with plug and play features for running Windows Server containers or Hyper-V Containers.Docker Hub,a public repository for images,serves as a huge catalogue of customized images built by community or docker enthusiasts. The images on DockerHub are freely available for anyone to download, customize,and distribute images. In this article, we will learn how to create and configure container development environments. The following are a few more concepts that you will learn in this article: Preparing Windows Server Containers Environment Pulling images from Docker Hub Installing Base OS Images (For more resources related to this topic, see here.) Preparing Development Environment In order to start creating Windows Containers,you need an instance of Windows Server 2016 or Windows 10 Enterprise/Professional Edition (with Anniversary Update). Irrespective of the environment, the PowerShell/Docker commands described in thisarticlefor creating and packaging containers/imagesare the same.The following are the options we have to setup a windows server container development environment: Windows 10: Using Windows 10 Enterprise or Professional Edition with Anniversary update you can create Hyper-V Containers by enabling the containers role. Docker or PowerShell can be used to manage the containers. We will learn how to configure the Windows 10 environment in the following section. Important: Windows 10 only supports Hyper-V Containers created using NanoServer Base OS Image; it does not support Windows Server Containers. Windows Server 2016: There are two options for working with containers on Windows Server 2016: You can download the Windows Server 2016 ISO from here (https://www.microsoft.com/en-in/evalcenter/evaluate-windows-server-technical-preview) and install it on a virtual machine running on Hyper-V or Virtual Box.For running Windows Server 2016 the host machine should have Hyper-V virtualization enabled.Additionally, for the containers to access theInternet, ensure that thenetwork is sharable between the host and Hyper-VVMs. Windows Azure provides a readymade instance of Windows Server 2016 with Containers configured. This so far is the easiest option available. In this article,I will be using Windows Server 2016 with Containers enabled on Azure to create and manageWindow Server Containers. Windows Server 2016 is still in preview and the latest version available at the time of writing is Technical Preview 5. Containerson Windows 10 The following steps explain how to setup a dev/test environment on Windows 10 for learning container development using Hyper-V Containers. Before continuing further, ensure thatyou're running Windows 10 Professional/Enterprise version with anniversary update. 
For validating Windows Edition on Windows 10,click Start and type This PC, andthen right-click on This PC and click on Properties. Check the Windows Edition section for the Windows 10 edition. If your PC shows Windows 10 Enterprise or Professional, you can download and install the Anniversary update from here (https://support.microsoft.com/en-us/help/12387/windows-10-update-history). If you do not have any of the above, please proceed to the following section, which explains how to work with containers using Windows Server 2016 environment on Azure and on-premises. Follow these steps to configure Hyper-V containers on Windows 10: Click on Start Menu and type powershell. Right-click powershell CLI (Command Line Interface) and Run as administrator. Run the following command to install the containers feature on Windows 10: Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All Run the following command to install the Hyper-V containers feature on Windows 10. We will only be using Windows Server Containers in this article: Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All Restart the PC by running the following command: Restart-Computer –Force Run the following command to update the registry settings: Set-ItemProperty -Path 'HKLM:SOFTWAREMicrosoftWindows NTCurrentVersionVirtualizationContainers' -Name VSmbDisableOplocks -Type DWord -Value 1 -Force Although PowerShell can be used to manage and run containers, Docker commands give a full wealth of options for container management. Microsoft PowerShell support for Windows Container development is still a work in progress, so we will mix and match PowerShell and Docker as per the scenarios. Run the following set of steps one by one to install and configure Docker on Windows 10: Invoke-WebRequest"https://master.dockerproject.org/windows/amd64/docker-1.13.0-dev.zip" -OutFile"$env:TEMPdocker-1.13.0-dev.zip" –UseBasicParsing Expand-Archive -Path "$env:TEMPdocker-1.13.0-dev.zip" -DestinationPath $env:ProgramFiles $env:path += ";c:program filesdocker" [Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:Program FilesDocker", [EnvironmentVariableTarget]::Machine) dockerd --register-service Start the Docker Service by running the following command: Start-Service docker In order to develop windows server containers, we need any Windows Base OS Image,such as windowsservercore or nanoserver. Since Windows 10 supports nanoservers only, run the following command to download thenanoservercorebase OS image. The following command might take some time depending on your bandwidth; it downloadsand extracts thenanoserver base OS image, which is 970 MB in size approximately: docker pull microsoft/nanoserver Important: At the time of writing, Windows 10 can only run Nano Server Images. Even though the windowsservercore image gets download successfully, running containers using this image will fail due to incompatibility with the OS. Ensure that the images are successfully downloaded by running the following command: docker images Windows Server Containers On-Premise This section will help you to download and install Windows Server 2016 on a Virtual Machine using a hosted virtualization software such as Hyper-V or Virtual Box. 
Windows Server 2016 comes with two installation options, Windows Server 2016 Core and Windows Server 2016 Full feature, as shown in the following screenshot: Windows Server 2016 Core is a No GUI version of Windows with minimalistic server features and bare minimum size, whereas thefull version is the traditional Server Operating System and it comes with all features installed. No matter which installation option you choose, you will be able to install and run Windows Server Containers. You cannot change the NoGUI version to full version post installation, so make sure you are choosing the right option during installation. You can download the Windows Server ISO from here (https://www.microsoft.com/en-in/evalcenter/evaluate-windows-server-technical-preview, make sure you copy the Activation Keyas well from here) and setup ahostedvirtualization software such as Virtual Box or Hyper-V. Once you start installing from the ISO you will be presented with the following screen, which lets you select full installation with Desktop experience or Just the core with Command Line Interface. Windows Server Containers on Azure On Azure we are going to use an image that comes preinstalled with the containers feature. You can also start with plain Windows Server 2016 on Azure and install the Windows Containers role from Add/Remove Windows Features and then install Docker Engine or use the steps mentioned in Containers on Windows 10. In this article, we are going to create Windows Server 2016 Virtual Machine on Microsoft Azure. The name of the image on Windows Azure is Windows Server 2016 with Containers Tech Preview 5. Microsoft Azure does not support Hyper-V containers on Azure. However, you can deploy these container types when running WS 2016 on premises or on Windows 10. Note:For creating a VMon Azure you would need an Azure account. Microsoft provides a free account for beginners to learn and practice Azure for 30 days/$200. More details regarding creating a free account can be found here (https://azure.microsoft.com/en-in/free/). Container options on WS 2016 TP5 Windows Server 2016 supports two types of containers: Windows Server Containers and Hyper-V Containers. You can run any of these Containers on Windows Server 2016. But for running Hyper-V Containers we would need a container hostthat supports Hyper-V Nested virtualization enabled,which is not mandatory for Windows Server containers. Create Windows Server 2016 TP5 on Azure Azure VMs can be created using a variety of options such as Management Portal (https://manage.windowsazure.com), PowerShell,or Azure CLI. We will be using the new Azure Management Portal (codename Ibiza) for creating an Azure VM. Please follow these steps to create a new Windows Server 2016 Containers Virtual Machine: Login to the Azure Management portal, https://portal.azure.com. Click on +New and select Virtual Machines. Click on See All to get a full list of Virtual Machines. Search using text Windows Server 2016 (with double quotes for full phrase search). You will find three main flavors of Windows Server 2016, as shown in the following screenshot: Click Windows Server 2016with Containers Technical Preview 5 and then click Create on the new blade. Select Resource Manager as thedeployment model. Fill in the parameters as follows: Basic settings:    Name: The name of the Virtual Machine or the host name.    VM Disk Type:SSD (Solid State Disk).    User name: Administrator account username of the machine. This will be used while remotely connecting to the machine using RDP.  
  Password: Administrator password.    Confirm Password: Repeat administrator password.    Subscription: Select the Azure Subscription to be used to create the machine.    Resource Group: Resource group is a group name for logically grouping resources of a single project. You can use an existing resource group or create a new one.    Location: The geographical location of the Azure Data Center. Choose the nearest available to you. Click Ok. Select Size and click on DS3_V2. For this exercise,we will be using DS3_V2 Standard, which comes with four cores and 14 GB memory. SelectDS3_V2 Standard and click Select: Click on Settings. The Settings section will have most of the parameters pre-filled with defaults. You can leave the defaults unless you want to change anything and then click OK. Check a quick summary of the selections made using the summary tab and click OK. Azure starts creating the VM. The progress will be shown on a new tile added to the dashboard screen,as shown in the following screenshot: Azure might take a couple of minutes or more to create the Virtual Machine and configure extensions. You can check the status on the newly created dashboard tile. Installing Base OS Images and Verifying Installation The following steps will explain how to connect to an Azure VM and verify if the machine is ready for windows server container development: Once the VM is created and running, the status of the VM on the tile will be shown as Running: We can connect to Azure VMs primarily in two ways using RDP and Remote PowerShell. In this sample, we will be using RDP to connect to the Azure VM. Select the tile and click on the Connect icon. This downloads a remote desktop client to your local machine, as shown in the following screenshot: Note: If you are using any browser other than IE 10 or Edge,please check for the .rdp file in the respective downloads folder. Double-click the.rdp file and click Connect to connect to the VM. Enter the Username and Password used while creating the VM to login to the VM. Ignore the Security Certificate Warning and click on Yes. Ensure that the containers feature is installed by running the following command. This should show the Installed State of containers,as shown in the following screenshot: Get-WindowsFeature -Name Containers Ensure that thedocker client and engine is installed using the following commands on PowerShell CLI and verify the version: Docker version Docker Info provides the current state of thedocker daemon,such as the Number of Containers running, paused or stopped, the number of images, and so on, as shown in the following screenshot: Summary In this article, we have learnt how to create and configure windows server container and Hyper-V environment on Windows 10 or Windows Server 2016. Windows 10 professional or Enterprise with Anniversary update can be used to run Hyper-V Containers only. Resources for Article:  Further resources on this subject: Understanding Container Scenarios and Overview of Docker [article] HTML5: Generic Containers [article] OpenVZ Container Administration [article]

Pros and Cons of the Serverless Model

Packt
06 Apr 2017
22 min read
In this article by Diego Zanon, author of the book Building Serverless Web Applications, we give you an introduction to the Serverless model and the pros and cons that you should consider before building a serverless application. (For more resources related to this topic, see here.) Introduction to the Serverless model Serverless can be a model, a kind of architecture, a pattern or anything else you prefer to call it. For me, serverless is an adjective, a word that qualifies a way of thinking. It's a way to abstract how the code that you write will be executed. Thinking serverless is to not think in servers. You code, you test, you deploy and that's (almost) enough. Serverless is a buzzword. You still need servers to run your applications, but you should not worry about them that much. Maintaining a server is none of your business. The focus is on the development, writing code, and not in the operation. DevOps is still necessary, although with a smaller role. You need to automate deployment and have at least a minimal monitoring of how your application is operating and costing, but you won't start or stop machines to match the usage and you'll neither replace failed instances nor apply security patches to the operating system. Thinking serverless A serverless solution is entirely request-driven. Every time that a user requests some information, a trigger will notify your cloud vendor to pick your code and execute it to retrieve the answer. In contrast, a traditional solution also works to answer requests, but the code is always up and running, consuming machine resources that were reserved specifically for you, even when no one is using your website. In a serverless architecture, it's not necessary to load the entire codebase into a running machine to process a single request. For a faster loading step, only the code that is necessary to answer the request is selected to run. This small piece of the solution is referenced to as a function. So we only run functions on demand. Although we call it simply as a function, it's usually a zipped package that contains a piece of code that runs as an entry point and all the modules that this code depends to execute. What makes the Serverless model so interesting is that you are only billed for the time that was needed to execute your function, which is usually measured in fractions of seconds, not hours of use. If no one is using your service, you pay nothing. Also, if you have a sudden peak of users accessing your app, the cloud service will load different instances to handle all simultaneous requests. If one of those cloud machines fails, another one will be made available automatically, without your interference. Serverless and PaaS Serverless is often confused with Platform as a Service (PaaS). PaaS is a kind of cloud computing model that allows developers to launch applications without worrying about the infrastructure. According to this definition, they have the same objective, which is true. Serverless is like a rebranding of PaaS or you can call it as the next generation of PaaS. The main difference between PaaS and Serverless is that in PaaS you don't manage machines, but you know that they exist. You are billed by provisioning them, even if there is no user actively browsing your website. In PaaS, your code is always running and waiting for new requests. In Serverless, there is a service that is listening for requests and will trigger your code to run only when necessary. This is reflected in your bill. 
You will pay only for the fractions of seconds that your code was executed and the number of requests that were made to this listener. IaaS and on-premises Besides PaaS, Serverless is frequently compared to Infrastructure as a Service (IaaS) and the on-premises solutions to expose its advantages. IaaS is another strategy to deploy cloud solutions where you hire virtual machines and is allowed to connect to them to configure everything that you need in the guest operating system. It gives you greater flexibility, but comes along with more responsibilities. You need to apply security patches, handle occasional failures and set up new servers to handle usage peaks. Also, you pay the same per hour if you are using 5% or 100% of the machine's CPU. On-premises are the traditional kind of solution where you buy the physical computers and run them inside your company. You get total flexibility and control with this approach. Hosting your own solution can be cheaper, but it happens only when your traffic usage is extremely stable. Over or under provisioning computers is so frequent that it's hard to have real gains using this approach, also when you add the risks and costs to hire a team to manage those machines. Cloud providers may look expensive, but several detailed use cases prove that the return on investment (ROI) is larger running on the cloud than on-premises. When using the cloud, you benefit from the economy of scale of many gigantic data centers. Running by your own exposes your business to a wide range of risks and costs that you'll never be able to anticipate. The main goals of serverless To define a service as serverless, it must have at least the following features: Scale as you need: There is no under or over provisioning Highly available: Fault-tolerant and always online Cost-efficient: Never pay for idle servers Scalability Regarding IaaS, you can achieve infinite scalability with any cloud service. You just need to hire new machines as your usage grows. You can also automate the process of starting and stopping servers as your demand changes. But this is not a fast way to scale. When you start a new machine, you usually need something like 5 minutes before it can be usable to process new requests. Also, as starting and stopping machines is costly, you only do this after you are certain that you need. So, your automated process will wait for some minutes to confirm that your demand has changed before taking any action. IaaS is able to handle well-behaved usage changes, but can't handle unexpected high peaks that happen after announcements or marketing campaigns. With serverless, your scalability is measured in seconds and not minutes. Besides being scalable, it's very fast to scale. Also, it scales per request, without needing to provision capacity. When you consider a high usage frequency in a scale of minutes, IaaS suffers to satisfy the needed capacity while serverless meets even higher usages in less time. In the following figure, the left graph shows how scalability occurs with IaaS. The right graph shows how well the demand can be satisfied using a serverless solution: With an on-premises approach, this is an even bigger problem. As the usage grows, new machines must be bought and prepared. However, increasing the infrastructure requires purchase orders to be created and approved, delay waiting the new servers to arrive and time for the team to configure and test them. 
It can take weeks to grow, or even months if the company is very big and requests many steps and procedures to be filled in. Availability A highly available solution is one that is fault-tolerant to hardware failures. If one machine goes out, you can keep running with a satisfactory performance. However, if you lose an entire data center due to a power outage, you need machines in another data center to keep the service online. It generally means duplicating your entire infrastructure by placing each half in a different data center. Highly available solutions are usually very expensive in IaaS and on-premises. If you have multiple machines to handle your workload, placing them in different physical places and running a load balance service can be enough. If one data center goes out, you keep the traffic in the remaining machines and scale to compensate. However, there are cases where you pay extra without using those machines. For example, if you have a huge relational database that scaled vertically, you will end up paying for another expensive machine just as a slave to keep the availability. Even for NoSQL databases, if you set a MongoDB replica set in a consistent model, you pay for instances that will act only as secondaries, without serving to alleviate read requests. Instead of running idle machines, you can set them in a cold start state, meaning that the machine is prepared, but is off to reduce costs. However, if you run a website that sells products or services, you can lose customers even in small downtimes. Cold start for web servers can take a few minutes to recover, but needs several minutes for databases. Considering these scenarios, you get high availability for free in serverless. The cost is already considered in what you pay to use. Another aspect of availability is how to handle Distributed Denial of Service (DDoS) attacks. When you receive a huge load of requests in a very short time, how do you handle it? There are some tools and techniques that help mitigate the problem, for example, blacklisting IPs that go over a specific request rate, but before those tools start to work, you need to scale the solution, and it needs to scale really fast to prevent the availability to be compromised. In this, again, serverless has the best scaling speed. Cost efficiency It's impossible to match the traffic usage with what you have provisioned. Considering IaaS or on-premises, as a rule of thumb, CPU and RAM usage must always be lower than 90% to consider the machine healthy, and it is desirable to have a CPU using less than 20% of the capacity on normal traffic. In this case, you are paying for 80% of the waste, where the capacity is in an idle state. Paying for computer resources that you don't use is not efficient. Many cloud providers advertise that you just pay for what you use, but they usually offer significant discounts when you provision for 24 hours of usage in long term (one year or more). This means that you pay for machines that will keep running even in very low traffic hours. Also, even if you want to shut down machines in hours with very low traffic, you need to keep at least a minimum infrastructure 24/7 to keep you web server and databases always online. Regarding high availability, you need extra machines to add redundancy. Again, it's a waste of resources. Another efficiency problem is related with the databases, especially relational ones. Scaling vertically is a very troublesome task, so relational databases are always provisioned considering max peaks. 
It means that you pay for an expensive machine when most of the time you don't need one. In serverless, you shouldn't worry about provisioning or idle times. You should pay exactly for the CPU and RAM time that is used, measured in fractions of seconds and not hours. If it's a serverless database, you need to store data permanently, so this represents a cost even if no one is using your system. However, storage is usually very cheap. The higher cost, which is related with the database engine that runs queries and manipulates data, will only be billed by the amount of time used without considering idle times. Running a serverless system continuously for one hour has a much higher cost than one hour in a traditional system. However, the difference is that it will never keep one machine with 100% for one hour straight. The cost efficiency of serverless is perceived clearer in websites with varying traffic and not in the ones with flat traffic. Pros We can list the following strengths for the Serverless model: Fast scalability High availability Efficient usage of resources Reduced operational costs Focus on business, not on infrastructure System security is outsourced Continuous delivery Microservices friendly Cost model is startup friendly Let's skip the first three benefits, since they were already covered in the previous section and take a look into the others. Reduced operational costs As the infrastructure is fully managed by the cloud vendor, it reduces the operational costs since you don't need to worry about hardware failures, neither applying security patches to the operating system, nor fixing network issues. It effectively means that you need to spend less sysadmin hours to keep your application running. Also, it helps reducing risks. If you make an investment to deploy a new service and that ends up as a failure, you don't need to worry about selling machines or how to dispose of the data center that you have built. Focus on business Lean software development advocates that you must spend time in what aggregates value to the final product. In a serverless project, the focus is on business. Infrastructure is a second-class citizen. Configuring a large infrastructure is a costly and time consuming task. If you want to validate an idea through a Minimum Viable Product (MVP), without losing time to market, consider using serverless to save time. There are tools that automate the deployment, such as the Serverless Framework, which helps the developer to launch a prototype with minimum effort. If the idea fails, infrastructure costs are minimized since there are no payments in advance. System security The cloud vendor is responsible to manage the security of the operating system, runtime, physical access, networking, and all related technologies that enable the platform to operate. The developer still needs to handle authentication, authorization, and code vulnerabilities, but the rest is outsourced to the cloud provider. It's a positive feature if you consider that a large team of specialists is focused to implement the best security practices and patches new bug fixes as soon as possible to serve hundreds of their customers. That's economy of scale. Continuous delivery Serverless is based on breaking a big project into dozens of packages, each one represented by a top-level function that handles requests. 
Deploying a new version of a function means uploading a ZIP package to replacing the previous one and updating the endpoint configuration that specifies how this function can be triggered. Executing this task manually, for dozens of functions, is an exhaustive task. Automation is a must-have feature when working in a serverless project. For this task, we can use the Serverless Framework, which helps developers to manage and organize solutions, making a deployment task as simple as executing a one-line command. With automation, continuous delivery is a consequence that brings many benefits like the ability to deploy short development cycles and easier rollbacks anytime. Another related benefit when the deployment is automated is the creation of different environments. You can create a new test environment, which is an exact duplicate of the development environment, using simple commands. The ability to replicate the environment is very important to favor acceptance tests and deployment to production. Microservices friendly A microservices architecture is encouraged in a serverless project. As your functions are single units of deployment, you can have different teams working concurrently on different use cases. You can also use different programming languages in the same project and take advantage of emerging technologies or team skills. Cost model Suppose that you have built a serverless store. The average user will make some requests to see a few products and a few more requests to decide if they will buy something or not. In serverless, a single unit of code has a predictable time to execute for a given input. After collecting some data, you can predict how much a single user costs in average, and this unit cost will almost be constant as your application grows in usage. Knowing how much a single user cost and keeping this number fixed is very important for a startup. It helps to decide how much you need to charge for a service or earn through ads or sales to make a profit. In a traditional infrastructure, you need to make payments in advance, and scaling your application means increasing your capacity in steps. So, calculating the unit cost of a user is more difficult and it's a variable number. The following chart makes this difference easier to understand: Cons Serverless is great, but no technology is a silver bullet. You should be aware of the following issues: Higher latency Constraints Hidden inefficiencies Vendor dependency Debugging Atomic deploys Uncertainties We will discuss these drawbacks in detail now. Higher latency Serverless is request-driven and your code is not running all the time. When a request is made, it triggers a service that finds your function, unzips the package, loads into a container, and makes it available to be executed. The problem is that those steps takes time—up to a few hundreds of milliseconds. This issue is called cold start delay and is a trade-off that exists between serverless cost-effective model and a lower latency of traditional hosting. There are some solutions to fix this performance problem. For example, you can configure your function to reserve more RAM memory. It gives a faster start and overall performance. The programming language is also important. Java has a much higher cold start time then JavaScript (Node.js). Another solution is to benefit from the fact that the cloud provider may cache the loaded code, which means that the first execution will have a delay but further requests will benefit by a smaller latency. 
You can optimize a serverless function by aggregating a large number of functionalities into a single function. The consequence is that this package will be executed with a higher frequency and will frequently skip the cold start issue. The problem is that a big package will take more time to load and provoke a higher first start time. At a last resort, you could schedule another service to ping your functions periodically, like one time per couple of minutes, to prevent them to be put to sleep. It will add costs, but remove the cold start problem. There is also a concept of serverless databases that references services where the database is fully managed by the vendor and it charges only the storage and the time to execute the database engine. Those solutions are wonderful, but they add even more delay for your requests when compared with traditional databases. Proceed with caution when selecting those. Constraints If you go serverless, you need to know what the vendor constraints are. For example, on Amazon Web Services (AWS), you can't run a Lambda function for more than 5 minutes. It makes sense because if you're doing this, you are using it wrong. Serverless was designed to be cost efficient in short bursts. For constant and predictable processing, it will be expensive. Another constraint on AWS Lambda is the number of concurrent executions across all functions within a given region. Amazon limits this to 100. Suppose that your functions need 100 milliseconds in average to execute. In this scenario, you can handle up to 1000 users per second. The reasoning behind this restriction is to avoid excessive costs due to programming errors that may create potential runways or recursive iterations. AWS Lambda has a default limit of 100 concurrent executions. However, you can file a case into AWS Support Center to raise this limit. If you say that your application is ready for production and that you understand the risks, they will happily increase this value. When monitoring your Lambda functions using Amazon CloudWatch, there is an option called Throttles. Each invocation that exceeds the safety limit of concurrent calls is counted as one throttle. Hidden inefficiencies Some people are calling serverless as a NoOps solution. That's not true. DevOps is still necessary. You don't need to worry much about servers because they are second-class citizens and the focus is on your business. However, adding metrics and monitoring your applications will always be a good practice. It's so easy to scale that a specific function may be deployed with a poor performance that takes much more time than necessary and remains unnoticed forever because no one is monitoring the operation. Also, over or under provisioning is also possible (in a smaller sense) since you need to configure your function setting the amount of RAM memory that it will reserve and the threshold to timeout the execution. It's a very different scale of provisioning, but you need to keep it in mind to avoid mistakes. Vendor dependency When you build a serverless solution, you trust your business in a third-party vendor. You should be aware that companies fail and you can suffer downtimes, security breaches, and performance issues. Also, the vendor may change the billing model, increase costs, introduce bugs into their services, have poor documentations, modify an API forcing you to upgrade, and terminate services. A whole bunch of bad things may happen. 
What you need to weigh is whether it's worth trusting in another company or making a big investment to build all by yourself. You can mitigate these problems by doing a market search before selecting a vendor. However, you still need to count on luck. For example, Parse was a vendor that offered managed services with really nice features. It was bought by Facebook in 2013, which gave more reliability due to the fact that it was backed by a big company. Unfortunately, Facebook decided to shutdown all servers in 2016 giving one year of notice for customers to migrate to other vendors. Vendor lock-in is another big issue. When you use cloud services, it's very likely that one specific service has a completely different implementation than another vendor, making those two different APIs. You need to rewrite code in case you decide to migrate. It's already a common problem. If you use a managed service to send e-mails, you need to rewrite part of your code before migrating to another vendor. What raises a red flag here is that a serverless solution is entirely based into one vendor and migrating the entire codebase can be much more troublesome. To mitigate this problem, some tools like the Serverless Framework are moving to include multivendor support. Currently, only AWS is supported but they expect to include Microsoft, Google, and IBM clouds in the future, without requiring code rewrites to migrate. Multivendor support represents safety for your business and gives power to competitiveness. Debugging Unit testing a serverless solution is fairly simple because any code that your functions relies on can be separated into modules and unit tested. Integration tests are a little bit more complicated because you need to be online to test with external services. You can build stubs, but you lose some fidelity in your tests. When it comes to debugging to test a feature or fix an error, it's a whole different problem. You can't hook into an external service and make slow processing steps to see how your code behaves. Also, those serverless APIs are not open source, so you can't run them in-house for testing. All you have is the ability to log steps, which is a slow debugging approach, or you can extract the code and adapt it to host into your own servers and make local calls. Atomic deploys Deploying a new version of a serverless function is easy. You update the code and the next time that a trigger requests this function, your newly deployed code will be selected to run. This means that, for a brief moment, two instances of the same function can be executed concurrently with different implementations. Usually, that's not a problem, but when you deal with persistent storage and databases, you should be aware that a new piece of code can insert data into a format that an old version can't understand. Also, if you want to deploy a function that relies on a new implementation of another function, you need to be careful in the order that you deploy those functions. Ordering is often not secured by the tools that automate the deployment process. The problem here is that current serverless implementations consider that deployment is an atomic process for each function. You can't batch deploy a group of functions atomically. You can mitigate this issue by disabling the event source while you deploy a specific group, but that means introducing a downtime into the deployment process, or you can use a monolith approach instead of a microservices architecture for serverless applications. 
Uncertainties Serverless is still a pretty new concept. Early adopters are braving this field testing what works and which kind patterns and technologies can be used. Emerging tools are defining the development process. Vendors are releasing and improving new services. There are high expectations for the future, but the future hasn't come yet. Some uncertainties still worry developers when it comes to build large applications. Being a pioneer can be rewarding, but risky. Technical debt is a concept that compares software development with finances. The easiest solution in the short run is not always the best overall solution. When you take a bad decision in the beginning, you pay later with extra hours to fix it. Software is not perfect. Every single architecture has pros and cons that append technical debt in the long run. The question is: how much technical debt does serverless aggregate to the software development process? Is it more, less, or equivalent to the kind of architecture that you are using today? Summary In this article, you learned about the Serverless model and how it is different from other traditional approaches. You already know what the main benefits and advantages are that it may offer for your next application. Also, you are aware that no technology is a silver bullet. You know what kind of problems you may have with serverless and how mitigate some of them. Resources for article Refer to Building Serverless Architectures, available at https://www.packtpub.com/application-development/building-serverless-architectures for further resources on this subject. Resources for Article: Further resources on this subject: Deployment and DevOps [article] Customizing Xtext Components [article] Heads up to MvvmCross [article]

Integrating with Messages App

Packt
06 Apr 2017
17 min read
In this article by Hossam Ghareeb, the author of the book, iOS Programming Cookbook, we will cover the recipe Integrating iMessage app with iMessage app. (For more resources related to this topic, see here.) Integrating iMessage app with iMessage app Using iMessage apps will let users use your apps seamlessly from iMessage without having to leave the iMessage. Your app can share content in the conversation, make payment, or do any specific job that seems important or is appropriate to do within a Messages app. Getting ready Similar to the Stickers app we created earlier, you need Xcode 8.0 or later version to create an iMessage app extension and you can test it easily in the iOS simulator. The app that we are going to build is a Google drive picker app. It will be used from an iMessage extension to send a file to your friends just from Google Drive. Before starting, ensure that you follow the instructions in Google Drive API for iOS from https://developers.google.com/drive/ios/quickstart to get a client key to be used in our app. Installing the SDK in Xcode will be done via CocoaPods. To get more information about CocoaPods and how to use it to manage dependencies, visit https://cocoapods.org/ . How to do it… We Open Xcode and create a new iMessage app, as shown, and name itFiles Picker:   Now, let's install Google Drive SDK in iOS using CocoaPods. Open terminal and navigate to the directory that contains your Xcode project by running this command: cd path_to_directory Run the following command to create a Pod file to write your dependencies: Pod init It will create a Pod file for you. Open it via TextEdit and edit it to be like this: use_frameworks! target 'PDFPicker' do end target 'MessagesExtension' do pod 'GoogleAPIClient/Drive', '~> 1.0.2' pod 'GTMOAuth2', '~> 1.1.0' end Then, close the Xcode app completely and run the pod install command to install the SDK for you. A new workspace will be created. Open it instead of the Xcode project itself. Prepare the client key from the Google drive app you created as we mentioned in the Getting ready section, because we are going to use it in the Xcode project. Open MessagesViewController.swift and add the following import statements: import GoogleAPIClient import GTMOAuth2 Add the following private variables just below the class declaration and embed your client key in the kClientID constant, as shown: private let kKeychainItemName = "Drive API" private let kClientID = "Client_Key_Goes_HERE" private let scopes = [kGTLAuthScopeDrive] private let service = GTLServiceDrive() Add the following code in your class to request authentication to Google drive if it's not authenticated and load file info: override func viewDidLoad() { super.viewDidLoad() // Do any additional setup after loading the view. 
if let auth = GTMOAuth2ViewControllerTouch.authForGoogleFromKeychain(forName: kKeychainItemName, clientID: kClientID, clientSecret: nil) { service.authorizer = auth } } // When the view appears, ensure that the Drive API service is authorized // and perform API calls override func viewDidAppear(_ animated: Bool) { if let authorizer = service.authorizer, canAuth = authorizer.canAuthorize where canAuth { fetchFiles() } else { present(createAuthController(), animated: true, completion: nil) } } // Construct a query to get names and IDs of 10 files using the Google Drive API func fetchFiles() { print("Getting files...") if let query = GTLQueryDrive.queryForFilesList(){ query.fields = "nextPageToken, files(id, name, webViewLink, webContentLink, fileExtension)" service.executeQuery(query, delegate: self, didFinish: #selector(MessagesViewController.displayResultWithTicket(ticket:finishedWit hObject:error:))) } } // Parse results and display func displayResultWithTicket(ticket : GTLServiceTicket, finishedWithObject response : GTLDriveFileList, if let error = error { showAlert(title: "Error", message: error.localizedDescription) return } var filesString = "" let files = response.files as! [GTLDriveFile] if !files.isEmpty{ filesString += "Files:n" for file in files{ filesString += "(file.name) ((file.identifier) ((file.webViewLink) ((file.webContentLink))n" } } else { filesString = "No files found." } print(filesString) } // Creates the auth controller for authorizing access to Drive API private func createAuthController() -> GTMOAuth2ViewControllerTouch { let scopeString = scopes.joined(separator: " ") return GTMOAuth2ViewControllerTouch( scope: scopeString, clientID: kClientID, clientSecret: nil, keychainItemName: kKeychainItemName, delegate: self, finishedSelector: #selector(MessagesViewController.viewController(vc:finishedWithAuth:error:) ) ) } // Handle completion of the authorization process, and update the Drive API // with the new credentials. func viewController(vc : UIViewController, finishedWithAuth authResult : GTMOAuth2Authentication, error : NSError?) { if let error = error { service.authorizer = nil showAlert(title: "Authentication Error", message: error.localizedDescription) return } service.authorizer = authResult dismiss(animated: true, completion: nil) fetchFiles() } // Helper for showing an alert func showAlert(title : String, message: String) { let alert = UIAlertController( title: title, message: message, preferredStyle: UIAlertControllerStyle.alert ) let ok = UIAlertAction( title: "OK", style: UIAlertActionStyle.default, handler: nil ) alert.addAction(ok) self.present(alert, animated: true, completion: nil) } The code now requests authentication, loads files, and then prints them in the debug area. Now, try to build and run, you will see the following: Click on the arrow button in the bottom right corner to maximize the screen and try to log in with any Google account you have. Once the authentication is done, you will see the files' information printed in the debug area. Now, let's add a table view that will display the files' information and once a user selects a file, we will download this file to send it as an attachment to the conversation. Now, open theMainInterface.storyboard, drag a table view from Object Library, and add the following constraints: Set the delegate and data source of the table view from interface builder by dragging while holding down the Ctrl key to theMessagesViewController. 
Then, add an outlet to the table view, as follows, to be used to refresh the table with the files:  Drag a UITabeView cell from Object Library and drop it in the table view. For Attribute Inspector, set the cell style to Basic and the identifier to cell. Now, return to MessagesViewController.swift. Add the following property to hold the current display files: private var currentFiles = [GTLDriveFile]() Edit the displayResultWithTicket function to be like this: // Parse results and display func displayResultWithTicket(ticket : GTLServiceTicket, finishedWithObject response : GTLDriveFileList, error : NSError?) { if let error = error { showAlert(title: "Error", message: error.localizedDescription) return } var filesString = "" let files = response.files as! [GTLDriveFile] self.currentFiles = files if !files.isEmpty{ filesString += "Files:n" for file in files{ filesString += "(file.name) ((file.identifier) ((file.webViewLink) ((file.webContentLink))n" } } else { filesString = "No files found." } print(filesString) self.filesTableView.reloadData() } Now, add the following method for the table view delegate and data source: // MARK: - Table View methods - func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int { return self.currentFiles.count } func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell { let cell = tableView.dequeueReusableCell(withIdentifier: "cell") let file = self.currentFiles[indexPath.row] cell?.textLabel?.text = file.name return cell! } func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) { let file = self.currentFiles[indexPath.row] // Download File here to send as attachment. if let downloadURLString = file.webContentLink{ let url = NSURL(string: downloadURLString) if let name = file.name{ let downloadedPath = (documentsPath() as NSString).appendingPathComponent("(name)") let fetcher = service.fetcherService.fetcher(with: url as! URL) let destinationURL = NSURL(fileURLWithPath: downloadedPath) as URL fetcher.destinationFileURL = destinationURL fetcher.beginFetch(completionHandler: { (data, error) in if error == nil{ self.activeConversation?.insertAttachment(destinationURL, withAlternateFilename: name, completionHandler: nil) } }) } } } private func documentsPath() -> String{ let paths = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true) return paths.first ?? "" } Now, build and run the app, and you will see the magic: select any file and the app will download and save it to the local disk and send it as an attachment to the conversation, as illustrated: How it works… We started by installing the Google Drive SDK to the Xcode project. This SDK has all the APIs that we need to manage drive files and user authentication. When you visit the Google developers' website, you will see two options to install the SDK: manually or using CocoaPods. I totally recommend using CocoaPods to manage your dependencies as it is simple and efficient. Once the SDK has been installed via CocoaPods, we added some variables to be used for the Google Drive API and the most important one is the client key. You can access this value from the project you have created in the Google Developers Console. In the viewDidLoad function, first, we check if we have an authentication saved in KeyChain, and then, we use it. We can do that by calling GTMOAuth2ViewControllerTouch.authForGoogleFromKeychain, which takes the Keychain name and client key as parameters to search for authentication. 
It's useful as it helps you remember the last authentication and there is no need to ask for user authentication again if a user has already been authenticated before. In viewDidAppear, we check if a user is already authenticated; so, in that case, we start fetching files from the drive and, if not, we display the authentication controller, which asks a user to enter his Google account credentials. To display the authentication controller, we present the authentication view controller created in the createAuthController() function. In this function, the Google Drive API provides us with the GTMOAuth2ViewControllerTouch class, which encapsulates all logic for Google account authentication for your app. You need to pass the client key for your project, keychain name to save the authentication details there, and the finished  viewController(vc : UIViewController, finishedWithAuth authResult : GTMOAuth2Authentication, error : NSError?) selector that will be called after the authentication is complete. In that function, we check for errors and if something wrong happens, we display an alert message to the user. If no error occurs, we start fetching files using the fetchFiles() function. In the fetchFiles() function, we first create a query by calling GTLQueryDrive.queryForFilesList(). The GTLQueryDrive class has all the information you need about your query, such as which fields to read, for example, name, fileExtension, and a lot of other fields that you can fetch from the Google drive. You can specify the page size if you are going to call with pagination, for example, 10 by 10 files. Once you are happy with your query, execute it by calling service.executeQuery, which takes the query and the finished selector to be called when finished. In our example, it will call the displayResultWithTicket function, which prepares the files to be displayed in the table view. Then, we call self.filesTableView.reloadData() to refresh the table view to display the list of files. In the delegate function of table view didSelectRowAt indexPath:, we first read the webContentLink property from the GTLDriveFile instance, which is a download link for the selected file. To fetch a file from the Google drive, the API provides us with GTMSessionFetcher that can fetch a file and write it directly to a device's disk locally when you pass a local path to it. To create GTMSessionFetcher, use the service.fetcherService factory class, which gives you instance to a fetcher via the file URL. Then, we create a local path to the downloaded file by appending the filename to the documents path of your app and then, pass it to fetcher via the following command: fetcher.destinationFileURL = destinationURL Once you set up everything, call fetcher.beginFetch and pass a completion handler to be executed after finishing the fetching. Once the fetching is completed successfully, you can get a reference to the current conversation so that you can insert the file to it as an attachment. To do this, just call the following function: self.activeConversation?.insertAttachment(destinationURL, withAlternateFilename: name, completionHandler: nil) There's more… Yes, there's more that you can do in the preceding example to make it fancier and more appealing to users. Check the following options to make it better: You can show a loading indicator or progress bar while a file is downloading. Checks if the file is already downloaded, and if so, there is no need to download it again. Adding pagination to request only 10 files at a time. 
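One small refinement worth noting here (a sketch based on an assumption, not part of the recipe's original code): instead of passing nil, you can supply a completion handler to insertAttachment so that insertion failures are surfaced to the user, reusing the showAlert helper defined earlier:
self.activeConversation?.insertAttachment(destinationURL, withAlternateFilename: name) { error in
    // The completion handler reports any insertion failure;
    // hop to the main queue before touching UI.
    if let error = error {
        DispatchQueue.main.async {
            self.showAlert(title: "Attachment Error", message: error.localizedDescription)
        }
    }
}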
Options to filter documents by type, such as PDF, images, or even by date. Search for a file in your drive. Showing Progress indicator As we said, one of the features that we can add in the preceding example is the ability to show a progress bar indicating the downloading progress of a file. Before starting how to show a progress bar, let's install a library that is very helpful in managing/showing HUD indicators, which is MBProgressHUD. This library is available in GitHub at https://github.com/jdg/MBProgressHUD. As we agreed before, all packages are managed via CocoaPods, so now, let's install the library via CocoaPods, as shown: Open the Podfile and update it to be as follows: use_frameworks! target 'PDFPicker' do end target 'MessagesExtension' do pod 'GoogleAPIClient/Drive', '~> 1.0.2' pod 'GTMOAuth2', '~> 1.1.0' pod 'MBProgressHUD', '~> 1.0.0' end Run the following command to install the dependencies: pod install Now, at the top of the MessagesViewController.swift file, add the following import statement to import the library: Now, let's edit the didSelectRowAtIndexPath function to be like this: func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) { let file = self.currentFiles[indexPath.row] // Download File here to send as attachment. if let downloadURLString = file.webContentLink{ let url = NSURL(string: downloadURLString) if let name = file.name{ let downloadedPath = (documentsPath() as NSString).appendingPathComponent("(name)") let fetcher = service.fetcherService.fetcher(with: url as! URL) let destinationURL = NSURL(fileURLWithPath: downloadedPath) as URL fetcher.destinationFileURL = destinationURL var progress = Progress() let hud = MBProgressHUD.showAdded(to: self.view, animated: true) hud.mode = .annularDeterminate; hud.progressObject = progress fetcher.beginFetch(completionHandler: { (data, error) in if error == nil{ hud.hide(animated: true) self.activeConversation?.insertAttachment(destinationURL, withAlternateFilename: name, completionHandler: nil) } }) fetcher.downloadProgressBlock = { (bytes, written, expected) in let p = Double(written) * 100.0 / Double(expected) print(p) progress.totalUnitCount = expected progress.completedUnitCount = written } } } } First, we create an instance of MBProgressHUD and set its type to annularDeterminate, which means to display a circular progress bar. HUD will update its progress by taking a reference to the NSProgress object. Progress has two important variables to determine the progress value, which are totalUnitCount and completedUnitCount. These two values will be set inside the progress completion block, downloadProgressBlock, in the fetcher instance. HUD will be hidden in the completion block that will be called once the download is complete. Now build and run; after authentication, when you click on a file, you will see something like this: As you can see, the progressive view is updated with the percentage of download to give the user an overview of what is going on. Request files with pagination Loading all files at once is easy from the development side, but it's incorrect from the user experience side. It will take too much time at the beginning when you get the list of all the files and it would be great if we could request only 10 files at a time with pagination. In this section, we will see how to add the pagination concept to our example and request only 10 files at a time. 
When a user scrolls to the end of the list, we will display a loading indicator, call the next page, and append the results to our current results. Implementation of pagination is pretty easy and requires only a few changes in our code. Let's see how to do it: We will start by adding the progress cell design in MainInterface.storyboard. Open the design of MessagesViewController and drag a new cell along with our default cell. Drag a UIActivityIndicatorView from ObjectLibrary and place it as a subview to the new cell. Add center constraints to center it horizontally and vertically as shown: Now, select the new cell and go to attribute inspector to add an identifier to the cell and disable the selection as illustrated: Now, from the design side, we are ready. Open MessagesViewController.swift to add some tweaks to it. Add the following two variables to the list of our current variables: private var doneFetchingFiles = false private var nextPageToken: String! The doneFetchingFiles flag will be used to hide the progress cell when we try to load the next page from Google Drive and returns an empty list. In that case, we know that we are done with the fetching files and there is no need to display the progress cell any more. The nextPageToken contains the token to be passed to the GTLQueryDrive query to ask it to load the next page. Now, go to the fetchFiles() function and update it to be as shown: func fetchFiles() { print("Getting files...") if let query = GTLQueryDrive.queryForFilesList(){ query.fields = "nextPageToken, files(id, name, webViewLink, webContentLink, fileExtension)" query.mimeType = "application/pdf" query.pageSize = 10 query.pageToken = nextPageToken service.executeQuery(query, delegate: self, didFinish: #selector(MessagesViewController.displayResultWithTicket(ticket:finishedWit hObject:error:))) } } The only difference you can note between the preceding code and the one before that is setting the pageSize and pageToken. For pageSize, we set how many files we require for each call and for pageToken, we pass the token to get the next page. We receive this token as a response from the previous page call. This means that, at the first call, we don't have a token and it will be passed as nil. Now, open the displayResultWithTicket function and update it like this: // Parse results and display func displayResultWithTicket(ticket : GTLServiceTicket, finishedWithObject response : GTLDriveFileList, error : NSError?) { if let error = error { showAlert(title: "Error", message: error.localizedDescription) return } var filesString = "" nextPageToken = response.nextPageToken let files = response.files as! [GTLDriveFile] doneFetchingFiles = files.isEmpty self.currentFiles += files if !files.isEmpty{ filesString += "Files:n" for file in files{ filesString += "(file.name) ((file.identifier) ((file.webViewLink) ((file.webContentLink))n" } } else { filesString = "No files found." } print(filesString) self.filesTableView.reloadData() } As you can see, we first get the token that is to be used to load the next page. We get it by calling response.nextPageToken and setting it to our new  nextPageToken property so that we can use it while loading the next page. The doneFetchingFiles will be true only if the current page we are loading has no files, which means that we are done. Then, we append the new files we get to the current files we have. We don't know when to fire the calling of the next page. We will do this once the user scrolls down to the refresh cell that we have. 
To do so, we will implement one of the UITableViewDelegate methods, which is willDisplayCell, as illustrated: func tableView(_ tableView: UITableView, willDisplay cell: UITableViewCell, forRowAt indexPath: IndexPath) { if !doneFetchingFiles && indexPath.row == self.currentFiles.count { // Refresh cell fetchFiles() return } } For any cell that is going to be displayed, this function will be triggered with indexPath of the cell. First, we check if we are not done with the fetching files and the row is equal to the last row, then, we fire fetchFiles() again to load the next page. As we added a new refresh cell at the bottom, we should update our UITableViewDataSource functions, such as numbersOfRowsInSection and cellForRow. Check our updated functions, shown as follows: func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int { return doneFetchingFiles ? self.currentFiles.count : self.currentFiles.count + 1 } func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell { if !doneFetchingFiles && indexPath.row == self.currentFiles.count{ return tableView.dequeueReusableCell(withIdentifier: "progressCell")! } let cell = tableView.dequeueReusableCell(withIdentifier: "cell") let file = self.currentFiles[indexPath.row] cell?.textLabel?.text = file.name return cell! } As you can see, the number of rows will be equal to the current files' count plus one for the refresh cell. If we are done with the fetching files, we will return only the number of files. Now, everything seems perfect. When you build and run, you will see only 10 files listed, as shown: And when you scroll down you would see the progress cell that and 10 more files will be called. Summary In this article, we learned how to integrate iMessage app with iMessage app. Resources for Article: Further resources on this subject: iOS Security Overview [article] Optimizing JavaScript for iOS Hybrid Apps [article] Testing our application on an iOS device [article]
article-image-supervised-learning-classification-and-regression
Packt
06 Apr 2017
30 min read

Supervised Learning: Classification and Regression

In this article by Alexey Grigorev, author of the book Mastering Java for Data Science, we will look at how to do pre-processing of data in Java and how to do Exploratory Data Analysis inside and outside Java. Now, when we covered the foundation, we are ready to start creating Machine Learning models. First, we start with Supervised Learning. In the supervised settings we have some information attached to each observation – called labels – and we want to learn from it, and predict it for observations without labels. There are two types of labels: the first are discrete and finite, such as True/False or Buy/Sell, and second are continuous, such as salary or temperature. These types correspond to two types of Supervised Learning: Classification and Regression. We will talk about them in this article. This article covers: Classification problems Regression Problems Evaluation metrics for each type Overview of available implementations in Java (For more resources related to this topic, see here.) Classification In Machine Learning, the classification problem deals with discrete targets with finite set of possible values. The Binary Classification is the most common type of classification problems: the target variable can have only two possible values, such as True/False, Relevant/Not Relevant, Duplicate/Not Duplicate, Cat/Dog and so on. Sometimes the target variable can have more than two outcomes, for example: colors, category of an item, model of a car, and so on, and we call it Multi-Class Classification. Typically each observation can only have one label, but in some settings an observation can be assigned several values. Typically, Multi-Class Classification can be converted to a set of Binary Classification problems, which is why we will mostly concentrate on Binary Classification. Binary Classification Models There are many models for solving the binary classification problem and it is not possible to cover all of them. We will briefly cover the ones that are most often used in practice. They include: Logistic Regression Support Vector Machines Decision Trees Neural Networks We assume that you are already familiar with these methods. Deep familiarity is not required, but for more information you can check the following book: An Introduction to Statistical Learning by G. James, D. Witten, T. Hastie and R. Tibshirani Python Machine Learning by S. Raschka When it comes to libraries, we will cover the following: Smile, JSAT, LIBSVM and LIBLINEAR and Encog. Smile Smile (Statistical Machine Intelligence and Learning Engine) is a library with a large set of classification and other machine learning algorithms. You can see the full list here: https://github.com/haifengl/smile. The library is available on Maven Central and the latest version at the moment of writing is 1.1.0. To include it to your project, add the following dependency: <dependency> <groupId>com.github.haifengl</groupId> <artifactId>smile-core</artifactId> <version>1.1.0</version> </dependency> It is being actively developed; new features and bug fixes are added quite often, but not released as frequently. We recommend to use the latest available version of Smile, and to get it, you will need to build it from the sources. To do it: Install sbt – a tool for building scala projects. 
You can follow this instruction: http://www.scala-sbt.org/release/docs/Manual-Installation.html Use git to clone the project from https://github.com/haifengl/smile To build and publish the library to local maven repository, run the following command: sbt core/publishM2 The Smile library consists of several sub-modules, such as smile-core, smile-nlp, and smile-plot and so on. So, after building it, add the following dependency to your pom: <dependency> <groupId>com.github.haifengl</groupId> <artifactId>smile-core</artifactId> <version>1.2.0</version> </dependency> The models from Smile expect the data to be in a form of two-dimensional arrays of doubles, and the label information is one dimensional array of integers. For binary models, the values should be 0 or 1. Some models in Smile can handle Multi-Class Classification problems, so it is possible to have more labels. The models are built using the Builder pattern: you create a special class, set some parameters and at the end it returns the object it builds. In Smile this builder class is typically called Trainer, and all models should have a trainer for them. For example, consider training a Random Forest model: double[] X = ...// training data int[] y = ...// 0 and 1 labels RandomForest model = new RandomForest.Trainer() .setNumTrees(100) .setNodeSize(4) .setSamplingRates(0.7) .setSplitRule(SplitRule.ENTROPY) .setNumRandomFeatures(3) .train(X, y); The RandomForest.Trainer class takes in a set of parameters and the training data, and at the end produces the trained Random Forest model. The implementation of Random Forest from Smile has the following parameters: numTrees: number of trees to train in the model. nodeSize: the minimum number of items in the leaf nodes. samplingRate: the ratio of training data used to grow each tree. splitRule: the impurity measure used for selecting the best split. numRandomFeatures: the number of features the model randomly chooses for selecting the best split. Similarly, a logistic regression is trained as follows: LogisticRegression lr = new LogisticRegression.Trainer() .setRegularizationFactor(lambda) .train(X, y); Once we have a model, we can use it for predicting the label of previously unseen items. For that we use the predict method: double[] row = // data int prediction = model.predict(row); This code outputs the most probable class for the given item. However, often we are more interested not in the label itself, but in the probability of having the label. If a model implements the SoftClassifier interface, then it is possible to get these probabilities like this: double[] probs = new double[2]; model.predict(row, probs); After running this code, the probs array will contain the probabilities. JSAT JSAT (Java Statistical Analysis Tool) is another Java library which contains a lot of implementations of common Machine Learning algorithms. You can check the full list of implemented models at https://github.com/EdwardRaff/JSAT/wiki/Algorithms. To include JSAT to a Java project, add this to pom: <dependency> <groupId>com.edwardraff</groupId> <artifactId>JSAT</artifactId> <version>0.0.5</version> </dependency> Unlike Smile, which just takes arrays of doubles, JSAT requires a special wrapper class for data instances. If we have an array, it is converted to the JSAT representation like this: double[][] X = ... // data int[] y = ... 
// labels // change to more classes for more classes for multi-classification CategoricalData binary = new CategoricalData(2); List<DataPointPair<Integer>> data = new ArrayList<>(X.length); for (int i = 0; i < X.length; i++) { int target = y[i]; DataPoint row = new DataPoint(new DenseVector(X[i])); data.add(new DataPointPair<Integer>(row, target)); } ClassificationDataSet dataset = new ClassificationDataSet(data, binary); Once we have prepared the dataset, we can train a model. Let us consider the Random Forest classifier again: RandomForest model = new RandomForest(); model.setFeatureSamples(4); model.setMaxForestSize(150); model.trainC(dataset); First, we set some parameters for the model, and then, at we end, we call the trainC method (which means “train a classifier”). In the JSAT implementation, Random Forest has fewer options for tuning: only the number of features to select and the number of trees to grow. There are several implementations of Logistic Regression. The usual Logistic Regression model does not have any parameters, and it is trained like this: LogisticRegression model = new LogisticRegression(); model.trainC(dataset); If we want to have a regularized model, then we need to use the LogisticRegressionDCD class (DCD stands for “Dual Coordinate Descent” - this is the optimization method used to train the logistic regression). We train it like this: LogisticRegressionDCD model = new LogisticRegressionDCD(); model.setMaxIterations(maxIterations); model.setC(C); model.trainC(fold.toJsatDataset()); In this code, C is the regularization parameter, and the smaller values of C correspond to stronger regularization effect. Finally, for outputting the probabilities, we can do the following: double[] row = // data DenseVector vector = new DenseVector(row); DataPoint point = new DataPoint(vector); CategoricalResults out = model.classify(point); double probability = out.getProb(1); The class CategoricalResults contains a lot of information, including probabilities for each class and the most likely label. LIBSVM and LIBLINEAR Next we consider two similar libraries: LIBSVM and LIBLINEAR. LIBSVM (https://www.csie.ntu.edu.tw/~cjlin/libsvm/) is a library with implementation of Support Vector Machine models, which include support vector classifiers. LIBLINEAR (https://www.csie.ntu.edu.tw/~cjlin/liblinear/) is a library for fast linear classification algorithms such as Liner SVM and Logistic Regression. Both these libraries come from the same research group and have very similar interfaces. LIBSVM is implemented in C++ and has an officially supported java version. It is available on Maven Central: <dependency> <groupId>tw.edu.ntu.csie</groupId> <artifactId>libsvm</artifactId> <version>3.17</version> </dependency> Note that the Java version of LIBSVM is updated not as often as the C++ version. Nevertheless, the version above is stable and should not contain bugs, but it might be slower than its C++ version. To use SVM models from LIBSVM, you first need to specify the parameters. For this, you create a svm_parameter class. Inside, you can specify many parameters, including: the kernel type (RBF, POLY or LINEAR), the regularization parameter C, probability, which you can set to 1 to be able to get probabilities, the svm_type should be set to C_SVC: this tells that the model should be a classifier. 
Here is an example of how you can configure an SVM classifier with the Linear kernel which can output probabilities: svm_parameter param = new svm_parameter(); param.svm_type = svm_parameter.C_SVC; param.kernel_type = svm_parameter.LINEAR; param.probability = 1; param.C = C; // default parameters param.cache_size = 100; param.eps = 1e-3; param.p = 0.1; param.shrinking = 1; The polynomial kernel is specified by the following formula: It has three additional parameters: gamma, coeff0 and degree; and also C – the regularization parameter. You can specify it like this: svm_parameter param = new svm_parameter(); param.svm_type = svm_parameter.C_SVC; param.kernel_type = svm_parameter.POLY; param.C = C; param.degree = degree; param.gamma = 1; param.coef0 = 1; param.probability = 1; // plus defaults from the above Finally, the Gaussian kernel (or RBF) has the following formula:' So there is one parameter gamma, which controls the width of the Gaussians. We can specify the model with the RBF kernel like this: svm_parameter param = new svm_parameter(); param.svm_type = svm_parameter.C_SVC; param.kernel_type = svm_parameter.RBF; param.C = C; param.gamma = gamma; param.probability = 1; // plus defaults from the above Once we created the configuration object, we need to convert the data in the right format. The LIBSVM command line application reads files in the SVMLight format, so the library also expects sparse data representation. For a single array, the conversion is following: double[] dataRow = // single row vector svm_node[] svmRow = new svm_node[dataRow.length]; for (int j = 0; j < dataRow.length; j++) { svm_node node = new svm_node(); node.index = j; node.value = dataRow[j]; svmRow[j] = node; } For a matrix, we do this for every row: double[][] X = ... // data int n = X.length; svm_node[][] nodes = new svm_node[n][]; for (int i = 0; i < n; i++) { nodes[i] = wrapAsSvmNode(X[i]); } Where wrapAsSvmNode is a function, that wraps a vector into an array of svm_node objects. Now we can put the data and the labels together into svm_problem object: double[] y = ... // labels svm_problem prob = new svm_problem(); prob.l = n; prob.x = nodes; prob.y = y; Now using the data and the parameters we can train the SVM model: svm_model model = svm.svm_train(prob, param);  Once the model is trained, we can use it for classifying unseen data. Getting probabilities is done this way: double[][] X = // test data int n = X.length; double[] results = new double[n]; double[] probs = new double[2]; for (int i = 0; i < n; i++) { svm_node[] row = wrapAsSvmNode(X[i]); svm.svm_predict_probability(model, row, probs); results[i] = probs[1]; } Since we used the param.probability = 1, we can use svm.svm_predict_probability method to predict probabilities. Like Smile, the method takes an array of doubles and writes the results there. Then we can get the probabilities there. Finally, while training, LIBSVM outputs a lot of things on the console. If we are not interested in this output, we can disable it with the following code snippet: svm.svm_set_print_string_function(s -> {}); Just add this in the beginning of your code and you will not see this anymore. The next library is LIBLINEAR, which provides very fast and high performing linear classifiers such as SVM with Linear Kernel and Logistic Regression. It can easily scale to tens and hundreds of millions of data points. Unlike LIBSVM, there is no official Java version of LIBLINEAR, but there is unofficial Java port available at http://liblinear.bwaldvogel.de/. 
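The wrapAsSvmNode helper referenced in the LIBSVM snippets above is not shown in the listing; a minimal sketch of it (an assumption derived from the single-row conversion shown earlier, not the book's exact code) could look like this:
import libsvm.svm_node;

public class SvmUtils {
    // Wraps a dense double vector into LIBSVM's svm_node representation,
    // mirroring the single-row conversion shown earlier in this section.
    public static svm_node[] wrapAsSvmNode(double[] dataRow) {
        svm_node[] svmRow = new svm_node[dataRow.length];
        for (int j = 0; j < dataRow.length; j++) {
            svm_node node = new svm_node();
            node.index = j;
            node.value = dataRow[j];
            svmRow[j] = node;
        }
        return svmRow;
    }
}
A similar wrapping step appears below for LIBLINEAR, whose unofficial Java port uses its own Feature and FeatureNode types.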
To use it, include the following: <dependency> <groupId>de.bwaldvogel</groupId> <artifactId>liblinear</artifactId> <version>1.95</version> </dependency> The interface is very similar to LIBSVM. First, you define the parameters: SolverType solverType = SolverType.L1R_LR; double C = 0.001; double eps = 0.0001; Parameter param = new Parameter(solverType, C, eps); We have to specify three parameters here: solverType: defines the model which will be used; C: the amount regularization, the smaller C the stronger the regularization; epsilon: tolerance for stopping the training process. A reasonable default is 0.0001; For classification there are the following solvers: Logistic Regression: L1R_LR or L2R_LR SVM: L1R_L2LOSS_SVC or L2R_L2LOSS_SVC According to the official FAQ (which can be found here: https://www.csie.ntu.edu.tw/~cjlin/liblinear/FAQ.html) you should: Prefer SVM to Logistic Regression as it trains faster and usually gives higher accuracy. Try L2 regularization first unless you need a sparse solution – in this case use L1 The default solver is L2-regularized support linear classifier for the dual problem. If it is slow, try solving the primal problem instead. Then you define the dataset. Like previously, let's first see how to wrap a row: double[] row = // data int m = row.length; Feature[] result = new Feature[m]; for (int i = 0; i < m; i++) { result[i] = new FeatureNode(i + 1, row[i]); } Please note that we add 1 to the index – the 0 is the bias term, so the actual features should start from 1. We can put this into a function wrapRow and then wrap the entire dataset as following: double[][] X = // data int n = X.length; Feature[][] matrix = new Feature[n][]; for (int i = 0; i < n; i++) { matrix[i] = wrapRow(X[i]); } Now we can create the Problem class with the data and labels: double[] y = // labels Problem problem = new Problem(); problem.x = wrapMatrix(X); problem.y = y; problem.n = X[0].length + 1; problem.l = X.length; Note that here we also need to provide the dimensionality of the data, and it's the number of features plus one. We need to add one because it includes the bias term. Now we are ready to train the model: Model model = LibLinear.train(fold, param); When the model is trained, we can use it to classify unseen data. In the following example we will output probabilities: double[] dataRow = // data Feature[] row = wrapRow(dataRow); Linear.predictProbability(model, row, probs); double result = probs[1]; The code above works fine for the Logistic Regression model, but it will not work for SVM: SVM cannot output probabilities, so the code above will throw an error for solvers like L1R_L2LOSS_SVC. What you can do instead is get the raw output: double[] values = new double[1]; Feature[] row = wrapRow(dataRow); Linear.predictValues(model, row, values); double result = values[0]; In this case the results will not contain probability, but some real value. When this value is greater than zero, the model predicts that the class is positive. If we would like to map this value to the [0, 1] range, we can use the sigmoid function for that: public static double[] sigmoid(double[] scores) { double[] result = new double[scores.length]; for (int i = 0; i < result.length; i++) { result[i] = 1 / (1 + Math.exp(-scores[i])); } return result; } Finally, like LIBSVM, LIBLINEAR also outputs a lot of things to standard output. 
If you do not wish to see it, you can mute it with the following code: PrintStream devNull = new PrintStream(new NullOutputStream()); Linear.setDebugOutput(devNull); Here, we use NullOutputStream from Apache IO which does nothing, so the screen stays clean. When to use LIBSVM and when to use LIBLINEAR? For large datasets often it is not possible to use any kernel methods. In this case you should prefer LIBLINEAR. Additionally, LIBLINEAR is especially good for Text Processing purposes such as Document Classification. Encog Finally, we consider a library for training Neural Networks: Encog. It is available on Maven Central and can be added with the following snippet: <dependency> <groupId>org.encog</groupId> <artifactId>encog-core</artifactId> <version>3.3.0</version> </dependency> With this library you first need to specify the network architecture: BasicNetwork network = new BasicNetwork(); network.addLayer(new BasicLayer(new ActivationSigmoid(), true, noInputNeurons)); network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 30)); network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 1)); network.getStructure().finalizeStructure(); network.reset(); Here we create a network with one input layer, one inner layer with 30 neurons and one output layer with 1 neuron. In each layer we use sigmoid as the activation function and add the bias input (the true parameter). The last line randomly initializes the weights in the network. For both input and output Encog expects two-dimensional double arrays. In case of binary classification we typically have a one dimensional array, so we need to convert it: double[][] X = // data double[] y = // labels double[][] y2d = new double[y.length][]; for (int i = 0; i < y.length; i++) { y2d[i] = new double[] { y[i] }; } Once the data is converted, we wrap it into special wrapper class: MLDataSet dataset = new BasicMLDataSet(X, y2d); Then this dataset can be used for training: MLTrain trainer = new ResilientPropagation(network, dataset); double lambda = 0.01; trainer.addStrategy(new RegularizationStrategy(lambda)); int noEpochs = 101; for (int i = 0; i < noEpochs; i++) { trainer.iteration(); } There are a lot of other Machine Learning libraries available in Java. For example Weka, H2O, JavaML and others. It is not possible to cover all of them, but you can also try them and see if you like them more than the ones we have covered. Evaluation We have covered many Machine Learning libraries, and many of them implement the same algorithms like Random Forest or Logistic Regression. Also, each individual model can have many different parameters: a Logistic Regression has the regularization coefficient, an SVM is configured by setting the kernel and its parameters. How do we select the best single model out of so many possible variants? For that we first define some evaluation metric and then select the model which achieves the best possible performance with respect to this metric. For binary classification we can select one of the following metrics: Accuracy and Error Precision, Recall and F1 AUC (AU ROC) Result Evaluation K-Fold Cross Validation Training, Validation and Testing. Accuracy Accuracy tells us for how many examples the model predicted the correct label. Calculating it is trivial: int n = actual.length; double[] proba = // predictions; double[] prediction = Arrays.stream(proba).map(p -> p > threshold ? 
1.0 : 0.0).toArray(); int correct = 0; for (int i = 0; i < n; i++) { if (actual[i] == prediction[i]) { correct++; } } double accuracy = 1.0 * correct / n; Accuracy is the simplest evaluation metric and everybody understands it. Precision, Recall and F1 In some cases accuracy is not the best measure of model performance. For example, suppose we have an unbalanced dataset: there are only 1% of examples that are positive. Then a model which always predict negative is right in 99% cases, and hence will have accuracy of 0.99. But this model is not useful. There are alternatives to accuracy that can overcome this problem. Precision and Recall are among these metrics: they both look at the fraction of positive items that the model correctly recognized. They can be calculated using the Confusion Matrix: a table which summarizes the performance of a binary classifier: Precision is the fraction of correctly predicted positive items among all items the model predicted positive. In terms of the confusion matrix, Precision is TP / (TP + FP). Recall is the fraction of correctly predicted positive items among items that are actually positive. With values from the confusion matrix, Recall is TP / (TP + FN). It is often hard to decide whether one should optimize Precision or Recall. But there is another metric which combines both Precision and Recall into one number, and it is called F1 score. For calculating Precision and Recall, we first need to calculate the values for the cells of the confusion matrix: int tp = 0, tn = 0, fp = 0, fn = 0; for (int i = 0; i < actual.length; i++) { if (actual[i] == 1.0 && proba[i] > threshold) { tp++; } else if (actual[i] == 0.0 && proba[i] <= threshold) { tn++; } else if (actual[i] == 0.0 && proba[i] > threshold) { fp++; } else if (actual[i] == 1.0 && proba[i] <= threshold) { fn++; } } Then we can use the values to calculate Precision and Recall: double precision = 1.0 * tp / (tp + fp); double recall = 1.0 * tp / (tp + fn); Finally, F1 can be calculated using the following formula: double f1 = 2 * precision * recall / (precision + recall); ROC and AU ROC (AUC) The metrics above are good for binary classifiers which produce hard output: they only tell if the class should be assigned a positive label or negative. If instead our model outputs some score such that the higher the values of the score the more likely the item is to be positive, then the binary classifier is called a ranking classifier. Most of the models can output probabilities of belonging to a certain class, and we can use it to rank examples such that the positive are likely to come first. The ROC Curve visually tells us how good a ranking classifier separates positive examples from negative ones. The way a ROC curve is build is the following: We sort the observations by their score and then starting from the origin we go up if the observation is positive and right if it is negative. This way, in the ideal case, we first always go up, and then always go right – and this will result in the best possible ROC curve. In this case we can say that the separation between positive and negative examples is perfect. If the separation is not perfect, but still OK, the curve will go up for positive examples, but sometimes will turn right when a misclassification occurred. Finally, a bad classifier will not be able to tell positive and negative examples apart and the curve would alternate between going up and right. 
The diagonal line on the plot represents the baseline – the performance that a random classifier would achieve. The further away the curve from the baseline, the better. Unfortunately, there is no available easy-to-use implementation of ROC curves in Java. So the algorithm for drawing a ROC curve is the following: Let POS be number of positive labels, and NEG be the number of negative labels Order data by the score, decreasing Start from (0, 0) For each example in the sorted order, if the example is positive, move 1 / POS up in the graph, otherwise, move 1 / NEG right in the graph. This is a simplified algorithm and assumes that the scores are distinct. If the scores aren't distinct, and there are different actual labels for the same score, some adjustment needs to be made. It is implemented in the class RocCurve which you will find in the source code. You can use it as following: RocCurve.plot(actual, prediction); Calling it will create a plot similar to this one: The area under the curve says how good the separation is. If the separation is very good, then the area will be close to one. But if the classifier cannot distinguish between positive and negative examples, the curve will go around the random baseline curve, and the area will be close to 0.5. Area Under the Curve is often abbreviated as AUC, or, sometimes, AU ROC – to emphasize that the Curve is a ROC Curve. AUC has a very nice interpretation: the value of AUC corresponds to probability that a randomly selected positive example is scored higher than a randomly selected negative example. Naturally, if this probability is high, our classifier does a good job separating positive and negative examples. This makes AUC a to-go evaluation metric for many cases, especially when the dataset is unbalanced – in the sense that there are a lot more examples of one class than another. Luckily, there are implementations of AUC in Java. For example, it is implemented in Smile. You can use it like this: double[] predicted = ... // int[] truth = ... // double auc = AUC.measure(truth, predicted); Result Validation When learning from data there is always the danger of overfitting. Overfitting occurs when the model starts learning the noise in the data instead of detecting useful patterns. It is always important to check if a model overfits – otherwise it will not be useful when applied to unseen data. The typical and most practical way of checking whether a model overfits or not is to emulate “unseen data” – that is, take a part of the available labeled data and do not use it for training. This technique is called “hold out”: we hold out a part of the data and use it only for evaluation. Often we shuffle the original data set before splitting. In many cases we make a simplifying assumption that the order of data is not important – that is, one observation has no influence on another. In this case shuffling the data prior to splitting will remove effects that the order of items might have. On the other hand, if the data is a Time Series data, then shuffling it is not a good idea, because there is some dependence between observations. So, let us implement the hold out split. We assume that the data that we have is already represented by X – a two-dimensional array of doubles with features and y – a one-dimensional array of labels. 
First, let us create a helper class for holding the data: public class Dataset { private final double[][] X; private final double[] y; // constructor and getters are omitted } Splitting our dataset should produce two datasets, so let us create a class for that as well: public class Split { private final Dataset train; private final Dataset test; // constructor and getters are omitted } Now suppose we want to split the data into two parts: train and test. We also want to specify the size of the train set, we will do it using a testRatio parameter: the percentage of items that should go to the test set. So the first thing we do is generating an array with indexes and then splitting it according to testRatio: int[] indexes = IntStream.range(0, dataset.length()).toArray(); int trainSize = (int) (indexes.length * (1 - testRatio)); int[] trainIndex = Arrays.copyOfRange(indexes, 0, trainSize); int[] testIndex = Arrays.copyOfRange(indexes, trainSize, indexes.length); We can also shuffle the indexes if we want: Random rnd = new Random(seed); for (int i = indexes.length - 1; i > 0; i--) { int index = rnd.nextInt(i + 1); int tmp = indexes[index]; indexes[index] = indexes[i]; indexes[i] = tmp; } Then we can select instances for the training set as follows: int trainSize = trainIndex.length; double[][] trainX = new double[trainSize][]; double[] trainY = new double[trainSize]; for (int i = 0; i < trainSize; i++) { int idx = trainIndex[i]; trainX[i] = X[idx]; trainY[i] = y[idx]; } And then finally wrap it into our Dataset class: Dataset train = new Dataset(trainX, trainY); If we repeat the same for the test set, we can put both train and test sets into a Split object: Split split = new Split(train, test); And now we can use train fold for training and test fold for testing the models. If we put all the code above into a function of the Dataset class, for example, trainTestSplit, we can use it as follows: Split split = dataset.trainTestSplit(0.2); Dataset train = split.getTrain(); // train the model using train.getX() and train.getY() Dataset test = split.getTest(); // test the model using test.getX(); test.getY(); K-Fold Cross Validation Holding out only one part of the data may not always be the best option. What we can do instead is splitting it into K parts and then testing the models only on 1/Kth of the data. This is called K-Fold Cross-Validation: it not only gives the performance estimation, but also the possible spread of the error. Typically we are interested in models which give good and consistent performance. K-Fold Cross-Validation helps us to select such models. Then we prepare the data for K-Fold Cross-Validation is the following: First, split the data into K parts Then for each of these parts Take one part as the validation set Take the remaining K-1 parts as the training set If we translate this into Java, the first step will look like this: int[] indexes = IntStream.range(0, dataset.length()).toArray(); int[][] foldIndexes = new int[k][]; int step = indexes.length / k; int beginIndex = 0; for (int i = 0; i < k - 1; i++) { foldIndexes[i] = Arrays.copyOfRange(indexes, beginIndex, beginIndex + step); beginIndex = beginIndex + step; } foldIndexes[k - 1] = Arrays.copyOfRange(indexes, beginIndex, indexes.length); This creates an array of indexes for each fold. You can also shuffle the indexes array as previously. 
Now we can create splits from each fold: List<Split> result = new ArrayList<>(); for (int i = 0; i < k; i++) { int[] testIdx = folds[i]; int[] trainIdx = combineTrainFolds(folds, indexes.length, i); result.add(Split.fromIndexes(dataset, trainIdx, testIdx)); } In the code above we have two additional methods: combineTrainFolds K-1 arrays with indexes and combines them into one Split.fromIndexes creates a split gives train and test indexes. We have already covered the second function when we created a simple hold-out test set. And the first function, combineTrainFolds, is implemented like this: private static int[] combineTrainFolds(int[][] folds, int totalSize, int excludeIndex) { int size = totalSize - folds[excludeIndex].length; int result[] = new int[size]; int start = 0; for (int i = 0; i < folds.length; i++) { if (i == excludeIndex) { continue; } int[] fold = folds[i]; System.arraycopy(fold, 0, result, start, fold.length); start = start + fold.length; } return result; } Again, we can put the code above into a function of the Dataset class and call it like follows: List<Split> folds = train.kfold(3); Now when we have a list of Split objects, we can create a special function for performing Cross-Validation: public static DescriptiveStatistics crossValidate(List<Split> folds, Function<Dataset, Model> trainer) { double[] aucs = folds.parallelStream().mapToDouble(fold -> { Dataset foldTrain = fold.getTrain(); Dataset foldValidation = fold.getTest(); Model model = trainer.apply(foldTrain); return auc(model, foldValidation); }).toArray(); return new DescriptiveStatistics(aucs); } What this function does takes a list of folds and a callback which inside creates a model. Then, after the model is trained, we calculate AUC for it. Additionally, we take advantage of Java's ability to parallelize loops and train models on each fold at the same time. Finally, we put the AUCs calculated on each fold into a DescriptiveStatistics object, which can later on be used to return the mean and the standard deviation of the AUCs. As you probably remember, the DescriptiveStatistics class comes from the Apache Commons Math library. Let us consider an example. Suppose we want to use Logistic Regression from LIBLINEAR and select the best value for the regularization parameter C. We can use the function above this way: double[] Cs = { 0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0 }; for (double C : Cs) { DescriptiveStatistics summary = crossValidate(folds, fold -> { Parameter param = new Parameter(SolverType.L1R_LR, C, 0.0001); return LibLinear.train(fold, param); }); double mean = summary.getMean(); double std = summary.getStandardDeviation(); System.out.printf("L1 logreg C=%7.3f, auc=%.4f ± %.4f%n", C, mean, std); } Here, LibLinear.train is a helper method which takes a Dataset object and a Parameter object and then trains a LIBLINEAR model. This will print AUC for all provided values of C, so you can see which one is the best, and pick the one with highest mean AUC. Training, Validation and Testing When doing the Cross-Validation there’s still a danger of overfitting. Since we try a lot of different experiments on the same validation set, we might accidentally pick the model which just happened to do well on the validation set – but it may later on fail to generalize to unseen data. The solution to this problem is to hold out a test set at the very beginning and do not touch it at all until we select what we think is the best model. And we use it only for evaluating the final model on it. 
So how do we select the best model? What we can do is to do Cross-Validation on the remaining train data. It can be either hold out or K-Fold Cross-Validation. In general you should prefer doing K-Fold Cross-Validation because it also gives you the spread of performance, and you may use it in for model selection as well. The typical workflow should be the following: (0) Select some metric for validation, e.g. accuracy or AUC. (1) Split all the data into train and test sets (2) Split the training data further and hold out a validation dataset or split it into K folds (3) Use the validation data for model selection and parameter optimization (4) Select the best model according to the validation set and evaluate it against the hold out test set It is important to avoid looking at the test set too often. It should be used only occasionally for final evaluation to make sure the selected model does not overfit. If the validation scheme is set up properly, the validation score should correspond to the final test score. If this happens, we can be sure that the model does not overfit and is able to generalize to unseen data. Using the classes and the code we created previously, it translates to the following Java code: Dataset data = new Dataset(X, y); Dataset train = split.getTrain(); List<Split> folds = train.kfold(3); // now use crossValidate(folds, ...) to select the best model Dataset test = split.getTest(); // do final evaluation of the best model on test With this information we are ready to do a project on Binary Classification. Summary In this article we spoke about supervised machine learning and about two common supervised problems: Classification and Regression. We also covered the libraries which are commonly used algorithms , implemented and how to evaluate the performance of these algorithms. There is another family of Machine Learning algorithms that do not require the label information: these methods are called Unsupervised Learning. Resources for Article: Further resources on this subject: Supervised Machine Learning [article] Clustering and Other Unsupervised Learning Methods [article] Data Clustering [article]

article-image-building-components-using-angular
Packt
06 Apr 2017
11 min read

Building Components Using Angular

In this article by Shravan Kumar Kasagoni, the author of the book Angular UI Development, we will learn how to use the new features of the Angular framework to build web components. After going through this article you will understand the following: What web components are How to set up a project for Angular application development Data binding in Angular (For more resources related to this topic, see here.) Web components In today's web world, if we need to use any of the UI components provided by libraries such as jQuery UI or the YUI library, we have to write a lot of imperative JavaScript code; we can't use them simply in a declarative fashion like HTML markup. There are fundamental problems with this approach: There is no way to define custom HTML elements and use them in a declarative fashion. The JavaScript and CSS code inside UI components can accidentally modify other parts of our web pages, and our code can also accidentally modify the UI components, which is unintended. There is no standard way to encapsulate code inside these UI components. Web Components provide a solution to all these problems. Web Components are a set of specifications for building reusable UI components. The Web Components specification is comprised of four parts: Templates: Allow us to declare fragments of HTML that can be cloned and inserted in the document by script Shadow DOM: Solves the DOM tree encapsulation problem Custom elements: Allow us to define custom HTML tags for UI components HTML imports: Allow us to add UI components to a web page using an import statement More information on web components can be found at: https://www.w3.org/TR/components-intro/. Components are the fundamental building blocks of any Angular application, and components in Angular are built on top of the web components specification. The web components specification is still under development and might change in the future, and not all browsers support it. But Angular provides a very high level of abstraction, so we don't need to deal with the multiple technologies behind web components. Even if the specification changes, Angular can take care of it internally; it provides a much simpler API for writing web components. Getting started with Angular We know Angular is completely re-written from scratch, so everything is new in Angular. In this article we will discuss a few important features such as data binding, the new templating syntax, and built-in directives. We are going to use a practical approach to learn these new features. In the next section we will look at a partially implemented Angular application, and we will incrementally use Angular's new features to complete it. Follow the instructions in the next section to set up the sample application. Project Setup Here is a sample application with the required Angular configuration and some sample code. Application Structure Create the directory structure and files as mentioned below, and copy the code into the files from the next section. Source Code package.json We are going to use npm as our package manager to download the libraries and packages required for our application development. Copy the following code to the package.json file.
{ "name": "display-data", "version": "1.0.0", "description": "", "main": "index.js", "scripts": { "tsc": "tsc", "tsc:w": "tsc -w", "lite": "lite-server", "start": "concurrent "npm run tsc:w" "npm run lite" " }, "author": "Shravan", "license": "ISC", "dependencies": { "angular2": "^2.0.0-beta.1", "es6-promise": "^3.0.2", "es6-shim": "^0.33.13", "reflect-metadata": "^0.1.2", "rxjs": "^5.0.0-beta.0", "systemjs": "^0.19.14", "zone.js": "^0.5.10" }, "devDependencies": { "concurrently": "^1.0.0", "lite-server": "^1.3.2", "typescript": "^1.7.5" } } The package.json file holds metadata for npm, in the preceding code snippet there are two important sections: dependencies: It holds all the packages required for an application to run devDependencies: It holds all the packages required only for development Once we add the preceding package.json file to our project we should run the following command at the root of our application. $ npm install The preceding command will create node_modules directory in the root of project and downloads all the packages mentioned in dependencies, devDependencies sections into node_modules directory. There is one more important section, that is scripts. We will discuss about scripts section, when we are ready to run our application. tsconfig.json Copy the below code to tsconfig.json file. { "compilerOptions": { "target": "es5", "module": "system", "moduleResolution": "node", "sourceMap": true, "emitDecoratorMetadata": true, "experimentalDecorators": true, "removeComments": false, "noImplicitAny": false }, "exclude": [ "node_modules" ] } We are going to use TypeScript for developing our Angular applications. The tsconfig.json file is the configuration file for TypeScript compiler. Options specified in this file are used while transpiling our code into JavaScript. This is totally optional, if we don't use it TypeScript compiler use are all default flags during compilation. But this is the best way to pass the flags to TypeScript compiler. Following is the expiation for each flag specified in tsconfig.json: target: Specifies ECMAScript target version: 'ES3' (default), 'ES5', or 'ES6' module: Specifies module code generation: 'commonjs', 'amd', 'system', 'umd' or 'es6' moduleResolution: Specifies module resolution strategy: 'node' (Node.js) or 'classic' (TypeScript pre-1.6) sourceMap: If true generates corresponding '.map' file for .js file emitDecoratorMetadata: If true enables the output JavaScript to create the metadata for the decorators experimentalDecorators: If true enables experimental support for ES7 decorators removeComments: If true, removes comments from output JavaScript files noImplicitAny: If true raise error if we use 'any' type on expressions and declarations exclude: If specified, the compiler will not compile the TypeScript files in the containing directory and subdirectories index.html Copy the following code to index.html file. 
<!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Top 10 Fastest Cars in the World</title> <link rel="stylesheet" href="app/site.css"> <script src="node_modules/angular2/bundles/angular2-polyfills.js"></script> <script src="node_modules/systemjs/dist/system.src.js"></script> <script src="node_modules/rxjs/bundles/Rx.js"></script> <script src="node_modules/angular2/bundles/angular2.dev.js"></script> <script> System.config({ transpiler: 'typescript', typescriptOptions: {emitDecoratorMetadata: true}, map: {typescript: 'node_modules/typescript/lib/typescript.js'}, packages: { 'app' : { defaultExtension: 'ts' } } }); System.import('app/boot').then(null, console.error.bind(console)); </script> </head> <body> <cars-list>Loading...</cars-list> </body> </html> This is startup page of our application, it contains required angular scripts, SystemJS configuration for module loading. Body tag contains <cars-list> tag which renders the root component of our application. However, I want to point out one specific statement: The System.import('app/boot') statement will import boot module from app package. Physically it loading boot.js file under app folder. car.ts Copy the following code to car.ts file. export interface Car { make: string; model: string; speed: number; } We are defining a car model using TypeScript interface, we are going to use this car model object in our components. app.component.ts Copy the following code to app.component.ts file. import {Component} from 'angular2/core'; @Component({ selector: 'cars-list', template: '' }) export class AppComponent { public heading = "Top 10 Fastest Cars in the World"; } Important points about AppComponent class: The AppComponent class is our application root component, it has one public property named 'heading' The AppComponent class is decorated with @Component() function with selector, template properties in its configuration object The @Component() function is imported using ES2015 module import syntax from 'angular2/core' module in Angular library We are also exporting the AppComponent class as module using export keyword Other modules in application can also import the AppComponent class using module name (app.component – file name without extension) using ES2015 module import syntax boot.ts Copy the following code to boot.ts file. import {bootstrap} from 'angular2/platform/browser' import {AppComponent} from './app.component'; bootstrap(AppComponent); In this file we are importing bootstrap() function from 'angular2/platform/browser' module and the AppComponent class from 'app.component' module. Next we are invoking bootstrap() function with the AppComponent class as parameter, this will instantiate an Angular application with the AppComponent as root component. site.css Copy the following code to site.css file. * { font-family: 'Segoe UI Light', 'Helvetica Neue', 'Segoe UI', 'Segoe'; color: rgb(51, 51, 51); } This file contains some basic styles for our application. Working with data in Angular In any typical web application, we need to display data on a HTML page and read the data from input controls on a HTML page. In Angular everything is a component, HTML page is represented as template and it is always associated with a component class. Application data lives on component's class properties. Either push values to template or pull values from template, to do this we need to bind the properties of component class to the controls on the template. This mechanism is known as data binding. 
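To make this concrete, here is a hedged sketch, not taken from the book's sample code, of how the Car interface defined earlier could back a typed cars property on the component class. The specific car values are invented purely for illustration; the template will be wired to this data in later steps.
import {Component} from 'angular2/core';
import {Car} from './car';

@Component({
  selector: 'cars-list',
  template: '<h1>{{heading}}</h1>'
})
export class AppComponent {
  public heading = "Top 10 Fastest Cars in the World";
  // application data lives on component class properties,
  // typed here with the Car interface from car.ts
  public cars: Car[] = [
    { make: 'Hennessey', model: 'Venom GT', speed: 270 },
    { make: 'Bugatti', model: 'Veyron Super Sport', speed: 268 }
  ];
}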
Data binding in angular allows us to use simple syntax to push or pull data. When we bind the properties of component class to the controls on the template, if the data on the properties changes, Angular will automatically update the template to display the latest data and vice versa. We can also control the direction of data flow (from component to template, from template to component). Displaying Data using Interpolation If we go back to our AppComponent class in sample application, we have heading property. We need to display this heading property on the template. Here is the revised AppComponent class: app/app.component.ts import {Component} from 'angular2/core'; @Component({ selector: 'cars-list', template: '<h1>{{heading}}</h1>' }) export class AppComponent { public heading = "Top 10 Fastest Cars in the World"; } In @Component() function we updated template property with expression {{heading}} surrounded by h1 tag. The double curly braces are the interpolation syntax in Angular. Any property on the class we need to display on the template, use the property name surrounded by double curly braces. Angular will automatically render the value of property on the browser screen. Let's run our application, go to command line and navigate to the root of the application structure, then run the following command. $ npm start The preceding start command is part of scripts section in package.json file. It is invoking two other commands npm run tsc:w, npm run lite. npm run tsc:w: This command is performing the following actions: It is invoking TypeScript compiler in watch mode TypeScript compiler will compile all our TypeScript files to JavaScript using configuration mentioned in tsconfig.json TypeScript compiler will not exit after the compilation is over, it will wait for changes in TypeScript files Whenever we modify any TypeScript file, on the fly compiler will compile them to JavaScript npm run lite: This command will start a lite weight Node.js web server and launches our application in browser Now we can continue to make the changes in our application. Changes are detected and browser will refresh automatically with updates. Output in the browser: Let's further extend this simple application, we are going to bind the heading property to a textbox, here is revised template: template: ` <h1>{{heading}}</h1> <input type="text" value="{{heading}}"/> ` If we notice the template it is a multiline string and it is surrounded by ` (backquote/ backtick) symbols instead of single or double quotes. The backtick (``) symbols are new multi-line string syntax in ECMAScript 2015. We don't need start our application again, as mentioned earlier it will automatically refresh the browser with updated output until we stop 'npm start' command is at command line. Output in the browser: Now textbox also displaying the same value in heading property. Let's change the value in textbox by typing something, then hit the tab button. We don't see any changes happening on the browser. But as mentioned earlier in data binding whenever we change the value of any control on the template, which is bind to a property of component class it should update the property value. Then any other controls bind to same property should also display the updated value. In browser h1 tag should also display the same text whatever we type in textbox, but it won't happen. Summary We started this article by covering introduction to web components. Next we discussed a sample application which is the foundation for this article. 
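This is expected: value="{{heading}}" only pushes data one way, from the class property to the template. As a hedged sketch that is not yet part of the book's sample, one way to get edits flowing back from the textbox into the component is to combine a property binding with an event binding. Angular also offers the [(ngModel)] two-way binding syntax as a shorthand, which may require the forms directives to be available depending on the Angular version.
@Component({
  selector: 'cars-list',
  template: `
    <h1>{{heading}}</h1>
    <!-- [value] pushes the property into the input, (input) pushes edits back -->
    <input type="text" [value]="heading" (input)="heading = $event.target.value"/>
  `
})
export class AppComponent {
  public heading = "Top 10 Fastest Cars in the World";
}
With this in place, typing in the textbox updates the heading property, and the h1 tag re-renders with the new text.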
Then we discussed how to write components using new features in Angular, such as data binding and the new templating syntax, with the help of several examples. By the end of this article, you should have a good understanding of Angular's new concepts and should be able to write basic components.
Resources for Article:
Further resources on this subject:
Get Familiar with Angular [article]
Gearing Up for Bootstrap 4 [article]
Angular's component architecture [article]

Introduction to Network Security

Packt
06 Apr 2017
18 min read
In this article by Warun Levesque, Michael McLafferty, and Arthur Salmon, the authors of the book Applied Network Security, we will be covering the following topics, which will give an introduction to network security:
Murphy's law
The definition of a hacker and the types
The hacking process
Recent events and statistics of network attacks
Security for individual versus company
Mitigation against threats
Building an assessment
This world is changing rapidly with advancing network technologies. Unfortunately, sometimes the convenience of technology can outpace its security and safety. Technologies like the Internet of Things are ushering in a new era of network communication. We also want to change the mindset of those working in the field of network security. Most current cyber security professionals practice defensive and passive security. They mostly focus on mitigation and forensic tactics to analyze the aftermath of an attack. We want to change this mindset to one of offensive security. This article will give insight into how a hacker thinks and what methods they use. Having knowledge of a hacker's tactics will give the reader a great advantage in protecting any network from attack.
(For more resources related to this topic, see here.)
Murphy's law
Network security is much like Murphy's law in the sense that, if something can go wrong, it will go wrong. To be successful at understanding and applying network security, a person must master the three Ps: persistence, patience, and passion. A cyber security professional must be persistent in their pursuit of a solution to a problem. Giving up is not an option. The answer will be there; it just may take more time than expected to find it. Having patience is also an important trait to master. When dealing with network anomalies, it is very easy to get frustrated. Taking a deep breath and keeping a cool head goes a long way in finding the correct solution to your network security problems. Finally, developing a passion for cyber security is critical to being a successful network security professional. Having that passion will drive you to learn more and evolve yourself on a daily basis to be better. Once you learn, you will improve and perhaps go on to inspire others to reach similar aspirations in cyber security.
The definition of a hacker and the types
A hacker is a person who uses computers to gain unauthorized access to data. There are many different types of hackers: white hat, grey hat, and black hat hackers. Some hackers are defined by their intention. For example, a hacker that attacks for political reasons may be known as a hacktivist. A white hat hacker has no criminal intent, and instead focuses on finding and fixing network vulnerabilities. Often companies will hire a white hat hacker to test the security of their network for vulnerabilities. A grey hat hacker is someone who may have criminal intent, but not often for personal gain. Often a grey hat will seek to expose a network vulnerability without permission from the owner of the network. A black hat hacker is purely criminal. Their sole objective is personal gain. Black hat hackers take advantage of network vulnerabilities any way they can for maximum benefit. A cyber-criminal is another type of black hat hacker, who is motivated to attack for illegal financial gain. A more basic type of hacker is known as a script kiddie. A script kiddie is a person who knows how to use basic hacking tools but doesn't understand how they work.
They often lack the knowledge to launch any kind of real attack, but can still cause problems on a poorly protected network. Hackers tools There are a range of many different hacking tools. A tool like Nmap for example, is a great tool for both reconnaissance and scanning for network vulnerabilities. Some tools are grouped together to make toolkits and frameworks, such as the Social Engineering Toolkit and Metasploit framework. The Metasploit framework is one of the most versatile and supported hacking tool frameworks available. Metasploit is built around a collection of highly effective modules, such as MSFvenom and provides access to an extensive database of exploits and vulnerabilities. There are also physical hacking tools. Devices like the Rubber Ducky and Wi-Fi Pineapple are good examples. The Rubber Ducky is a usb payload injector, that automatically injects a malicious virus into the device it's plugged into. The Wi-Fi Pineapple can act as a rogue router and be used to launch man in the middle attacks. The Wi-Fi Pineapple also has a range of modules that allow it to execute multiple attack vectors. These types of tools are known as penetration testing equipment. The hacking process There are five main phases to the hacking process: Reconnaissance: The reconnaissance phase is often the most time consuming. This phase can last days, weeks, or even months sometimes depending on the target. The objective during the reconnaissance phase is to learn as much as possible about the potential target. Scanning: In this phase the hacker will scan for vulnerabilities in the network to exploit. These scans will look for weaknesses such as, open ports, open services, outdated applications (including operating systems), and the type of equipment being used on the network. Access: In this phase the hacker will use the knowledge gained in the previous phases to gain access to sensitive data or use the network to attack other targets. The objective of this phase is to have the attacker gain some level of control over other devices on the network. Maintaining access: During this phase a hacker will look at various options, such as creating a backdoor to maintain access to devices they have compromised. By creating a backdoor, a hacker can maintain a persistent attack on a network, without fear of losing access to the devices they have gained control over. Although when a backdoor is created, it increases the chance of a hacker being discovered. Backdoors are noisy and often leave a large footprint for IDS to follow. Covering your tracks: This phase is about hiding the intrusion of the network by the hacker as to not alert any IDS that may be monitoring the network. The objective of this phase is to erase any trace that an attack occurred on the network. Recent events and statistics of network attacks The news has been full of cyber-attacks in recent years. The number and scale of attacks are increasing at an alarming rate. It is important for anyone in network security to study these attacks. Staying current with this kind of information will help in defending your network from similar attacks. Since 2015, the medical and insurance industry have been heavily targeted for cyber-attacks. On May 5th, 2015 Premera Blue Cross was attacked. This attack is said to have compromised at least 11 million customer accounts containing personal data. The attack exposed customer's names, birth dates, social security numbers, phone numbers, bank account information, mailing, and e-mail addresses. 
Another attack that was on a larger scale, was the attack on Anthem. It is estimated that 80 million personal data records were stolen from customers, employees, and even the Chief Executive Officer of Anthem. Another more infamous cyber-attack recently was the Sony hack. This hack was a little different than the Anthem and Blue Cross attacks, because it was carried out by hacktivist instead of cyber-criminals. Even though both types of hacking are criminal, the fundamental reasoning and objectives of the attacks are quite different. The objective in the Sony attack was to disrupt and embarrass the executives at Sony as well as prevent a film from being released. No financial data was targeted. Instead the hackers went after personal e-mails of top executives. The hackers then released the e-mails to the public, causing humiliation to Sony and its executives. Many apologies were issued by Sony in the following weeks of the attack. Large commercial retailers have also been a favorite target for hackers. An attack occurred against Home Depot in September 2014. That attack was on a large scale. It is estimated that over 56 million credit cards were compromised during the Home Depot attack. A similar attack but on a smaller scale was carried out against Staples in October 2014. During this attack, over 1.4 million credit card numbers were stolen. The statistics on cyber security attacks are eye opening. It is estimated by some experts that cybercrime has a worldwide cost of 110 billion dollars a year. In a given year, over 15 million Americans will have their identified stolen through cyber-attacks, it is also estimated that 1.5 million people fall victim to cybercrime every day. These statistics are rapidly increasing and will continue to do so until more people take an active interest in network security. Our defense The baseline for preventing a potential security issues typically begins with hardening the security infrastructure, including firewalls, DMZ, and physical security platform. Entrusting only valid sources or individuals with personal data and or access to that data. That also includes being compliant with all regulations that apply to a given situation or business. Being aware of the types of breaches as well as your potential vulnerabilities. Also understanding is an individual or an organization a higher risk target for attacks. The question has to be asked, does one's organization promote security? This is done both at the personal and the business level to deter cyber-attacks? After a decade of responding to incidents and helping customers recover from and increase their resilience against breaches. Organization may already have a security training and awareness (STA) program, or other training and program could have existed. As the security and threat landscape evolves organizations and individuals need to continually evaluate practices that are required and appropriate for the data they collect, transmit, retain, and destroy. Encryption of data at rest/in storage and in transit is a fundamental security requirement and the respective failure is frequently being cited as the cause for regulatory action and lawsuits. Enforce effective password management policies. Least privilege user access (LUA) is a core security strategy component, and all accounts should run with as few privileges and access levels as possible. Conduct regular security design and code reviews including penetration tests and vulnerability scans to identify and mitigate vulnerabilities. 
Require e-mail authentication on all inbound and outbound mail servers to help detect malicious e-mail including spear phishing and spoofed e-mail. Continuously monitor in real-time the security of your organization's infrastructure including collecting and analyzing all network traffic, and analyzing centralized logs (including firewall, IDS/IPS, VPN, and AV) using log management tools, as well as reviewing network statistics. Identify anomalous activity, investigate, and revise your view of anomalous activity accordingly. User training would be the biggest challenge, but is arguably the most important defense. Security for individual versus company One of the fundamental questions individuals need to ask themselves, is there a difference between them the individual and an organization? Individual security is less likely due to attack service area. However, there are tools and sites on the internet that can be utilized to detect and mitigate data breaches for both.https://haveibeenpwned.com/ or http://map.norsecorp.com/ are good sites to start with. The issue is that individuals believe they are not a target because there is little to gain from attacking individuals, but in truth everyone has the ability to become a target. Wi-Fi vulnerabilities Protecting wireless networks can be very challenging at times. There are many vulnerabilities that a hacker can exploit to compromise a wireless network. One of the basic Wi-Fi vulnerabilities is broadcasting the Service Set Identifier (SSID) of your wireless network. Broadcasting the SSID makes the wireless network easier to find and target. Another vulnerability in Wi-Fi networks is using Media Access Control (MAC) addressesfor network authentication. A hacker can easily spoof or mimic a trusted MAC address to gain access to the network. Using weak encryption such as Wired Equivalent Privacy (WEP) will make your network an easy target for attack. There are many hacking tools available to crack any WEP key in under five minutes. A major physical vulnerability in wireless networks are the Access Points (AP). Sometimes APs will be placed in poor locations that can be easily accessed by a hacker. A hacker may install what is called a rogue AP. This rogue AP will monitor the network for data that a hacker can use to escalate their attack. Often this tactic is used to harvest the credentials of high ranking management personnel, to gain access to encrypted databases that contain the personal/financial data of employees and customers or both. Peer-to-peer technology can also be a vulnerability for wireless networks. A hacker may gain access to a wireless network by using a legitimate user as an accepted entry point. Not using and enforcing security policies is also a major vulnerability found in wireless networks. Using security tools like Active Directory (deployed properly) will make it harder for a hacker to gain access to a network. Hackers will often go after low hanging fruit (easy targets), so having at least some deterrence will go a long way in protecting your wireless network. Using Intrusion Detection Systems (IDS) in combination with Active Directory will immensely increase the defense of any wireless network. Although the most effective factor is, having a well-trained and informed cyber security professional watching over the network. The more a cyber security professional (threat hunter) understands the tactics of a hacker, the more effective that Threat hunter will become in discovering and neutralizing a network attack. 
Although there are many challenges in protecting a wireless network, with the proper planning and deployment those challenges can be overcome.
Knowns and unknowns
The toughest thing about unknown risks to security is that they are unknown. Unless they are found, they can stay hidden. A common practice for determining an unknown risk is to identify all the known risks and attempt to mitigate them as best as possible. There are many sites available that can assist in this venture. The most helpful are reports from CVE sites that identify vulnerabilities.
False positives
          Positive                        Negative
True      TP: correctly identified        TN: correctly rejected
False     FP: incorrectly identified      FN: incorrectly rejected
As it relates to detection, there are four situations that can exist for an analyzed event, corresponding to the relation between the result of the detection and the true nature of the event. Each of the situations in the preceding table is outlined as follows:
True positive (TP): The analyzed event is correctly classified as an intrusion or as harmful/malicious. For example, a network security administrator enters their credentials into the Active Directory server and is granted administrator access.
True negative (TN): The analyzed event is correctly classified and correctly rejected. For example, an attacker uses a port like 4444 to communicate with a victim's device. An intrusion detection system detects network traffic on the unauthorized port and alerts the cyber security team to this potentially malicious activity. The cyber security team quickly closes the port and isolates the infected device from the network.
False positive (FP): The analyzed event is innocuous, or otherwise clean from a security perspective, but the system classifies it as malicious or harmful. For example, a user types their password into a website's login text field. Instead of being granted access, the user is flagged for an SQL injection attempt by input sanitation. This is often caused by misconfigured input sanitation.
False negative (FN): The analyzed event is malicious but is classified as normal/innocuous. For example, an attacker inputs an SQL injection string into a text field found on a website to gain unauthorized access to database information. The website accepts the SQL injection as normal user behavior and grants access to the attacker.
As it relates to detection, having systems correctly identify these situations is paramount.
Mitigation against threats
There are many threats that a network faces, and new network threats are emerging all the time. As a network security professional, it would be wise to have a good understanding of effective mitigation techniques. For example, a hacker using a packet sniffer can be mitigated by only allowing the network admin to run a network analyzer (packet sniffer) on the network. A packet sniffer can usually detect another packet sniffer on the network right away, although there are ways a knowledgeable hacker can disguise a packet sniffer as another piece of software. A hacker will not usually go to such lengths unless it is a highly-secured target. It is alarming that most businesses do not properly monitor their network, or do not monitor it at all. It is important for any business to have a business continuity/disaster recovery plan. This plan is intended to allow a business to continue to operate and recover from a serious network attack.
The most common deployment of the continuity/disaster recovery plan is after a DDoS attack. A DDoS attack could potentially cost a business or organization millions of dollars is lost revenue and productivity. One of the most effective and hardest to mitigate attacks is social engineering. All the most devastating network attacks have begun with some type of social engineering attack. One good example is the hack against Snapchat on February 26th, 2016. "Last Friday, Snapchat's payroll department was targeted by an isolated e-mail phishing scam in which a scammer impersonated our Chief Executive Officer and asked for employee payroll information," Snapchat explained in a blog post. Unfortunately, the phishing e-mail wasn't recognized for what it was — a scam — and payroll information about some current and former employees was disclosed externally. Socially engineered phishing e-mails, same as the one that affected Snapchat are common attack vectors for hackers. The one difference between phishing e-mails from a few years ago, and the ones in 2016 is, the level of social engineering hackers are putting into the e-mails. The Snapchat HR phishing e-mail, indicated a high level of reconnaissance on the Chief Executive Officer of Snapchat. This reconnaissance most likely took months. This level of detail and targeting of an individual (Chief Executive Officer) is more accurately know as a spear-phishing e-mail. Spear-phishing campaigns go after one individual (fish) compared to phishing campaigns that are more general and may be sent to millions of users (fish). It is like casting a big open net into the water and seeing what comes back. The only real way to mitigate against social engineering attacks is training and building awareness among users. By properly training the users that access the network, it will create a higher level of awareness against socially engineered attacks. Building an assessment Creating a network assessment is an important aspect of network security. A network assessment will allow for a better understanding of where vulnerabilities may be found within the network. It is important to know precisely what you are doing during a network assessment. If the assessment is done wrong, you could cause great harm to the network you are trying to protect. Before you start the network assessment, you should determine the objectives of the assessment itself. Are you trying to identify if the network has any open ports that shouldn't be? Is your objective to quantify how much traffic flows through the network at any given time or a specific time? Once you decide on the objectives of the network assessment, you will then be able to choose the type of tools you will use. Network assessment tools are often known as penetration testing tools. A person who employs these tools is known as a penetration tester or pen tester. These tools are designed to find and exploit network vulnerabilities, so that they can be fixed before a real attack occurs. That is why it is important to know what you are doing when using penetration testing tools during an assessment. Sometimes network assessments require a team. It is important to have an accurate idea of the scale of the network before you pick your team. In a large enterprise network, it can be easy to become overwhelmed by tasks to complete without enough support. Once the scale of the network assessment is complete, the next step would be to ensure you have written permission and scope from management. 
All parties involved in the network assessment must be clear on what can and cannot be done to the network during the assessment. After the assessment is completed, the last step is creating a report to educate concerned parties of the findings. Providing detailed information and solutions to vulnerabilities will help keep the network up to date on defense. The report will also be able to determine if there are any viruses lying dormant, waiting for the opportune time to attack the network. Network assessments should be conducted routinely and frequently to help ensure strong networksecurity. Summary In this article we covered the fundamentals of network security. It began by explaining the importance of having network security and what should be done to secure the network. It also covered the different ways physical security can be applied. The importance of having security policies in place and wireless security was discussed. This article also spoke about wireless security policies and why they are important. Resources for Article: Further resources on this subject: API and Intent-Driven Networking [article] Deploying First Server [article] Point-to-Point Networks [article]

Systems and Logics

Packt
06 Apr 2017
19 min read
In this article by Priya Kuber, Rishi Gaurav Bhatnagar, and Vijay Varada, authors of the book Arduino for Kids, we will look at the structure and various components of code:
How code works
What code is
What a system is
How to download, save and access a file in the Arduino IDE
(For more resources related to this topic, see here.)
What is a System?
Imagine a system as a box in which a process is completed. Every system solves a larger problem, and can be broken down into smaller problems that can be solved and assembled, sort of like a Lego set! Each small process has 'logic' as the backbone of the solution. Logic can be expressed as an algorithm and implemented in code. You can design a system to arrive at solutions to a problem. Another advantage of breaking down a system into small processes is that, in case your solution fails to work, you can easily spot the source of your problem by checking whether your individual processes work.
What is Code?
Code is a simple set of written instructions, given to a specific program in a computer, to perform a desired task. Code is written in a computer language. As we all know by now, a computer is an intelligent, electronic device capable of solving logical problems with a given set of instructions. Some examples of computer languages are Python, Ruby, C, C++ and so on. Find out some more examples of languages from the internet and write them down in your notebook.
What is an Algorithm?
A logical step-by-step process, guided by the boundaries (or constraints) defined by a problem, followed to find a solution, is called an algorithm. In a more pictorial form, it can be represented as follows:
(Logic + Control = Algorithm)
(A picture depicting this equation)
What does that even mean? Let's understand what an algorithm means with the help of an example. It's your friend's birthday and you have been invited to the party (isn't this exciting already?). You decide to gift her something. Since it's a gift, let's wrap it. What would you do to wrap the gift? How would you do it?
Look at the size of the gift
Fetch the gift wrapping paper
Fetch the scissors
Fetch the tape
Then you would proceed to place the gift inside the wrapping paper. You would start folding the corners in a way that efficiently covers the gift. In the meanwhile, to make sure that your wrapping is tight, you would use scotch tape. You keep working on the wrapper till the whole gift is covered (and mind you, neatly! You don't want mommy scolding you, right?). What did you just do? You used a logical step-by-step process to solve a simple task given to you. Again, coming back to the sentence:
(Logic + Control = Algorithm)
'Logic' here is the set of instructions given to a computer to solve the problem. 'Control' is what makes sure that the computer understands all your boundaries.
Logic
Logic is the study of reasoning, and when we add it to control structures, we get algorithms. Have you ever watered the plants using a water pipe or washed a car with it? How do you think it works? The pipe guides the water from the water tap to the car. It makes sure the optimum amount of water reaches the end of the pipe. A pipe is a control structure for water in this case. We will understand more about control structures in the next topic.
How does a control structure work?
A very good example to understand how a control structure works is taken from wikiversity.
(https://en.wikiversity.org/wiki/Control_structures) A precondition is the state of a variable before entering a control structure. In the gift wrapping example, the size of the gift determines the amount of gift wrapping paper you will use. Hence, it is a condition that you need to follow to successfully finish the task. In programming terms, such condition is called precondition. Similarly, a post condition is the state of the variable after exiting the control structure. And a variable, in code, is an alphabetic character, or a set of alphabetic characters, representing or storing a number, or a value. Some examples of variables are x, y, z, a, b, c, kitten, dog, robot Let us analyze flow control by using traffic flow as a model. A vehicle is arriving at an intersection. Thus, the precondition is the vehicle is in motion. Suppose the traffic light at the intersection is red. The control structure must determine the proper course of action to assign to the vehicle. Precondition: The vehicle is in motion. Control Structure: Is the traffic light green? If so, then the vehicle may stay in motion. Is the traffic light red? If so, then the vehicle must stop. End of Control Structure: Post condition: The vehicle comes to a stop. Thus, upon exiting the control structure, the vehicle is stopped. If you wonder where you learnt to wrap the gift, you would know that you learnt it by observing other people doing a similar task through your eyes. Since our microcontroller does not have eyes, we need to teach it to have a logical thinking using Code. The series of logical steps that lead to a solution is called algorithm as we saw in the previous task. Hence, all the instructions we give to a micro controller are in the form of an algorithm. A good algorithm solves the problem in a fast and efficient way. Blocks of small algorithms form larger algorithms. But algorithm is just code! What will happen when you try to add sensors to your code? A combination of electronics and code can be called a system. Picture: (block diagram of sensors + Arduino + code written in a dialogue box ) Logic is universal. Just like there can be multiple ways to fold the wrapping paper, there can be multiple ways to solve a problem too! A micro controller takes the instructions only in certain languages. The instructions then go to a compiler that translates the code that we have written to the machine. What language does your Arduino Understand? For Arduino, we will use the language 'processing'. Quoting from processing.org, Processing is a flexible software sketchbook and a language for learning how to code within the context of the visual arts. Processing is an open source programming language and integrated development environment (IDE). Processing was originally built for designers and it was extensively used in electronics arts and visual design communities with the sole purpose of teaching the fundamentals of computer sciences in a visual context. This also served as the foundations of electronic sketchbooks. From the previous example of gift wrapping, you noticed that before you need to bring in the paper and other stationery needed, you had to see the size of the problem at hand (the gift). What is a Library? In computer language, the stationery needed to complete your task, is called "Library". A library is a collection of reusable code that a programmer can 'call' instead of writing everything again. 
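For example, calling a ready-made library in an Arduino sketch usually takes a single include line. The following is only an illustrative sketch (the choice of the standard Servo library and of pin 9 is an assumption for illustration, not something this article requires):
#include <Servo.h>   // pull in the ready-made Servo library

Servo myServo;        // an object provided by the library

void setup() {
  myServo.attach(9);  // tell the library which pin the servo is connected to
}

void loop() {
  myServo.write(90);  // move the servo to 90 degrees; the library handles the details
  delay(1000);
}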
Now imagine if you had to cut a tree, make paper, then color the paper into the beautiful wrapping paper that you used, when I asked you to wrap the gift. How tiresome would it be? (If you are inventing a new type of paper, sure, go ahead chop some wood!) So before writing a program, you make sure that you have 'called' all the right libraries. Can you search the internet and make a note of a few arduino libraries in your inventor's diary? Please remember, that libraries are also made up of code! As your next activity, we will together learn more about how a library is created. Activity: Understanding the Morse Code During the times before the two-way mobile communication, people used a one-way communication called the Morse code. The following image is the experimental setup of a Morse code. Do not worry; we will not get into how you will perform it physically, but by this example, you will understand how your Arduino will work. We will show you the bigger picture first and then dissect it systematically so that you understand what a code contains. The Morse code is made up of two components "Short" and "Long" signals. The signals could be in the form of a light pulse or sound. The following image shows how the Morse code looks like. A dot is a short signal and a dash is a long signal. Interesting, right? Try encrypting your message for your friend with this dots and dashes. For example, "Hello" would be: The image below shows how the Arduino code for Morse code will looks like. The piece of code in dots and dashes is the message SOS that I am sure you all know, is an urgent appeal for help. SOS in Morse goes: dot dot dot; dash dash dash; dot dot dot. Since this is a library, which is being created using dots and dashes, it is important that we define how the dot becomes dot, and dash becomes dash first. The following sections will take smaller sections or pieces of main code and explain you how they work. We will also introduce some interesting concepts using the same. What is a function? Functions have instructions in a single line of code, telling the values in the bracket how to act. Let us see which one is the function in our code. Can you try to guess from the following screenshot? No? Let me help you! digitalWrite() in the above code is a Function, that as you understand, 'writes' on the correct pin of the Arduino. delay is a Function that tells the controller how frequently it should send the message. The higher the delay number, the slower will be the message (Imagine it as a way to slow down your friend who speaks too fast, helping you to understand him better!) Look up the internet to find out what is the maximum number that you stuff into delay. What is a constant? A constant is an identifier with pre-defined, non-changeable values. What is an identifier you ask? An identifier is a name that labels the identity of a unique object or value. As you can see from the above piece of code, HIGH and LOW are Constants. Q: What is the opposite of Constant? Ans: Variable The above food for thought brings us to the next section. What is a variable? A variable is a symbolic name for information. In plain English, a 'teacher' can have any name; hence, the 'teacher' could be a variable. A variable is used to store a piece of information temporarily. The value of a variable changes, if any action is taken on it for example; Add, subtract, multiply etc. (Imagine how your teacher praises you when you complete your assignment on time and scolds you when you do not!) What is a Datatype? 
Datatypes are sets of data that have a pre-defined value. Now look at the first block of the example program in the following image: int, as shown in the above screenshot, is a Datatype. The following table shows some examples of Datatypes.
Datatype   Use                                                             Example
int        Describes an integer number; used to represent whole numbers   1, 2, 13, 99 etc
float      Used to represent numbers with a decimal part                  0.66, 1.73 etc
char       Represents a single character, written in single quotes        'A', 65 etc
str        Represents a string                                            "This is a good day!"
With the above definitions, can we recognize what pinMode is? Every time you have a doubt about a command or you want to learn more about it, you can always look it up on the Arduino.cc website. You could do the same for digitalWrite() as well! From the pinMode page of Arduino.cc, we can define it as a command that configures the specified pin to behave either as an input or an output. Let us now see something more interesting.
What is a control structure?
We have already seen how a control structure works. In this section, we will be more specific to our code. Now I draw your attention towards this specific block from the main example above: do you see void setup() followed by code in brackets? Similarly void loop()? These make up the basic structure of an Arduino program sketch. A structure holds the program together and helps the compiler to make sense of the commands entered. A compiler is a program that turns code understood by humans into code that is understood by machines. There are other loop and control structures, as you can see in the following screenshot. These control structures are explained next.
How do you use Control Structures?
Imagine you are teaching your friend to build a 6 cm high lego wall. You ask her to place one layer of lego bricks, and then you ask her to place another layer of lego bricks on top of the bottom layer. You ask her to repeat the process until the wall is 6 cm high. This process of repeating instructions until a desired result is achieved is called a Loop. A micro-controller is only as smart as you program it to be. Hence, we will move on to the different types of loops:
While loop: As the name suggests, it repeats a statement (or group of statements) while the given condition is true. The condition is tested before executing the loop body.
For loop: Executes a sequence of statements multiple times and abbreviates the code that manages the loop variable.
Do while loop: Like a while statement, except that it tests the condition at the end of the loop body.
Nested loop: You can use one or more loops inside any other while, for or do..while loop.
Now you were able to successfully tell your friend when to stop, but how do you control the micro-controller? Do not worry, the magic is on its way! You introduce Control statements:
Break statements: Break the flow of the loop or switch statement and transfer execution to the statement immediately following the loop or switch.
Continue statements: This statement causes the loop to skip the remainder of its body and immediately retest its condition before reiterating.
Goto statements: This transfers control to a statement which is labeled. It is not advised to use goto statements in your programs.
Quiz time: What is an infinite loop? Look it up on the internet and note it in your inventor's notebook.
The Arduino IDE
The full form of IDE is Integrated Development Environment.
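Before we move into the IDE itself, here is a small hedged sketch of the kind of code you will soon write there. It is an assumption of how the full Morse example might be simplified, not the book's exact code, and it ties together the ideas above: functions, constants, a variable, for loops, and the digitalWrite() and delay() commands.
int ledPin = 13;            // variable holding the pin number of the on-board LED

void setup() {
  pinMode(ledPin, OUTPUT);  // configure the pin as an output
}

void dot() {                // a function for the short Morse signal
  digitalWrite(ledPin, HIGH);
  delay(200);
  digitalWrite(ledPin, LOW);
  delay(200);
}

void dash() {               // a function for the long Morse signal
  digitalWrite(ledPin, HIGH);
  delay(600);
  digitalWrite(ledPin, LOW);
  delay(200);
}

void loop() {
  for (int i = 0; i < 3; i++) { dot(); }   // S
  for (int i = 0; i < 3; i++) { dash(); }  // O
  for (int i = 0; i < 3; i++) { dot(); }   // S
  delay(2000);                             // pause before repeating the message
}
The timings here are arbitrary; the proper Morse timing ratios are handled by the full example discussed earlier.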
IDE uses a Compiler to translate code in a simple language that the computer understands. Compiler is the program that reads all your code and translates your instructions to your microcontroller. In case of the Arduino IDE, it also verifies if your code is making sense to it or not. Arduino IDE is like your friend who helps you finish your homework, reviews it before you give it for submission, if there are any errors; it helps you identify them and resolve them. Why should you love the Arduino IDE? I am sure by now things look too technical. You have been introduced to SO many new terms to learn and understand. The important thing here is not to forget to have fun while learning. Understanding how the IDE works is very useful when you are trying to modify or write your own code. If you make a mistake, it would tell you which line is giving you trouble. Isn't it cool? The Arduino IDE also comes with loads of cool examples that you can plug-and-play. It also has a long list of libraries for you to access. Now let us learn how to get the library on to your computer. Ask an adult to help you with this section if you are unable to succeed. Make a note of the following answers in your inventor's notebook before downloading the IDE. Get your answers from google or ask an adult. What is an operating system?What is the name of the operating system running on your computer?What is the version of your current operating system?Is your operating system 32 bit or 64 bit?What is the name of the Arduino board that you have? Now that we did our homework, let us start playing! How to download the IDE? Let us now, go further and understand how to download something that's going to be our playground. I am sure you'd be eager to see the place you'll be working in for building new and interesting stuff! For those of you wanting to learn and do everything my themselves, open any browser and search for "Arduino IDE" followed by the name of your operating system with "32 bits" or "64 bits" as learnt in the previous section. Click to download the latest version and install! Else, the step-by-step instructions are here: Open your browser (Firefox, Chrome, Safari)(Insert image of the logos of firefox, chrome and safari) Go to www.arduino.ccas shown in the following screenshot Click on the 'Download' section of the homepage, which is the third option from your left as shown in the following screenshot. From the options, locate the name of your operating system, click on the right version (32 bits or 64 bits) Then click on 'Just Download' after the new page appears. After clicking on the desired link and saving the files, you should be able to 'double click' on the Arduino icon and install the software. If you have managed to install successfully, you should see the following screens. If not, go back to step 1 and follow the procedure again. The next screenshot shows you how the program will look like when it is loading. This is how the IDE looks when no code is written into it. Your first program Now that you have your IDE ready and open, it is time to start exploring. As promised before, the Arduino IDE comes with many examples, libraries, and helping tools to get curious minds such as you to get started soon. Let us now look at how you can access your first program via the Arduino IDE. A large number of examples can be accessed in the File > Examples option as shown in the following screenshot. Just like we all have nicknames in school, a program, written in in processing is called a 'sketch'. 
Whenever you write any program for Arduino, it is important that you save your work. Programs written in processing are saved with the extension .ino The name .ino is derived from the last 3 letters of the word ArduINO. What are the other extensions are you aware of? (Hint: .doc, .ppt etc) Make a note in your inventor's notebook. Now ask yourself why do so many extensions exist. An extension gives the computer, the address of the software which will open the file, so that when the contents are displayed, it makes sense. As we learnt above, that the program written in the Arduino IDE is called a 'Sketch'. Your first sketch is named 'blink'. What does it do? Well, it makes your Arduino blink! For now, we can concentrate on the code. Click on File | Examples | Basics | Blink. Refer to next image for this. When you load an example sketch, this is how it would look like. In the image below you will be able to identify the structure of code, recall the meaning of functions and integers from the previous section. We learnt that the Arduino IDE is a compiler too! After opening your first example, we can now learn how to hide information from the compiler. If you want to insert any extra information in plain English, you can do so by using symbols as following. /* your text here*/ OR // your text here Comments can also be individually inserted above lines of code, explaining the functions. It is good practice to write comments, as it would be useful when you are visiting back your old code to modify at a later date. Try editing the contents of the comment section by spelling out your name. The following screenshot will show you how your edited code will look like. Verifying your first sketch Now that you have your first complete sketch in the IDE, how do you confirm that your micro-controller with understand? You do this, by clicking on the easy to locate Verify button with a small . You will now see that the IDE informs you "Done compiling" as shown in the screenshot below. It means that you have successfully written and verified your first code. Congratulations, you did it! Saving your first sketch As we learnt above, it is very important to save your work. We now learn the steps to make sure that your work does not get lost. Now that you have your first code inside your IDE, click on File > SaveAs. The following screenshot will show you how to save the sketch. Give an appropriate name to your project file, and save it just like you would save your files from Paintbrush, or any other software that you use. The file will be saved in a .ino format. Accessing your first sketch Open the folder where you saved the sketch. Double click on the .ino file. The program will open in a new window of IDE. The following screen shot has been taken from a Mac Os, the file will look different in a Linux or a Windows system. Summary Now we know about systems and how logic is used to solve problems. We can write and modify simple code. We also know the basics of Arduino IDE and studied how to verify, save and access your program. Resources for Article:  Further resources on this subject: Getting Started with Arduino [article] Connecting Arduino to the Web [article] Functions with Arduino [article]

Synchronization – An Approach to Delivering Successful Machine Learning Projects

Packt
06 Apr 2017
15 min read
“In the midst of chaos, there is also opportunity”                                                                                                                - Sun Tzu In this article, by Cory Lesmeister, the author of the book Mastering Machine Learning with R - Second Edition, Cory provides insights on ensuring the success and value of your machine learning endeavors. (For more resources related to this topic, see here.) Framing the problem Raise your hand if any of the following has happened or is currently happening to you: You’ve been part of a project team that failed to deliver anything of business value You attend numerous meetings, but they don’t seem productive; maybe they are even complete time wasters Different teams are not sharing information with each other; thus, you are struggling to understand what everyone else is doing, and they have no idea what you are doing or why you are doing it An unknown stakeholder, feeling threatened by your project, comes from out of nowhere and disparages you and/or your work The Executive Committee congratulates your team on their great effort, but decides not to implement it, or even worse, tells you to go back and do it all over again, only this time solve the real problem OK, you can put your hand down now. If you didn’t raise your hand, please send me your contact information because you are about as rare as a unicorn. All organizations, regardless of their size, struggle with integrating different functions, current operations, and other projects. In short, the real-world is filled with chaos. It doesn’t matter how many advanced degrees people have, how experienced they are, how much money is thrown at the problem, what technology is used, how brilliant and powerful the machine learning algorithm is, problems such as those listed above will happen. The bottom line is that implementing machine learning projects in the business world is complicated and prone to failure. However, out of this chaos you have the opportunity to influence your organization by integrating disparate people and teams, fostering a collaborative environment that can adapt to unforeseen changes. But, be warned, this is not easy. If it was easy everyone would be doing it. However, it works and, it works well. By it, I’m talking about the methodology I developed about a dozen years ago, a method I refer to as the “Synchronization Process”. If we ask ourselves, “what are the challenges to implementation”, it seems to me that the following blog post, clearly and succinctly sums it up: https://www.capgemini.com/blog/capping-it-off/2012/04/four-key-challenges-for-business-analytics It enumerates four challenges: Strategic alignment Agility Commitment Information maturity This blog addresses business analytics, but it can be extended to machine learning projects. One could even say machine learning is becoming the analytics tool of choice in many organizations. As such, I will make the case below that the Synchronization Process can effectively deal with the first three challenges. Not only that, the process can provide additional benefits. By overcoming the challenges, you can deliver an effective project, by delivering an effective project you can increase actionable insights and by increasing actionable insights, you will improve decision-making, and that is where the real business value resides. 
Defining the process

“In preparing for battle, I have always found that plans are useless, but planning is indispensable.” - Dwight D. Eisenhower

I adopted the term synchronization from the US Army’s operations manual, FM 3-0, where it is described as a battlefield tenet and force multiplier. The manual defines synchronization as, “…arranging activities in time, space and purpose to mass maximum relative combat power at a decisive place and time”. If we overlay this military definition onto the context of a competitive marketplace, we come up with a definition I find more relevant. For our purpose, synchronization is defined as, “arranging business functions and/or tasks in time and purpose to produce the proper amount of focus on a critical event or events”. These definitions put synchronization in the context of an “endstate” based on a plan and a vision. However, it is in the process of seeking to achieve that endstate that the true benefits come to fruition. So, we can look at synchronization as not only an endstate, but also as a process. The military’s solution to synchronizing operations before implementing a plan is the wargame. Like the military, businesses and corporations have utilized wargaming to facilitate decision-making and create integration of different business functions. Following the synchronization process techniques explained below, you can take the concept of business wargaming to a new level. I will discuss and provide specific ideas, steps, and deliverables that you can implement immediately. Before we begin that discussion, I want to cover the benefits that the process will deliver.

Exploring the benefits of the process

When I created this methodology about a dozen years ago, I was part of a market research team struggling to commit our limited resources to numerous projects, all of which were someone’s top priority, in a highly uncertain environment. Or, as I like to refer to it, just another day at the office. I knew from my military experience that I had the tools and techniques to successfully tackle these challenges. It worked then and has been working for me ever since. I have found that it delivers the following benefits to an organization:
- Integration of business partners and stakeholders
- Timely and accurate measurement of performance and effectiveness
- Anticipation of and planning for possible events
- Adaptation to unforeseen threats
- Exploitation of unforeseen opportunities
- Improvement in teamwork
- Fostering a collaborative environment
- Improving focus and prioritization

In market research, and I believe it applies to all analytical endeavors, including machine learning, we talked about focusing on three specific questions about what to measure:
- What are we measuring?
- When do we measure it?
- How will we measure it?

We found that successfully answering those questions facilitated improved decision-making by informing leadership what to STOP doing, what to START doing and what to KEEP doing. I have found myself in many meetings going nowhere when I would ask a question like, “what are you looking to stop doing?” Ask leadership what they want to stop, start, or continue to do and you will get to the core of the problem. Then, your job will be to configure the business decision as the measurement/analytical problem. The Synchronization Process can bring this all together in a coherent fashion.
I’ve been asked often about what triggers in my mind that a project requires going through the Synchronization Process. Here are some of the questions you should consider, and if you answer “yes” to any of them, it may be a good idea to implement the process: Are resources constrained to the point that several projects will suffer poor results or not be done at all? Do you face multiple, conflicting priorities? Could the external environment change and dramatically influence project(s)? Are numerous stakeholders involved or influenced by a project’s result? Is the project complex and facing a high-level of uncertainty? Does the project involve new technology? Does the project face the actual or potential for organizational change? You may be thinking, “Hey, we have a project manager for all this?” OK, how is that working out? Let me be crystal clear here, this is not just project management! This is about improving decision-making! A Gannt Chart or task management software won’t do that. You must be the agent of change. With that, let’s turn our attention to the process itself. Exploring the process Any team can take the methods elaborated on below and incorporate them to their specific situation with their specific business partners.  If executed properly, one can expect the initial investment in time and effort to provide substantial payoff within weeks of initiating the process. There are just four steps to incorporate with each having several tasks for you and your team members to complete. The four steps are as follows: Project kick-off Project analysis Synchronization exercise Project execution Let’s cover each of these in detail. I will provide what I like to refer to as a “Quad Chart” for each process step along with appropriate commentary. Project kick-off I recommend you lead the kick-off meeting to ensure all team members understand and agree to the upcoming process steps. You should place emphasis on the importance of completing the pre-work and understanding of key definitions, particularly around facts and critical assumptions. The operational definitions are as follows: Facts: Data or information that will likely have an impact on the project Critical assumptions: Valid and necessary suppositions in the absence of facts that, if proven false, would adversely impact planning or execution It is an excellent practice to link facts and assumptions. Here is an example of how that would work: It is a FACT that the Information Technology is beta-testing cloud-based solutions. We must ASSUME for planning purposes, that we can operate machine learning solutions on the cloud by the fourth quarter of this year. See, we’ve linked a fact and an assumption together and if this cloud-based solution is not available, let’s say it would negatively impact our ability to scale-up our machine learning solutions. If so, then you may want to have a contingency plan of some sort already thought through and prepared for implementation. Don’t worry if you haven’t thought of all possible assumptions or if you end up with a list of dozens. The synchronization exercise will help in identifying and prioritizing them. In my experience, identifying and tracking 10 critical assumptions at the project level is adequate. The following is the quad chart for this process step: Figure 1: Project kick-off quad chart Notice what is likely a new term, “Synchronization Matrix”. That is merely the tool used by the team to capture notes during the Synchronization Exercise. 
What you are doing is capturing time and events on the X-axis, and functions and terms on the Y-axis. Of course, this is highly customizable based on the specific circumstances, and we will discuss more about it in process step number 3, that is, the Synchronization exercise, but here is an abbreviated example:

Figure 2: Synchronization matrix example

You can see in the matrix that I’ve included a row to capture critical assumptions. I can’t overstate how important it is to articulate, capture, and track them. In fact, this is probably my favorite quote on the subject: “… flawed assumptions are the most common cause of flawed execution.” Harvard Business Review, The High Performance Organization, July-August 2005. OK, I think I’ve made my point, so let’s look at the next process step.

Project analysis

At this step, the participants prepare by analyzing the situation, collecting data, and making judgements as necessary. The goal is for each participant of the Synchronization Exercise to come to that meeting fully prepared. A good technique is to provide project participants with a worksheet template for them to use to complete the pre-work. A team can complete this step either individually, collectively or both. Here is the quad chart for the process step:

Figure 3: Project analysis quad chart

Let me expand on a couple of points. The idea of a team member creating information requirements is quite important. These are often tied back to your critical assumptions. Take the example above of the assumption around fielding a cloud-based capability. Can you think of some information requirements that you might have as a potential end-user? Furthermore, can you prioritize them? OK, having done that, can you think of a plan to acquire that information and confirm or deny the underlying critical assumption? Notice also how that ties together with decision points you or others may have to make and how they may trigger contingency plans. This may sound rather basic and simplistic, but unless people are asked to think like this, articulate their requirements, and share the information, don’t expect anything to change anytime soon. It will be business as usual, and let me ask again, “how is that working out for you?”. There is opportunity in all that chaos, so embrace it, and in the next step you will see the magic happen.

Synchronization exercise

The focus and discipline of the participants determine the success of this process step. This is a wargame-type exercise where team members portray their plan over time. Now, everyone gets to see how their plan relates to or even inhibits someone else’s plan and vice versa. I’ve done this step several different ways, including building the matrix on software, but the method that has consistently produced the best results is to build the matrix on large paper and put it along a conference room wall. Then, have the participants, one at a time, use post-it notes to portray their key events. For example, the marketing manager gets up to the wall and posts “Marketing Campaign One” in the first time phase, “Marketing Campaign Two” in the final time phase, along with “Propensity Models” in the information requirements block. Iterating by participant and by time/event leads to coordination and cooperation like nothing you’ve ever seen. Another method to facilitate the success of the meeting is to have a disinterested and objective third party “referee” the meeting. This will help to ensure that any issues are captured or resolved and the process products updated accordingly.
After the exercise, team members can incorporate the findings into their individual plans. This is an example quad chart for the process step:

Figure 4: Synchronization exercise quad chart

I really like the idea of execution and performance metrics. Here is how to think about them:
- Execution metrics: are we doing things right?
- Performance metrics: are we doing the right things?

As you see, execution is about plan implementation, while performance metrics are about determining if the plan is making a difference (yes, I know that can be quite a dangerous thing to measure). Finally, we come to the fourth step, where everything comes together during the execution of the project plan.

Project execution

This is a continual step in the process where a team can utilize the synchronization products to maintain situational understanding of the team itself, key stakeholders, and the competitive environment. It can determine how plans are progressing and quickly react to opportunities and threats as necessary. I recommend you update and communicate changes to the documentation on a regular basis. When I was in pharmaceutical forecasting, it was imperative that I end the business week by updating the matrices on SharePoint, which were available to all pertinent team members. The following is the quad chart for this process step:

Figure 5: Project execution quad chart

Keeping up with the documentation is a quick and simple process for the most part, and by doing so you will keep people aligned and cooperating. Be aware that, like everything else that is new in the world, initial exuberance and enthusiasm will start to wane after several weeks. That is fine as long as you keep the documentation alive and maintain systematic communication. You will soon find that behavior is changing without anyone even taking heed, which is probably the best way to actually change behavior. A couple of words of warning: don’t expect everyone to embrace the process wholeheartedly, which is to say that office politics may create a few obstacles. Often, an individual or even an entire business function will withhold information as “information is power”, and by sharing information they may feel they are losing power. Another issue may arise where some people feel it is needlessly complex or unnecessary. A solution to these problems is to scale back the number of core team members and utilize stakeholder analysis and a communication plan to bring the naysayers slowly into the fold. Change is never easy, but necessary nonetheless.

Summary

In this article, I’ve covered, at a high level, a successful and proven process to deliver machine learning projects that will drive business value. I developed it from my numerous years of planning and evaluating military operations, including a one-year stint as a strategic advisor to the Iraqi Oil Police, adapting it to the needs of any organization. Utilizing the Synchronization Process will help any team avoid the common pitfalls of projects and improve efficiency and decision-making. It will help you become an agent of change and create influence in an organization without positional power.

Resources for Article:

Further resources on this subject:
- Machine Learning with R [article]
- Machine Learning Using Spark MLlib [article]
- Welcome to Machine Learning Using the .NET Framework [article]

Using the Raspberry Pi Camera Module

Packt
06 Apr 2017
6 min read
This article by Shervin Emami, co-author of the book, Mastering OpenCV 3 - Second Edition, explains how to use the Raspberry Pi Camera Module for your Cartoonifier and Skin changer applications. While using a USB webcam on Raspberry Pi has the convenience of supporting identical behavior & code on desktop as on embedded device, you might consider using one of the official Raspberry Pi Camera Modules (referred to as the RPi Cams). They have some advantages and disadvantages over USB webcams: (For more resources related to this topic, see here.)

- RPi Cams use the special MIPI CSI camera format, designed for smartphone cameras to use less power, smaller physical size, higher bandwidth, higher resolutions, higher framerates, and reduced latency, compared to USB. Most USB 2.0 webcams can only deliver 640x480 or 1280x720 30 FPS video, since USB 2.0 is too slow for anything higher (except for some expensive USB webcams that perform onboard video compression) and USB 3.0 is still too expensive, whereas smartphone cameras (including the RPi Cams) can often deliver 1920x1080 30 FPS or even Ultra HD / 4K resolutions. The RPi Cam v1 can in fact deliver up to 2592x1944 15 FPS or 1920x1080 30 FPS video even on a $5 Raspberry Pi Zero, thanks to the use of MIPI CSI for the camera and compatible video processing ISP & GPU hardware inside the Raspberry Pi. The RPi Cam also supports 640x480 in 90 FPS mode (such as for slow-motion capture), and this is quite useful for real-time computer vision so you can see very small movements in each frame, rather than large movements that are harder to analyze.
- However, the RPi Cam is a plain circuit board that is highly sensitive to electrical interference, static electricity or physical damage (simply touching the small orange flat cable with your finger can cause video interference or even permanently damage your camera!). The big flat white cable is far less sensitive, but it is still very sensitive to electrical noise or physical damage.
- The RPi Cam comes with a very short 15cm cable. It's possible to buy third-party cables on eBay with lengths between 5cm and 1m, but cables 50cm or longer are less reliable, whereas USB webcams can use 2m to 5m cables and can be plugged into USB hubs or active extension cables for longer distances.
- There are currently several different RPi Cam models, notably the NoIR version that doesn't have an internal Infrared filter; therefore a NoIR camera can easily see in the dark (if you have an invisible Infrared light source), or see Infrared lasers or signals far more clearly than regular cameras that include an Infrared filter inside them. There are also two different versions of the RPi Cam: RPi Cam v1.3 and RPi Cam v2.1, where the v2.1 uses a wider angle lens with a Sony 8 Megapixel sensor instead of a 5 Megapixel OmniVision sensor, has better support for motion in low lighting conditions, and adds support for 3280x2464 15 FPS video and potentially up to 120 FPS video at 720p.
- However, USB webcams come in thousands of different shapes & versions, making it easy to find specialized webcams such as waterproof or industrial-grade webcams, rather than requiring you to create your own custom housing for an RPi Cam.
- IP cameras are also another option for a camera interface that can allow 1080p or higher resolution videos with Raspberry Pi, and IP cameras support not just very long cables, but can potentially even work anywhere in the world using the Internet.
But IP cameras aren't quite as easy to interface with OpenCV as USB webcams or the RPi Cam. In the past, RPi Cams and the official drivers weren't directly compatible with OpenCV; you often used custom drivers and modified your code in order to grab frames from RPi Cams, but it's now possible to access an RPi Cam in OpenCV the exact same way as a USB webcam! Thanks to recent improvements in the v4l2 drivers, once you load the v4l2 driver the RPi Cam will appear as file /dev/video0 or /dev/video1 like a regular USB webcam. So traditional OpenCV webcam code such as cv::VideoCapture(0) will be able to use it just like a webcam.

Installing the Raspberry Pi Camera Module driver

First let's temporarily load the v4l2 driver for the RPi Cam to make sure our camera is plugged in correctly:

sudo modprobe bcm2835-v4l2

If the command failed (that is, it printed an error message to the console, it froze, or it returned a number other than 0), then perhaps your camera is not plugged in correctly. Shut down, then unplug power from your RPi and try attaching the flat white cable again, looking at photos on the Web to make sure it's plugged in the correct way around. If it is the correct way around, it's possible the cable wasn't fully inserted before you closed the locking tab on the RPi. Also check if you forgot to click Enable Camera when configuring your Raspberry Pi earlier, using the sudo raspi-config command. If the command worked (that is, the command returned 0 and no error was printed to the console), then we can make sure the v4l2 driver for the RPi Cam is always loaded on bootup, by adding it to the bottom of the /etc/modules file:

sudo nano /etc/modules
# Load the Raspberry Pi Camera Module v4l2 driver on bootup:
bcm2835-v4l2

After you save the file and reboot your RPi, you should be able to run ls /dev/video* to see a list of cameras available on your RPi. If the RPi Cam is the only camera plugged into your board, you should see it as the default camera (/dev/video0), or if you also have a USB webcam plugged in then it will be either /dev/video0 or /dev/video1. Let's test the RPi Cam using the starter_video sample program:

cd ~/opencv-3.*/samples/cpp
DISPLAY=:0 ./starter_video 0

If it's showing the wrong camera, try DISPLAY=:0 ./starter_video 1. Now that we know the RPi Cam is working in OpenCV, let's try Cartoonifier!

cd ~/Cartoonifier
DISPLAY=:0 ./Cartoonifier 0

(or DISPLAY=:0 ./Cartoonifier 1 for the other camera).

Resources for Article:

Further resources on this subject:
- Video Surveillance, Background Modeling [article]
- Performance by Design [article]
- Getting started with Android Development [article]
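To round off this section, here is a small illustrative sketch that is not part of the original article (whose samples are C++): it grabs a few frames from the RPi Cam in Python through OpenCV's cv2 module, assuming the opencv-python package is installed and the bcm2835-v4l2 driver described above has been loaded.

# A minimal sketch: read frames from the camera exposed at /dev/video0.
# Assumes opencv-python is installed and the v4l2 driver is loaded.
import cv2

capture = cv2.VideoCapture(0)   # use 1 instead if a USB webcam is also attached
if not capture.isOpened():
    raise RuntimeError("Could not open the camera; is the v4l2 driver loaded?")

for _ in range(30):             # roughly one second of video at 30 FPS
    ok, frame = capture.read()  # frame is a NumPy array in BGR order
    if not ok:
        break
    print("Captured frame with shape:", frame.shape)

capture.release()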

Convolutional Neural Networks with Reinforcement Learning

Packt
06 Apr 2017
9 min read
In this article by Antonio Gulli and Sujit Pal, the authors of the book Deep Learning with Keras, we will learn about reinforcement learning, or more specifically deep reinforcement learning, that is, the application of deep neural networks to reinforcement learning. We will also see how convolutional neural networks leverage spatial information and are therefore very well suited for classifying images. (For more resources related to this topic, see here.)

Deep convolutional neural network

A Deep Convolutional Neural Network (DCNN) consists of many neural network layers. Two different types of layers, convolutional and pooling, are typically alternated. The depth of each filter increases from left to right in the network. The last stage is typically made of one or more fully connected layers as shown here. There are three key intuitions behind ConvNets:
- Local receptive fields
- Shared weights
- Pooling

Let's review them together.

Local receptive fields

If we want to preserve the spatial information, then it is convenient to represent each image with a matrix of pixels. Then, a simple way to encode the local structure is to connect a submatrix of adjacent input neurons into one single hidden neuron belonging to the next layer. That single hidden neuron represents one local receptive field. Note that this operation is named convolution, and it gives the name to this type of network. Of course, we can encode more information by having overlapping submatrices. For instance, let's suppose that the size of each single submatrix is 5 x 5 and that those submatrices are used with MNIST images of 28 x 28 pixels; then we will be able to generate 24 x 24 local receptive field neurons in the next hidden layer. In fact, it is possible to place the submatrix in only 24 positions along each dimension before touching the borders of the images. In Keras, the number of pixels by which the submatrix slides at each step is called the stride length, and this is a hyper-parameter which can be fine-tuned during the construction of our nets. Let's define the feature map from one layer to another layer. Of course, we can have multiple feature maps which learn independently from each hidden layer. For instance, we can start with 28 x 28 input neurons for processing MNIST images, and then derive k feature maps of size 24 x 24 neurons each (again with 5 x 5 submatrices) in the next hidden layer.

Shared weights and bias

Let's suppose that we want to move away from the pixel representation in a row by gaining the ability to detect the same feature independently of the location where it is placed in the input image. A simple intuition is to use the same set of weights and bias for all the neurons in the hidden layers. In this way, each layer will learn a set of position-independent latent features derived from the image. Assuming that the input image has shape (256, 256) on 3 channels with tf (TensorFlow) ordering, this is represented as (256, 256, 3). Note that with th (Theano) mode the channels dimension (the depth) is at index 1; in tf mode it is at index 3. In Keras, if we want to add a convolutional layer with output dimensionality 32 and extension of each filter 3 x 3, we will write:

model = Sequential()
model.add(Convolution2D(32, 3, 3, input_shape=(256, 256, 3)))

This means that we are applying a 3 x 3 convolution on a 256 x 256 image with 3 input channels (or input filters), resulting in 32 output channels (or output filters).
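As a quick, hedged check of the receptive field arithmetic above (not part of the original text), the following sketch uses the same Keras 1-era syntax as the snippet in this section and assumes tf dimension ordering; model.summary() should report 24 x 24 feature maps for a 28 x 28 single-channel input and 5 x 5 kernels.

# A small sketch, assuming the Keras 1 API used in this article and tf ordering.
from keras.models import Sequential
from keras.layers import Convolution2D

model = Sequential()
# 20 feature maps, 5 x 5 submatrices, sliding with the default stride of 1
model.add(Convolution2D(20, 5, 5, input_shape=(28, 28, 1)))
model.summary()   # expected output shape per feature map: (None, 24, 24, 20)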
An example of convolution is provided in the following diagram: Pooling layers Let's suppose that we want to summarize the output of a feature map. Again, we can use the spatial contiguity of the output produced from a single feature map and aggregate the values of a submatrix into one single output value synthetically describing the meaning associated with that physical region. Max pooling One easy and common choice is the so-called max pooling operator which simply outputs the maximum activation as observed in the region. In Keras, if we want to define a max pooling layer of size 2 x 2 we will write: model.add(MaxPooling2D(pool_size = (2, 2))) An example of max pooling operation is given in the following diagram: Average pooling Another choice is the average pooling which simply aggregates a region into the average values of the activations observed in that region. Keras implements a large number of pooling layers and a complete list is available online. In short, all the pooling operations are nothing more than a summary operation on a given region. Reinforcement learning Our objective is to build a neural network to play the game of catch. Each game starts with a ball being dropped from a random position from the top of the screen. The objective is to move a paddle at the bottom of the screen using the left and right arrow keys to catch the ball by the time it reaches the bottom. As games go, this is quite simple. At any point in time, the state of this game is given by the (x, y) coordinates of the ball and paddle. Most arcade games tend to have many more moving parts, so a general solution is to provide the entire current game screen image as the state. The following diagram shows four consecutive screenshots of our catch game: Astute readers might note that our problem could be modeled as a classification problem, where the input to the network are the game screen images and the output is one of three actions - move left, stay, or move right. However, this would require us to provide the network with training examples, possibly from recordings of games played by experts. An alternative and simpler approach might be to build a network and have it play the game repeatedly, giving it feedback based on whether it succeeds in catching the ball or not. This approach is also more intuitive and is closer to the way humans and animals learn. The most common way to represent such a problem is through a Markov Decision Process (MDP). Our game is the environment within which the agent is trying to learn. The state of the environment at time step t is given by st (and contains the location of the ball and paddle). The agent can perform certain actions at (such as moving the paddle left or right). These actions can sometimes result in a reward rt, which can be positive or negative (such as an increase or decrease in the score). Actions change the environment and can lead to a new state st+1, where the agent can perform another action at+1, and so on. The set of states, actions and rewards, together with the rules for transitioning from one state to the other, make up a Markov decision process. A single game is one episode of this process, and is represented by a finite sequence of states, actions and rewards: Since this is a Markov decision process, the probability of state st+1 depends only on current state st and action at. Maximizing future rewards As an agent, our objective is to maximize the total reward from each game. 
The total reward can be represented as follows:

R = r1 + r2 + ... + rn

In order to maximize the total reward, the agent should try to maximize the total reward from any time point t in the game. The total reward at time step t is given by Rt and is represented as:

Rt = rt + rt+1 + ... + rn

However, it is harder to predict the value of the rewards the further we go into the future. In order to take this into consideration, our agent should try to maximize the total discounted future reward at time t instead. This is done by discounting the reward at each future time step by a factor γ over the previous time step. If γ is 0, then our network does not consider future rewards at all, and if γ is 1, then our network is completely deterministic. A good value for γ is around 0.9. Factoring the equation allows us to express the total discounted future reward at a given time step recursively as the sum of the current reward and the total discounted future reward at the next time step:

Rt = rt + γrt+1 + γ²rt+2 + ... = rt + γRt+1

Q-learning

Deep reinforcement learning utilizes a model-free reinforcement learning technique called Q-learning. Q-learning can be used to find an optimal action for any given state in a finite Markov decision process. Q-learning tries to maximize the value of the Q-function, which represents the maximum discounted future reward when we perform action a in state s:

Q(st, at) = max Rt+1

Once we know the Q-function, the optimal action a at a state s is the one with the highest Q-value. We can then define a policy π(s) that gives us the optimal action at any state:

π(s) = argmaxa Q(s, a)

We can define the Q-function for a transition point (st, at, rt, st+1) in terms of the Q-function at the next point (st+1, at+1, rt+1, st+2), similar to how we did with the total discounted future reward:

Q(st, at) = rt + γ maxat+1 Q(st+1, at+1)

This equation is known as the Bellman equation. The Q-function can be approximated using the Bellman equation. You can think of the Q-function as a lookup table (called a Q-table) where the states (denoted by s) are rows and actions (denoted by a) are columns, and the elements (denoted by Q(s, a)) are the rewards that you get if you are in the state given by the row and take the action given by the column. The best action to take at any state is the one with the highest reward. We start by randomly initializing the Q-table, then carry out random actions and observe the rewards to update the Q-table iteratively according to the following algorithm:

initialize Q-table Q
observe initial state s
repeat
  select and carry out action a
  observe reward r and move to new state s'
  Q(s, a) = Q(s, a) + α(r + γ maxa' Q(s', a') - Q(s, a))
  s = s'
until game over

You will realize that the algorithm is basically doing stochastic gradient descent on the Bellman equation, backpropagating the reward through the state space (or episode) and averaging over many trials (or epochs). Here α is the learning rate that determines how much of the difference between the previous Q-value and the discounted new maximum Q-value should be incorporated.

Summary

We have seen the application of deep neural networks to reinforcement learning. We have also seen convolutional neural networks and how they are well suited for classifying images.

Resources for Article:

Further resources on this subject:
- Deep learning and regression analysis [article]
- Training neural networks efficiently using Keras [article]
- Implementing Artificial Neural Networks with TensorFlow [article]
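To make the Q-table update described above concrete, here is a minimal, self-contained Python sketch that is not from the book; the toy environment, the state and action counts, and the hyper-parameter values are made up purely for illustration.

# A minimal tabular Q-learning sketch on a made-up toy environment.
import numpy as np

n_states, n_actions = 10, 3          # toy sizes, purely illustrative
alpha, gamma, epsilon = 0.1, 0.9, 0.1
Q = np.zeros((n_states, n_actions))  # the Q-table, initialized to zeros

def step(state, action):
    """Stand-in for the game environment: returns (reward, next_state, done)."""
    next_state = (state + 1) % n_states
    reward = 1.0 if action == 1 else 0.0
    return reward, next_state, next_state == 0

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if np.random.rand() < epsilon:
            action = np.random.randint(n_actions)
        else:
            action = int(np.argmax(Q[state]))
        reward, next_state, done = step(state, action)
        # Q(s, a) = Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
        target = reward + (0.0 if done else gamma * np.max(Q[next_state]))
        Q[state, action] += alpha * (target - Q[state, action])
        state = next_state

print(Q)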

Learning Cassandra

Packt
06 Apr 2017
17 min read
In this article by Sandeep Yarabarla, the author of the book Learning Apache Cassandra - Second Edition, we turn to Cassandra. You will likely have built products using relational databases like MySQL and PostgreSQL, and perhaps experimented with NoSQL databases including a document store like MongoDB or a key-value store like Redis. While each of these tools has its strengths, you will now consider whether a distributed database like Cassandra might be the best choice for the task at hand. In this article, we'll begin with the need for NoSQL databases to satisfy the conundrum of ever-growing data. We will see why NoSQL databases are becoming the de facto choice for big data and real-time web applications. We will also talk about the major reasons to choose Cassandra from among the many database options available to you. Having established that Cassandra is a great choice, we'll go through the nuts and bolts of getting a local Cassandra installation up and running. (For more resources related to this topic, see here.)

What is Big Data

Big Data is a relatively new term which has been gathering steam over the past few years. Big Data is a term used for data sets that are too large to be stored in a traditional database system or processed by traditional data processing pipelines. This data could be structured, semi-structured or unstructured data. The data sets that belong to this category usually scale to terabytes or petabytes of data. Big Data usually involves one or more of the following:
- Velocity: Data moves at an unprecedented speed and must be dealt with in a timely manner.
- Volume: Organizations collect data from a variety of sources, including business transactions, social media and information from sensor or machine-to-machine data. This could involve terabytes to petabytes of data. In the past, storing it would've been a problem, but new technologies have eased the burden.
- Variety: Data comes in all sorts of formats, ranging from structured data to be stored in traditional databases to unstructured data (blobs) such as images, audio files and text files.

These are known as the 3 Vs of Big Data. In addition to these, we tend to associate another term with Big Data. Complexity: Today's data comes from multiple sources, which makes it difficult to link, match, cleanse and transform data across systems. However, it's necessary to connect and correlate relationships, hierarchies and multiple data linkages, or your data can quickly spiral out of control. It must be able to traverse multiple data centers, cloud and geographical zones.

Challenges of modern applications

Before we delve into the shortcomings of relational systems in handling Big Data, let's take a look at some of the challenges faced by modern web facing and Big Data applications. Later, this will give an insight into how NoSQL data stores, and Cassandra in particular, help solve these issues. One of the most important challenges faced by a web facing application is that it should be able to handle a large number of concurrent users. Think of a search engine like Google, which handles millions of concurrent users at any given point of time, or a large online retailer. The response from these applications should be swift even as the number of users keeps on growing. Modern applications need to be able to handle large amounts of data which can scale to several petabytes and beyond.
Consider a large social network with a few hundred million users:
- Think of the amount of data generated in tracking and managing those users
- Think of how this data can be used for analytics

Business critical applications should continue running without much impact even when there is a system failure or multiple system failures (server failure, network failure, and so on). The applications should be able to handle failures gracefully without any data loss or interruptions. These applications should be able to scale across multiple data centers and geographical regions to support customers from different regions around the world with minimum delay. Modern applications should be implementing fully distributed architectures and should be capable of scaling out horizontally to support any data size or any number of concurrent users.

Why not relational

Relational database systems (RDBMS) have been the primary data store for enterprise applications for 20 years. Lately, NoSQL databases have been picking up a lot of steam and businesses are slowly seeing a shift towards non-relational databases. There are a few reasons why relational databases don't seem like a good fit for modern web applications. Relational databases are not designed for clustered solutions. There are some solutions that shard data across servers, but these are fragile, complex and generally don't work well.

Sharding solutions implemented by RDBMS

MySQL's product MySQL Cluster provides clustering support which adds many capabilities of non-relational systems. It is actually a NoSQL solution that integrates with the MySQL relational database. It partitions the data onto multiple nodes and the data can be accessed via different APIs. Oracle provides a clustering solution, Oracle RAC, which involves multiple nodes running an Oracle process accessing the same database files. This creates a single point of failure as well as resource limitations in accessing the database itself.

Relational databases are also not a good fit for current hardware and architectures. They are usually scaled up by using larger machines with more powerful hardware, and maybe clustering and replication among a small number of nodes. Their core architecture is not a good fit for commodity hardware and thus doesn't work with scale-out architectures.

Scale out vs Scale up architecture

Scaling out means adding more nodes to a system, such as adding more servers to a distributed database or filesystem. This is also known as horizontal scaling. Scaling up means adding more resources to a single node within the system, such as adding more CPU, memory or disks to a server. This is also known as vertical scaling.

How to handle Big Data

Now that we are convinced the relational model is not a good fit for Big Data, let's try to figure out ways to handle Big Data. These are the solutions that paved the way for various NoSQL databases.

Clustering: The data should be spread across different nodes in a cluster. The data should be replicated across multiple nodes in order to sustain node failures. This helps spread the data across the cluster, and different nodes contain different subsets of data. This improves performance and provides fault tolerance.

A node is an instance of database software running on a server. Multiple instances of the same database could be running on the same server.

Flexible schema: Schemas should be flexible, unlike the relational model, and should evolve with data.
Relax consistency: We should embrace the concept of eventual consistency, which means data will eventually be propagated to all the nodes in the cluster (in case of replication). Eventual consistency allows data replication across nodes with minimum overhead. This allows for fast writes without the need for distributed locking.

De-normalization of data: De-normalize data to optimize queries. This has to be done at the cost of writing and maintaining multiple copies of the same data.

What is Cassandra and why Cassandra

Cassandra is a fully distributed, masterless database, offering superior scalability and fault tolerance to traditional single-master databases. Compared with other popular distributed databases like Riak, HBase, and Voldemort, Cassandra offers a uniquely robust and expressive interface for modeling and querying data. What follows is an overview of several desirable database capabilities, with accompanying discussions of what Cassandra has to offer in each category.

Horizontal scalability

Horizontal scalability refers to the ability to expand the storage and processing capacity of a database by adding more servers to a database cluster. A traditional single-master database's storage capacity is limited by the capacity of the server that hosts the master instance. If the data set outgrows this capacity, and a more powerful server isn't available, the data set must be sharded among multiple independent database instances that know nothing of each other. Your application bears responsibility for knowing to which instance a given piece of data belongs. Cassandra, on the other hand, is deployed as a cluster of instances that are all aware of each other. From the client application's standpoint, the cluster is a single entity; the application need not know, nor care, which machine a piece of data belongs to. Instead, data can be read from or written to any instance in the cluster, referred to as a node; this node will forward the request to the instance where the data actually belongs. The result is that Cassandra deployments have an almost limitless capacity to store and process data. When additional capacity is required, more machines can simply be added to the cluster. When new machines join the cluster, Cassandra takes care of rebalancing the existing data so that each node in the expanded cluster has a roughly equal share. Also, the performance of a Cassandra cluster is directly proportional to the number of nodes within the cluster. As you keep on adding instances, the read and write throughput will keep increasing proportionally.

Cassandra is one of several popular distributed databases inspired by the Dynamo architecture, originally published in a paper by Amazon. Other widely used implementations of Dynamo include Riak and Voldemort. You can read the original paper at http://s3.amazonaws.com/AllThingsDistributed/sosp/amazon-dynamo-sosp2007.pdf

High availability

The simplest database deployments are run as a single instance on a single server. This sort of configuration is highly vulnerable to interruption: if the server is affected by a hardware failure or network connection outage, the application's ability to read and write data is completely lost until the server is restored. If the failure is catastrophic, the data on that server might be lost completely. Master-slave architectures improve this picture a bit. The master instance receives all write operations, and then these operations are replicated to follower instances.
The application can read data from the master or any of the follower instances, so a single host becoming unavailable will not prevent the application from continuing to read data. A failure of the master, however, will still prevent the application from performing any write operations, so while this configuration provides high read availability, it doesn't completely provide high availability. Cassandra, on the other hand, has no single point of failure for reading or writing data. Each piece of data is replicated to multiple nodes, but none of these nodes holds the authoritative master copy. All the nodes in a Cassandra cluster are peers without a master node. If a machine becomes unavailable, Cassandra will continue writing data to the other nodes that share data with that machine, and will queue the operations and update the failed node when it rejoins the cluster. This means in a typical configuration, multiple nodes must fail simultaneously for there to be any application-visible interruption in Cassandra's availability. How many copies? When you create a keyspace - Cassandra's version of a database, you specify how many copies of each piece of data should be stored; this is called the replication factor. A replication factor of 3 is a common choice for most use cases. Write optimization Traditional relational and document databases are optimized for read performance. Writing data to a relational database will typically involve making in-place updates to complicated data structures on disk, in order to maintain a data structure that can be read efficiently and flexibly. Updating these data structures is a very expensive operation from a standpoint of disk I/O, which is often the limiting factor for database performance. Since writes are more expensive than reads, you'll typically avoid any unnecessary updates to a relational database, even at the expense of extra read operations. Cassandra, on the other hand, is highly optimized for write throughput, and in fact never modifies data on disk; it only appends to existing files or creates new ones. This is much easier on disk I/O and means that Cassandra can provide astonishingly high write throughput. Since both writing data to Cassandra, and storing data in Cassandra, are inexpensive, denormalization carries little cost and is a good way to ensure that data can be efficiently read in various access scenarios. Because Cassandra is optimized for write volume; you shouldn't shy away from writing data to the database. In fact, it's most efficient to write without reading whenever possible, even if doing so might result in redundant updates. Just because Cassandra is optimized for writes doesn't make it bad at reads; in fact, a well-designed Cassandra database can handle very heavy read loads with no problem. Structured records The first three database features we looked at are commonly found in distributed data stores. However, databases like Riak and Voldemort are purely key-value stores; these databases have no knowledge of the internal structure of a record that's stored at a particular key. This means useful functions like updating only part of a record, reading only certain fields from a record, or retrieving records that contain a particular value in a given field are not possible. 
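As a hedged illustration of the keyspace and replication factor discussion above (the book itself works in the CQL shell), the following sketch uses the DataStax Python driver; the contact point and keyspace name are made-up examples rather than anything prescribed by the book.

# A hedged sketch: create a keyspace with a replication factor of 3 using the
# DataStax Python driver (pip install cassandra-driver). The contact point and
# keyspace name are illustrative assumptions.
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])   # assumes a Cassandra node is reachable locally
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS my_app
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
cluster.shutdown()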
Relational databases like PostgreSQL, document stores like MongoDB, and, to a limited extent, newer key-value stores like Redis do have a concept of the internal structure of their records, and most application developers are accustomed to taking advantage of the possibilities this allows. None of these databases, however, offer the advantages of a masterless distributed architecture. In Cassandra, records are structured much in the same way as they are in a relational database—using tables, rows, and columns. Thus, applications using Cassandra can enjoy all the benefits of masterless distributed storage while also getting all the advanced data modeling and access features associated with structured records. Secondary indexes A secondary index, commonly referred to as an index in the context of a relational database, is a structure allowing efficient lookup of records by some attribute other than their primary key. This is a widely useful capability: for instance, when developing a blog application, you would want to be able to easily retrieve all of the posts written by a particular author. Cassandra supports secondary indexes; while Cassandra's version is not as versatile as indexes in a typical relational database, it's a powerful feature in the right circumstances. Materialized views Data modelling principles in Cassandra compel us to denormalize data as much as possible. Prior to Cassandra 3.0, the only way to query on a non-primary key column was to create a secondary index and query on it. However, secondary indexes have a performance tradeoff if it contains high cardinality data. Often, high cardinality secondary indexes have to scan data on all the nodes and aggregate them to return the query results. This defeats the purpose of having a distributed system. To avoid secondary indexes and client-side denormalization, Cassandra introduced the feature of Materialized views which does server-side denormalization. You can create views for a base table and Cassandra ensures eventual consistency between the base and view. This lets us do very fast lookups on each view following the normal Cassandra read path. Materialized views maintain a correspondence of one CQL row each in the base and the view, so we need to ensure that each CQL row which is required for the views will be reflected in the base table's primary keys. Although, materialized view allows for fast lookups on non-primary key indexes; this comes at a performance hit to writes. Efficient result ordering It's quite common to want to retrieve a record set ordered by a particular field; for instance, a photo sharing service will want to retrieve the most recent photographs in descending order of creation. Since sorting data on the fly is a fundamentally expensive operation, databases must keep information about record ordering persisted on disk in order to efficiently return results in order. In a relational database, this is one of the jobs of a secondary index. In Cassandra, secondary indexes can't be used for result ordering, but tables can be structured such that rows are always kept sorted by a given column or columns, called clustering columns. Sorting by arbitrary columns at read time is not possible, but the capacity to efficiently order records in any way, and to retrieve ranges of records based on this ordering, is an unusually powerful capability for a distributed database. 
Immediate consistency

When we write a piece of data to a database, it is our hope that that data is immediately available to any other process that may wish to read it. From another point of view, when we read some data from a database, we would like to be guaranteed that the data we retrieve is the most recently updated version. This guarantee is called immediate consistency, and it's a property of most common single-master databases like MySQL and PostgreSQL. Distributed systems like Cassandra typically do not provide an immediate consistency guarantee. Instead, developers must be willing to accept eventual consistency, which means when data is updated, the system will reflect that update at some point in the future. Developers are willing to give up immediate consistency precisely because it is a direct tradeoff with high availability. In the case of Cassandra, that tradeoff is made explicit through tunable consistency. Each time you design a write or read path for data, you have the option of immediate consistency with less resilient availability, or eventual consistency with extremely resilient availability.

Discretely writable collections

While it's useful for records to be internally structured into discrete fields, a given property of a record isn't always a single value like a string or an integer. One simple way to handle fields that contain collections of values is to serialize them using a format like JSON, and then save the serialized collection into a text field. However, in order to update collections stored in this way, the serialized data must be read from the database, decoded, modified, and then written back to the database in its entirety. If two clients try to perform this kind of modification to the same record concurrently, one of the updates will be overwritten by the other. For this reason, many databases offer built-in collection structures that can be discretely updated: values can be added to, and removed from, collections without reading and rewriting the entire collection. Neither the client nor Cassandra itself needs to read the current state of the collection in order to update it, meaning collection updates are also blazingly efficient.

Relational joins

In real-world applications, different pieces of data relate to each other in a variety of ways. Relational databases allow us to perform queries that make these relationships explicit, for instance, to retrieve a set of events whose location is in the state of New York (this is assuming events and locations are different record types). Cassandra, however, is not a relational database, and does not support anything like joins. Instead, applications using Cassandra typically denormalize data and make clever use of clustering in order to perform the sorts of data access that would use a join in a relational database. For datasets that aren't already denormalized, applications can also perform client-side joins, which mimic the behavior of a relational database by performing multiple queries and joining the results at the application level. Client-side joins are less efficient than reading data that has been denormalized in advance, but offer more flexibility.
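Before the summary, here is one more hedged sketch, with hypothetical table and column names, that ties together clustering columns for result ordering and discretely writable collections, again through the DataStax Python driver; it assumes the keyspace from the earlier sketch exists.

# A hedged sketch: a table ordered by a clustering column, plus a discrete
# update to a set collection. Table and column names are illustrative only.
import uuid
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])
session = cluster.connect('my_app')   # keyspace created in the earlier sketch

# Rows for one user are kept sorted on disk by created_at, newest first.
session.execute("""
    CREATE TABLE IF NOT EXISTS user_photos (
        username   text,
        created_at timeuuid,
        caption    text,
        tags       set<text>,
        PRIMARY KEY (username, created_at)
    ) WITH CLUSTERING ORDER BY (created_at DESC)
""")

photo_id = uuid.uuid1()   # a timeuuid value for this photo
session.execute(
    "INSERT INTO user_photos (username, created_at, caption, tags) "
    "VALUES (%s, %s, %s, %s)",
    ('alice', photo_id, 'Golden hour', {'beach'})
)

# Add a tag discretely, without reading or rewriting the whole collection.
session.execute(
    "UPDATE user_photos SET tags = tags + {'sunset'} "
    "WHERE username = %s AND created_at = %s",
    ('alice', photo_id)
)
cluster.shutdown()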
You had your first taste of the Cassandra Query Language when you issued your first few commands through the CQL shell in order to create a keyspace, table, and insert and read data. You're now poised to begin working with Cassandra in earnest. Resources for Article: Further resources on this subject: Setting up a Kubernetes Cluster [article] Dealing with Legacy Code [article] Evolving the data model [article]

Hands on with Service Fabric

Packt
06 Apr 2017
12 min read
In this article by Rahul Rai and Namit Tanasseri, authors of the book Microservices with Azure, we will see that Service Fabric as a platform supports multiple programming models, each of which is best suited for specific scenarios. Each programming model offers different levels of integration with the underlying management framework. Better integration leads to more automation and lesser overheads. Picking the right programming model for your application or services is the key to efficiently utilizing the capabilities of Service Fabric as a hosting platform. Let's take a deeper look into these programming models. (For more resources related to this topic, see here.)

To start with, let's look at the least integrated hosting option: Guest Executables. Native Windows applications or application code using Node.js or Java can be hosted on Service Fabric as a guest executable. These executables can be packaged and pushed to a Service Fabric cluster like any other service. As the cluster manager has minimal knowledge about the executable, features like custom health monitoring, load reporting, state store and endpoint registration cannot be leveraged by the hosted application. However, from a deployment standpoint, a guest executable is treated like any other service. This means that for a guest executable, the Service Fabric cluster manager takes care of high availability, application lifecycle management, rolling updates, automatic failover, high density deployment and load balancing.

As an orchestration service, Service Fabric is responsible for deploying and activating an application or application services within a cluster. It is also capable of deploying services within a container image. This programming model is addressed as Guest Containers. The concept of containers is best explained as an implementation of operating system level virtualization. They are encapsulated deployable components running on isolated process boundaries sharing the same kernel. Deployed applications and their runtime dependencies are bundled within the container with an isolated view of all operating system constructs. This makes containers highly portable and secure. The Guest Container programming model is usually chosen when this level of isolation is required for the application. As containers don't have to boot an operating system, they have fast boot up times and are comparatively small in size.

A prime benefit of using Service Fabric as a platform is the fact that it supports heterogeneous operating environments. Service Fabric supports two types of containers to be deployed as guest containers: Docker containers on Linux and Windows Server containers. Container images for Docker containers are stored in Docker Hub, and Docker APIs are used to create and manage the containers deployed on the Linux kernel. Service Fabric supports two different types of containers in Windows Server 2016 with different levels of isolation: Windows Server containers and Windows Hyper-V containers. Windows Server containers are similar to Docker containers in terms of the isolation they provide. Windows Hyper-V containers offer a higher degree of isolation and security by not sharing the operating system kernel across instances. These are ideally used when a higher level of security isolation is required, such as systems that host untrusted code in hostile multi-tenant scenarios. The following figure illustrates the different isolation levels achieved by using these containers.
Container isolation levels

The Service Fabric application model treats containers as an application host, which can in turn host service replicas. There are three ways of utilizing containers within the Service Fabric application model. Existing applications like Node.js or JavaScript applications, or other executables, can be hosted within a container and deployed on Service Fabric as a Guest Container. A Guest Container is treated similarly to a Guest Executable by the Service Fabric runtime. The second scenario supports deploying stateless services inside a container hosted on Service Fabric. Stateless services using Reliable Services and Reliable Actors can be deployed within a container. The third option is to deploy stateful services in containers hosted on Service Fabric. This model also supports Reliable Services and Reliable Actors. Service Fabric offers several features to manage containerized Microservices. These include container deployment and activation, resource governance, repository authentication, port mapping, container discovery and communication, and the ability to set environment variables.

While containers offer a good level of isolation, they are still heavy in terms of deployment footprint. Service Fabric offers a simpler, powerful programming model to develop your services, which it calls Reliable Services. Reliable Services let you develop stateful and stateless services which can be directly deployed on Service Fabric clusters. For stateful services, the state can be stored close to the compute by using Reliable Collections. High availability of the state store and replication of the state is taken care of by the Service Fabric cluster management services. This contributes substantially to the performance of the system by improving the latency of data access. Reliable Services come with a built-in pluggable communication model which supports HTTP with Web API, WebSockets and custom TCP protocols out of the box.

A Reliable Service is addressed as stateless if it does not maintain any state within it, or if the scope of the state stored is limited to a service call and is entirely disposable. This means that a stateless service does not need to persist, synchronize or replicate state. A good example of such a service is a weather service like the MSN weather service. A weather service can be queried to retrieve weather conditions associated with a specific geographical location. The response is totally based on the parameters supplied to the service. This service does not store any state. Although stateless services are simpler to implement, most of the services in real life are not stateless. They either store state in an external state store or an internal one. Web front ends hosting APIs or web applications are good candidates to be hosted as stateless services.

A stateful service persists state. The outcome of a service call made to a stateful service is usually influenced by the state persisted by the service. A service exposed by a bank to return the balance on an account is a good example of a stateful service. The state may be stored in an external data store such as Azure SQL Database, Azure Blobs or Azure Table store. Most services prefer to store the state externally, considering the challenges around reliability, availability, scalability and consistency of the data store. With Service Fabric, state can be stored close to the compute by using Reliable Collections. To make things more lightweight, Service Fabric also offers a programming model based on the Virtual Actor pattern.
To make things more lightweight, Service Fabric also offers a programming model based on the Virtual Actor pattern, called Reliable Actors. The Reliable Actors programming model is built on top of Reliable Services, which guarantees the scalability and reliability of the services. An actor can be defined as an isolated, independent unit of compute and state with single-threaded execution. Actors can be created, managed and disposed of independently of each other, and a large number of actors can coexist and execute at the same time. Service Fabric Reliable Actors are a good fit for systems which are highly distributed and dynamic by nature.

Every actor is defined as an instance of an actor type, in the same way that an object is an instance of a class, and each actor is uniquely identified by an actor ID. The lifetime of Service Fabric actors is not tied to their in-memory state. As a result, actors are automatically created the first time a request for them is made, and the Reliable Actors garbage collector takes care of disposing of unused actors in memory.
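As a rough sketch of what this looks like in code (again using the .NET SDK; the shopping-cart contract, type names and URI below are hypothetical), an actor is declared through an interface deriving from IActor, implemented by a class deriving from Actor, and reached from clients through an actor proxy.

using System;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Actors;
using Microsoft.ServiceFabric.Actors.Client;
using Microsoft.ServiceFabric.Actors.Runtime;

// Hypothetical actor contract; remoted actor interfaces derive from IActor.
public interface IShoppingCartActor : IActor
{
    Task AddItemAsync(string productId, int quantity);
}

// State is kept in the actor state manager rather than in instance fields,
// so it survives deactivation and is replicated when persistence is enabled.
[StatePersistence(StatePersistence.Persisted)]
internal class ShoppingCartActor : Actor, IShoppingCartActor
{
    public ShoppingCartActor(ActorService actorService, ActorId actorId)
        : base(actorService, actorId) { }

    public Task AddItemAsync(string productId, int quantity)
    {
        // Adds the item, or increments the quantity if it is already in the cart.
        return this.StateManager.AddOrUpdateStateAsync(productId, quantity,
            (key, existingQuantity) => existingQuantity + quantity);
    }
}

// Client side: the actor is addressed purely by its ID and the actor service URI.
// IShoppingCartActor cart = ActorProxy.Create<IShoppingCartActor>(
//     new ActorId("user-42"), new Uri("fabric:/MyMicroServiceApp/ShoppingCartActorService"));

Note how this matches the description above: the first proxy call activates the actor on demand, and instances that go unused are later deactivated and garbage-collected by the runtime.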
Now that we understand the programming models, let's take a look at how the services deployed on Service Fabric are discovered and how the communication between services takes place.

Service Fabric discovery and communication

An application built on top of Microservices is usually composed of multiple services, each of which runs multiple replicas. Each service is specialized in a specific task. To achieve an end-to-end business use case, multiple services need to be stitched together, which requires services to communicate with each other. A simple example would be a web front-end service communicating with middle-tier services, which in turn connect to back-end services to handle a single user request. Some of these middle-tier services can also be invoked by external applications.

Services deployed on Service Fabric are distributed across multiple nodes in a cluster of virtual machines, and they can move across nodes dynamically. This distribution of services can either be triggered by a manual action or be the result of the Service Fabric cluster manager rebalancing services to achieve optimal resource utilization. This makes communication a challenge, as services are not tied to a particular machine. Let's understand how Service Fabric solves this challenge for its consumers.

Service protocols

Service Fabric, as a hosting platform for Microservices, does not interfere in the implementation of the service. On top of this, it also lets services decide on the communication channels they want to open. These channels are addressed as service endpoints. During service initiation, Service Fabric provides the opportunity for the services to set up endpoints for incoming requests on any protocol or communication stack. The endpoints are defined according to common industry standards, that is, IP:Port. It is possible for multiple service instances to share a single host process, in which case they either have to use different ports or a port-sharing mechanism. This ensures that every service instance is uniquely addressable.

Service endpoints

Service discovery

Service Fabric can rebalance services deployed on a cluster as a part of orchestration activities. This can be caused by resource-balancing activities, failovers, upgrades, scale-outs or scale-ins. This results in changes to service endpoint addresses as the services move across different virtual machines.

Service distribution

The Service Fabric Naming Service is responsible for abstracting this complexity from the consuming service or application. The Naming Service takes care of service discovery and resolution. All service instances in Service Fabric are identified by a unique URI such as fabric:/MyMicroServiceApp/AppService1. This name stays constant across the lifetime of the service, although the endpoint addresses which physically host the service may change. Internally, Service Fabric manages a map between the service names and the physical locations where the services are hosted. This is similar to the DNS service, which is used to resolve website URLs to IP addresses. The following figure illustrates the name resolution process for a service hosted on Service Fabric:

Name resolution

Connections from applications external to Service Fabric

Service communications to or between services hosted in Service Fabric can be categorized as internal or external. Internal communication among services hosted on Service Fabric is easily achieved using the Naming Service. External communication, originated from an application or a user outside the boundaries of Service Fabric, needs some extra work. To understand how this works, let's dive deeper into the logical network layout of a typical Service Fabric cluster.

A Service Fabric cluster is always placed behind an Azure Load Balancer, which acts as a gateway for all traffic that needs to pass to the cluster. The Load Balancer is aware of every port open on every node of the cluster. When a request hits the Load Balancer, it identifies the port the request is looking for and randomly routes the request to one of the nodes which has the requested port open. The Load Balancer is not aware of the services running on the nodes or the ports associated with the services. The following figure illustrates request routing in action:

Request routing

Configuring ports and protocols

The protocol and the ports to be opened by a Service Fabric cluster can be easily configured through the portal. Let's take an example to understand the configuration in detail. If we need a web application to be hosted on a Service Fabric cluster, and it should have port 80 opened over HTTP to accept incoming traffic, the following steps should be performed.

Configuring the service manifest

Once a service listening on port 80 is authored, we need to configure port 80 in the service manifest to open a listener in the service. This can be done by editing ServiceManifest.xml:

<Resources>
  <Endpoints>
    <Endpoint Name="WebEndpoint" Protocol="http" Port="80" />
  </Endpoints>
</Resources>

Configuring a custom endpoint

On the Service Fabric cluster, configure port 80 as a custom endpoint. This can be easily done through the Azure Management portal.

Configuring custom port

Configuring the Azure Load Balancer

Once the cluster is configured and created, the Azure Load Balancer can be instructed to forward the traffic to port 80. If the Service Fabric cluster is created through the portal, this step is automatically taken care of for every port which is configured in the cluster configuration.

Configuring Azure Load Balancer

Configuring the health check

The Azure Load Balancer probes the ports on the nodes for their availability to ensure reliability of the service. The probes can be configured on the Azure portal. This is an optional step, as a default probe configuration is applied for each endpoint when a cluster is created.

Configuring probe
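External clients therefore reach a service through the load balancer and a well-known port, while services running inside the cluster normally resolve each other through the Naming Service instead. The following is a hedged sketch of that internal path using ServicePartitionResolver from the .NET SDK; the fabric:/ URI is the one used earlier in this article, and the singleton partition key is an assumption made for illustration.

using System;
using System.Fabric;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Client;

internal static class ServiceResolution
{
    public static async Task<string> ResolveAddressAsync(CancellationToken cancellationToken)
    {
        // The default resolver queries the Naming Service of the cluster the code runs in.
        ServicePartitionResolver resolver = ServicePartitionResolver.GetDefault();

        // Resolve the current location of the service from its stable fabric:/ name.
        ResolvedServicePartition partition = await resolver.ResolveAsync(
            new Uri("fabric:/MyMicroServiceApp/AppService1"),
            new ServicePartitionKey(),          // singleton partition assumed here
            cancellationToken);

        // The returned address is a small JSON document listing the endpoints published
        // by the service's communication listeners; callers re-resolve if a call fails
        // because the service has since moved to another node.
        return partition.GetEndpoint().Address;
    }
}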
Built-in Communication API

Rather than hand-rolling resolution and connection handling as in the sketch above, Service Fabric offers many built-in communication options to support inter-service communications. Service Remoting is one of them. This option allows strongly typed remote procedure calls between Reliable Services and Reliable Actors. It is very easy to set up and operate, as Service Remoting handles the resolution of service addresses, connection, retry and error handling.

Service Fabric also supports HTTP for language-agnostic communication. The Service Fabric SDK exposes the ICommunicationClient and ServicePartitionClient classes for service resolution, HTTP connections and retry loops.

WCF is also supported by Service Fabric as a communication channel, enabling legacy workloads to be hosted on it. The SDK exposes the WcfCommunicationListener class for the server side, and the WcfCommunicationClient and ServicePartitionClient classes for the client side, to ease the programming hurdles.
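To round this off, here is a minimal sketch of the Service Remoting option. The inventory contract, type names and URI are hypothetical, and the target service must also register a remoting listener in its CreateServiceInstanceListeners or CreateServiceReplicaListeners override, which is omitted here.

using System;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Remoting;
using Microsoft.ServiceFabric.Services.Remoting.Client;

// Hypothetical remoting contract; remoted service interfaces derive from IService.
public interface IInventoryService : IService
{
    Task<int> GetStockLevelAsync(string productId);
}

internal static class InventoryClient
{
    public static Task<int> QueryStockAsync(string productId)
    {
        // ServiceProxy resolves the fabric:/ address through the Naming Service,
        // opens the connection and retries transient failures on the caller's behalf.
        IInventoryService proxy = ServiceProxy.Create<IInventoryService>(
            new Uri("fabric:/MyMicroServiceApp/InventoryService"));

        return proxy.GetStockLevelAsync(productId);
    }
}

The strongly typed proxy is what distinguishes this option from the HTTP and WCF channels described above: the caller works against an ordinary interface while the SDK takes care of the plumbing.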