Overview of Important Concepts of Microsoft Dynamics NAV 2016

Packt
19 Jun 2017
15 min read

In this article by Rabindra Sah, author of the book Mastering Microsoft Dynamics NAV 2016, we will cover the important concepts of Microsoft Dynamics NAV 2016.

(For more resources related to this topic, see here.)

Version control

Version control systems are third-party systems that track changes to the files and folders of a system. In this section, we will discuss two popular version control systems for Microsoft Dynamics NAV.

Let's take a look at the web services architecture in Dynamics NAV. Microsoft Dynamics NAV 2016 uses two types of web services: SOAP web services and OData web services:

Object type | OData | SOAP
Page        | Yes   | Yes
Query       | Yes   | No
Codeunit    | No    | Yes

The main difference between the two in Microsoft Dynamics NAV is that with SOAP web services you can publish and reuse business logic, while with OData web services you can expose data to external applications.

In a dataset, you can make certain changes in order to improve the performance of a report. This is not always applicable to all reports; it depends on the nature of the problem, and you should spend time analyzing the problem before fixing it. The following are some basic considerations to keep in mind when dealing with datasets in NAV reports:

- Try to avoid the creation of dataset lines if possible; create variables instead
- Try to reduce the number of rows and columns
- Apply filters to the request page
- For slow reports with a long runtime, use the job queue to generate the report on the server
- Use text constants for captions, if needed
- Avoid Include Caption for columns that do not need captions

Technical upgrade

A technical upgrade is the least used upgrade process. It applies when you make one version upgrade at a time, that is, from Version 2009 to Version 2013, or from Version 2013 to Version 2015. When you plan to jump multiple versions at the same time, a technical upgrade might not be the best option to choose. It can be efficient when there are minimal changes in the source database objects, that is, fewer customizations. It can also be considered an efficient choice when the business requirements for the product are still the same or have changed very little.

Upgrading estimates

In this section, we are going to look at the core components that drive the estimates for the upgrade process. The components to be considered while estimating an upgrade project are as follows:

- Code upgrade
- Object transformation
- Data upgrade
- Testing and implementation

Code upgrade

The best method to estimate the code upgrade is to use a file compare tool. It helps with file and folder comparison, version control, conflict detection and resolution, automatic intelligent merging, in-place editing of files, change tracking, and code analysis. You can also design your own compare tool if you want. For example, take two versions of the same object, say two versions of the Customer table, open them in Notepad, check line by line whether there is any difference, and then log the lines that differ. You can achieve this with C# or any programming language. Run this for each object in the two versions of the NAV system and it will provide statistics on the amount of change, which can be really handy when estimating the code changes. You can also do it manually if the number of objects is small. A minimal sketch of such a compare tool is shown below.
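
As an illustration only (this is not code from the book, and the file names are hypothetical), here is a minimal Python sketch of such a line-by-line compare. It assumes each object has been exported as a text file from both versions of the database:

    import difflib

    def count_changed_lines(old_path, new_path):
        # Read the two exported versions of the same NAV object (text exports).
        with open(old_path, encoding="utf-8") as f:
            old_lines = f.readlines()
        with open(new_path, encoding="utf-8") as f:
            new_lines = f.readlines()
        # unified_diff yields lines starting with '+' or '-' for every changed line.
        added = removed = 0
        for line in difflib.unified_diff(old_lines, new_lines, lineterm=""):
            if line.startswith("+") and not line.startswith("+++"):
                added += 1
            elif line.startswith("-") and not line.startswith("---"):
                removed += 1
        return added, removed

    # Hypothetical exports of the Customer table from the old and new versions.
    added, removed = count_changed_lines("Customer_old.txt", "Customer_new.txt")
    print("Customer table: {0} lines added, {1} lines removed".format(added, removed))

Running something like this over every exported object gives a rough change count per object, which can feed directly into the upgrade estimate.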

It is recommended to use the Microsoft Dynamics Sure Step methodology while carrying out any upgrade project. Dynamics Sure Step is a full life cycle methodology designed to provide discipline and best practices for upgrading, migrating, configuring, and deploying a Microsoft Dynamics NAV solution.

Object transformation

We must take a close look at objects that cannot be upgraded directly. For example, if your source database reports are in the classic version or an early RTC version, it might not be feasible to transform them into the latest report format because of the huge technological gap between them. In these cases, you must be very careful when estimating these upgrades. For example, TransHeader, TransFooter, and other groupings present in classic reports are hard to map directly to Dynamics NAV 2016 reports. We might have to develop our own logic to achieve these grouping values, which can take additional time. So, always treat this section as a customization instead of an upgrade. Microsoft partner vendors mostly keep this section separate and, in most cases, assign separate resources to it so that the work can proceed in parallel. Reports can also have Word layouts, which should also be considered during the estimates.

Data upgrade

We perform a number of distinct steps while upgrading data, and you must consider the time for each of them in order to estimate the data upgrade correctly. The first thing we do is a trial data upgrade. This allows us to analyze different aspects, such as how long it takes, whether the data upgrade process works at all, and whether we can test the results of the trial upgrade. We might need to repeat the trial a number of times before it works. Then we can do a preproduction data upgrade: since the moment we started our analysis and development, the production data might have changed, so a preproduction run also gives us a closer estimate of the time window available when we do the real implementation. Acceptance testing is also a very important phase. Once the data upgrade is done, you need the end users or key users to confirm that the data has been converted correctly. Then you are ready to perform the live data upgrade. All of these phases in the data upgrade require time, and the amount of time also depends on the size of the database and the version that you are starting from. This gives you an overview of the different pillars that are important when estimating how much time it might take to prepare and analyze the upgrade project.

Software as a Service

Software as a Service (SaaS) is a cloud services delivery model, which offers an on-demand online software subscription. The latest SaaS release from Microsoft is Dynamics 365 (previously known as Project Madeira). The following image illustrates the SaaS taxonomy.

Here you can clearly see the different services, such as Salesforce, NetSuite, and QuickBooks, that are distributed as SaaS:

Software as a Service taxonomy

Understanding the PowerShell cmdlets

We can categorize the PowerShell commands into five major categories of use:

- Commands for server administrators
- Commands for implementers for company management
- Commands for administrators for upgrades
- Commands for administrators for security
- Commands for developers

Commands for server administrators

The first category contains commands that can be used for administrative operations such as create, save, remove, get, import, export, set, and the like, as given in the following list:

- Dismount-NAVTenant
- Export-NAVApplication
- Export-NAVServerLicenseInformation
- Get-NAVApplication
- Get-NAVServerConfiguration
- Get-NAVServerInstance
- Get-NAVServerSession
- Get-NAVTenant
- Get-NAVWebServerInstance
- Import-NAVServerLicense
- Mount-NAVApplication
- Mount-NAVTenant
- New-NAVServerConfiguration
- New-NAVServerInstance
- New-NAVWebServerInstance
- Remove-NAVApplication
- Remove-NAVServerInstance
- Remove-NAVServerSession
- Save-NAVTenantConfiguration
- Set-NAVServerConfiguration
- Set-NAVServerInstance
- Set-NAVWebServerInstanceConfiguration
- Sync-NAVTenant

We can set up web server instances, change configurations, and create a multitenant environment; note that a multitenant environment can only be administered using PowerShell.

Commands for implementers for company management

The second category of commands can be used by implementers, in particular for operations related to installation and configuration of the system. The following are a few examples of this category of commands:

- Copy-NAVCompany
- Get-NAVCompany
- New-NAVCompany
- Remove-NAVCompany
- Rename-NAVCompany

Commands for administrators for upgrades

The third category is a special category for administrators, related to upgrade operations:

- Get-NAVDataUpgrade
- Resume-NAVDataUpgrade
- Start-NAVDataUpgrade
- Stop-NAVDataUpgrade

This category of commands is useful along with the upgrade toolkit.

Commands for administrators for security

This is one of the most important categories, related to the backend of the system. The commands in this category grant accessibility and permission management powers to administrators. I strongly recommend these make-life-easy commands if you are working on security operations. Commands in this category include the following:

- Get-NAVServerUser
- Get-NAVServerUserPermissionSet
- New-NAVServerPermission
- New-NAVServerPermissionSet
- New-NAVServerUser
- New-NAVServerUserPermissionSet
- Remove-NAVServerPermission
- Remove-NAVServerPermissionSet
- Remove-NAVServerUser
- Remove-NAVServerUserPermissionSet
- Set-NAVServerPermission
- Set-NAVServerPermissionSet
- Set-NAVServerUser

These commands are basically used to add users and to manage permission sets.

Commands for developers

Last, but not least, a treasure trove of commands is dedicated to developers; these are among my most used commands. The set covers a wide range of operations and should be part of your daily work life.

This set of commands includes the following:

- Compare-NAVApplicationObject
- Export-NAVApplicationObjectLanguage
- Get-NAVApplicationObjectProperty
- Get-NAVWebService
- Import-NAVApplicationObjectLanguage
- Invoke-NAVCodeunit
- Join-NAVApplicationObjectFile
- Join-NAVApplicationObjectLanguageFile
- Merge-NAVApplicationObject
- New-NAVWebService
- Remove-NAVApplicationObjectLanguage
- Remove-NAVWebService
- Set-NAVApplicationObjectProperty
- Split-NAVApplicationObjectFile
- Split-NAVApplicationObjectLanguageFile
- Test-NAVApplicationObjectLanguage
- Update-NAVApplicationObject

Microsoft Dynamics NAV 2016

Posting preview

In Microsoft Dynamics NAV 2016, you can review the entries that will be created before you post a document or journal. This is made possible by a new feature called Preview Posting, which enables you to preview the impact of a posting against each ledger associated with a document. In every document and journal that can be posted, you can click on Preview Posting to review the different types of entries that will be created when you post the document or journal.

Workflow

Workflow enables you to support modern business processes along with best practices or industry-standard practice, for example, ensuring that a customer credit limit has been independently verified, or that the requirement of two approvers for a payment process has been met. Workflow has three main capabilities:

- Approvals
- Notifications
- Automation

A workflow basically has three components: Event, Condition, and Response. The When event identifies the event in the system, the On condition specifies the condition under which that event applies, and the Then response is the action that is taken on the basis of that condition. This is shown in the following screenshot:

Exception handling

Exception handling is a new concept in Microsoft Dynamics NAV. It was imported from .NET, and is now gaining popularity among C/AL programmers because of its effective usage. As in C#, for exception handling we use Try functions. The Try functions are new additions to the function library that enable you to handle errors that occur in the application at runtime; we are not dealing with compile-time issues here. For example, the message Error Returned: Divisible by Zero Error. is always a critical error, and should be handled in order to be avoided. This also stops the system from entering an unsafe state. As in C# and other rich programming languages, the Try functions in C/AL provide easy-to-understand error messages, which can also be dynamic and generated directly by the system. This feature helps us plan for those errors in advance and present better, more descriptive errors to the users. You can use the Try functions to catch errors/exceptions that are thrown by Microsoft Dynamics NAV or exceptions that are thrown during .NET Framework interoperability operations. The Try function is in many ways similar to the conditional Codeunit.Run function, except for the following points:

- Database changes made within a Try function are not rolled back
- Try function calls do not require write transactions to be committed to the database

Visual Basic programming

Visual Basic (VB) is an event-driven programming language; it also ships with an integrated development environment (IDE). If you are familiar with the BASIC programming language, Visual Basic will be easy to understand, since it is derived from BASIC.

I will try to provide the basics of this language here, since it is the least discussed topic in the NAV community, yet essential for all report designers and developers to understand. We do not need to understand each and every detail of the VB programming language, but understanding the syntax and structure will help us understand the code that we are going to use in the RDLC report. An example of VB code can be written as follows:

    Public Function BlankZero(ByVal Value As Decimal)
        If Value = 0 Then
            Return ""
        End If
        Return Value
    End Function

The preceding function, BlankZero, basically just returns the value of the parameter, returning an empty string when that value is zero. It is one of the simplest functions that can be found in the code section of an RDLC report. Unlike C/AL, we do not need to end each code line with a semicolon (;).

Writing your own Test unit

Writing your own Test unit is very important, not just to test your code but also to give you an eagle's-eye view of how your code actually interacts with the system. It gives your coding a meaning, and allows others to understand and relate to your development. Writing a unit test basically involves four steps, as shown in the following diagram:

Certificates

A certificate is nothing but a token that binds an identity to a cryptographic key. Microsoft Management Console (MMC) is a presentation service for management applications in the Microsoft Windows environment. It is part of the independent software vendor (ISV) extensible services; that is, it provides a common integrated environment for snap-ins provided by Microsoft and third-party software vendors.

Certificate authority

A certification authority (CA) is an entity that issues certificates. If all certificates have a common issuer, then the issuer's public key can be distributed out of band.

In the preceding diagram, the certificate server is the third party, which has a secure relationship with both of the parties that want to communicate. The CA is connected to both parties through a secure channel. User B sends a copy of his public key to the CA. The CA then encrypts the public key of User B using a different key. Two files are created as a result: the first is an encrypted package, which is nothing but the certificate, and the second is the digital signature of the certificate server. The certificate server returns the certificate to User B. Now User A asks User B for a certificate. User B sends a copy of its certificate to User A, again over a secure communication channel. User A decrypts the certificate using the key obtained from the certificate server and extracts the public key of User B. User A also checks the digital signature of the certificate server to ensure that the certificate is authentic. Whatever data is encrypted using the public key of User B can only be decrypted using the private key of User B, which is present only with User B and not with any intruder on the Internet; so only User B can decrypt and read the content sent by User A. Once the keys are transferred, User A can communicate with User B. If User B wants to send data to User A, then User B needs the public key of User A, which will again be granted by the CA.

Certificates are issued to a principal. The issuance policy specifies the principals to which the CA will issue certificates. The certification authority does not need to be online to check the validity of a certificate; it can sit on a server in a locked room and is only consulted when a principal needs a certificate.

Certificates are a way of associating an identity with a public key and a distinguished name.

Authentication policy for CA

The authentication policy defines the way principals prove their identities. Each CA has its own requirements, constrained by contractual requirements such as those with a Primary Certification Authority (PCA):

- The PCA issues certificates to CAs
- CAs issue certificates to individuals and organizations
- All rely on non-electronic proofs of identity, such as biometrics (fingerprints), documents (driver's license or passport), or personal knowledge

A specific authentication policy can be determined by checking the policy of the CA that signed the certificate.

Kinds of certificates

There are at least four kinds of certificates, which are as follows:

- Site certificates (for example, www.msdn.microsoft.com).
- Personal certificates (used if the server wants to authenticate the client). You can install a personal certificate in your browser.
- Software vendor certificates (used when software is installed). Often, when you run a program, a dialog box appears warning that The publisher could not be verified. Are you sure you want to run this software? This is caused either because the software does not have a software vendor certificate, or because you do not trust the CA that signed the software vendor certificate.
- Anonymous certificates (used, for example, by a whistleblower so that the recipient can tell that the same person sent a sequence of messages without knowing who that person is).

Other types of certificates

Certificates can also be based on a principal's association with an organization (such as Microsoft (MSDN)), where the principal lives, or the role played in an organization (such as the comptroller).

Summary

In this article we covered important concepts in Dynamics NAV 2016, such as version control, dataset considerations, technical upgrades, Software as a Service, certificates, and so on.

Resources for Article:

Further resources on this subject:

- Introduction to Microsoft Dynamics NAV 2016 [article]
- Customization in Microsoft Dynamics CRM [article]
- Exploring Microsoft Dynamics NAV – An Introduction [article]

Introduction to Cyber Extortion

Packt
19 Jun 2017
21 min read

In this article, Dhanya Thakkar, the author of the book Preventing Digital Extortion, explains how we often make the mistake of relying on the past to predict the future, and nowhere is this more relevant than in the sphere of the Internet and smart technology. People, processes, data, and things are tightly and increasingly connected, creating new, intelligent networks unlike anything we have seen before. The growth is exponential and the consequences are far reaching for individuals, and progressively so for businesses. We are creating the Internet of Things and the Internet of Everything.

(For more resources related to this topic, see here.)

It has become unimaginable to run a business without using the Internet. It is not only an essential tool for current products and services, but an unfathomable well for innovation and fresh commercial breakthroughs. The transformative revolution is spilling into the public sector, affecting companies that act as vanguards and diffusing to consumers, who are in a feedback loop with suppliers, constantly obtaining and demanding new goods.

Advanced technologies that apply not only to machine-to-machine communication but also to smart sensors generate complex networks to which, theoretically, anything that can carry a sensor can be connected. Cloud computing and cloud-based applications provide immense yet affordable storage capacity for people and organizations and facilitate the spread of data in more ways than one. Keeping the Internet's nature in mind, the physical boundaries of business become blurred, and virtual data protection must incorporate a new characteristic of security: encryption.

In the middle of the storm of the IoT, major opportunities arise, and equally so, unprecedented risks lurk. People often think that what they put on the Internet is protected and closed information. It is hardly so. Sending an e-mail is not like sending a letter in a closed envelope. It is more like sending a postcard, where anyone who gets their hands on it can read what's written on it.

Along with people who want to utilize the Internet as an open business platform, there are people who want to find ways of circumventing legal practices and misusing the wealth of data on computer networks by unlawfully gaining financial profits, assets, or authority that can be monetized. Being connected is now critical. As cyberspace grows, so do attempts to violate vulnerable information on a global scale. This newly discovered business dynamic is under persistent threat from criminals. Cyberspace, cyber crime, and cyber security are perceptibly being found in the same sentence.

Cyber crime – under-defined and under-regulated

A massive problem encouraging the perseverance and evolution of cyber crime is the lack of an adequate, unanimous definition and the under-regulation of the phenomenon on a national, regional, and global level. Nothing is criminal unless stipulated by the law. Global law enforcement agencies, academia, and state policies have studied the constant development of the phenomenon since its first appearance in 1989, in the shape of the AIDS Trojan virus transferred from an infected floppy disk. Regardless of the bizarre beginnings, there is nothing entertaining about cybercrime. It is serious. It is dangerous.

Significant efforts have been made to define cybercrime on a conceptual level in academic research and in national and regional cybersecurity strategies. Still, as the nature of the phenomenon evolves, so must the definition.

Research reports are still at a descriptive level, and underreporting is a major issue. On the other hand, businesses are more exposed due to ignorance of the fact that modern-day criminals increasingly rely on the Internet to enhance their criminal operations. Case in point: Aaushi Shah and Srinidhi Ravi from the Asian School of Cyber Laws have created a cybercrime list by compiling a set of 74 distinctive and creatively named actions emerging in the last three decades that can be interpreted as cybercrime. These actions target anything from e-mails to smartphones, personal computers, and business intranets: piggybacking, joe jobs, and easter eggs may sound like cartoons, but their true nature resembles a crime thriller.

The concept of cybercrime

Cyberspace is a giant community made out of connected computer users and data on a global level. As a concept, cybercrime involves any criminal act dealing with computers and networks, including traditional crimes in which the illegal activities are committed through the use of a computer and the Internet.

As businesses become more open and widespread, the boundary between data freedom and restriction becomes more porous. Countless e-shopping transactions are made, hospitals keep records of patient histories, students pass exams, and around-the-clock payments are increasingly processed online. It is no wonder that criminals are relentlessly invading cyberspace trying to find a crack to slip through. There are no recognizable border controls on the Internet, but a business that wants to evade harm needs to understand cybercrime's nature and apply means to restrict access to certain information.

Instead of identifying it as a single phenomenon, Majid Yar proposes a common denominator approach for all ICT-related criminal activities. In his book Cybercrime and Society, Yar refers to Thomas and Loader's working concept of cybercrime as follows:

"Computer-mediated activities which are either illegal or considered illicit by certain parties and which can be conducted through global electronic network."

Yar elaborates on an important distinction in this definition by emphasizing the difference between crime and deviance. Criminal activities are explicitly prohibited by formal regulations and bear sanctions, while deviances breach informal social norms. This is a key note to keep in mind. It accommodates the evolving definition of cybercrime, which keeps transforming to keep up with resourceful criminals who constantly think of new ways to gain illegal advantages.

Law enforcement agencies on a global level make an essential distinction between two subcategories of cybercrime:

- Advanced cybercrime or high-tech crime
- Cyber-enabled crime

The first subcategory, according to Interpol, includes newly emerged, sophisticated attacks against computer hardware and software. The second category contains traditional crimes in modern clothes, for example crimes against children, such as exposing children to illegal content; financial crimes, such as payment card fraud, money laundering, and counterfeiting currency and security documents; social engineering fraud; and even terrorism.

We are far beyond the limited impact of the 1989 cybercrime embryo. Intricate networks are created daily. They present new criminal opportunities, cause greater damage to businesses and individuals, and require a global response.

Cybercrime is conceptualized as a service embracing a commercial component. Cybercriminals work as businessmen who look to sell a product or a service to the highest bidder.

Critical attributes of cybercrime

An abridged version of the cybercrime concept provides answers to three vital questions:

- Where are criminal activities committed and what technologies are used?
- What is the reason behind the violation?
- Who is the perpetrator of the activities?

Where and how – realm

Cybercrime can be an online, digitally committed, traditional offense. Even if the component of an online, digital, or virtual existence were not included in its nature, it would still have been considered crime in the traditional, real-world sense of the word. In this sense, as the nature of cybercrime advances, so must the spearheads of law enforcement rely on laws written for the non-digital world to solve problems encountered online. Otherwise, the combat becomes stagnant and futile.

Why – motivation

The prefix "cyber" sometimes creates additional misperception when applied to the digital world. It is critical to differentiate cybercrime from other malevolent acts in the digital world by considering the reasoning behind the action. This is not only imperative for clarification purposes, but also for extending the definition of cybercrime over time to include previously indeterminate activities. Offenders commit a wide range of dishonest acts for selfish motives such as monetary gain, popularity, or gratification. When the intent behind the behavior is misinterpreted, confusion may arise, and actions that should not have been classified as cybercrime could be subject to criminal prosecution.

Who – the criminal deed component

The action must be attributed to a perpetrator. Depending on the source, certain threats can be confined to the criminal domain or expanded to endanger potentially larger targets, representing an attack on national security or a terrorist attack. Undoubtedly, the concept of cybercrime needs additional refinement, and a comprehensive global definition is in progress. Along with global cybercrime initiatives, national regulators are continually working on implementing laws, policies, and strategies to exemplify cybercrime behaviors and thus strengthen combating efforts.

Types of common cyber threats

In their endeavors to raise cybercrime awareness, the United Kingdom's National Crime Agency (NCA) divided common and popular cybercrime activities by the target under threat. While both individuals and organizations are targets of cyber criminals, it is the business-consumer networks that suffer irreparable damage due to the magnitude of harmful actions.

Cybercrime targeting consumers

- Phishing: illegitimate e-mails are sent to the receiver to collect security information and personal details
- Webcam manager: an instance of grossly violating behavior in which criminals take over a person's webcam
- File hijacker: criminals hijack files and hold them "hostage" until the victim pays the demanded ransom
- Keylogging: criminals record the text behind the keys you press on your keyboard
- Screenshot manager: enables criminals to take screenshots of an individual's computer screen
- Ad clicker: annoying but dangerous ad clickers direct the victim's computer to click on a specific harmful link

Cybercrime targeting businesses

Hacking

Hacking is basically unauthorized access to computer data.

Hackers inject specialist software with which they try to take administrative control of a computerized network or system. If the attack is successful, the stolen data can be sold on the dark web and compromise people's integrity and safety by intruding on and abusing the privacy of products as well as sensitive personal and business information. Hacking is particularly dangerous when it compromises the operation of systems that manage physical infrastructure, for example, public transportation.

Distributed denial of service (DDoS) attacks

When an online service is targeted by a DDoS attack, its communication links overflow with data from messages sent simultaneously by botnets. Botnets are networks of controlled computers that block legitimate access to online services for users: the targeted system is unable to provide normal access because it cannot handle the huge volume of incoming traffic.

Cybercrime in relation to overall computer crime

Many moons have passed since 2001, when the first international treaty that targeted Internet and computer crime—the Budapest Convention on Cybercrime—was adopted. The Convention's intention was to harmonize national laws, improve investigative techniques, and increase cooperation among nations. It was drafted with the active participation of the Council of Europe's observer states Canada, Japan, South Africa, and the United States, and drawn up by the Council of Europe in Strasbourg, France. Brazil and Russia, on the other hand, refused to sign the document on the basis of not being involved in the Convention's preparation.

In Understanding Cybercrime: A Guide to Developing Countries (Gercke, 2011), Marco Gercke makes an excellent final point:

"Not all computer-related crimes come under the scope of cybercrime. Cybercrime is a narrower notion than all computer-related crime because it has to include a computer network. On the other hand, computer-related crime in general can also affect stand-alone computer systems."

Although progress has been made, consensus over the definition of cybercrime is not final. Keeping history in mind, a fluid and developing approach must be maintained when applying working and legal interpretations. In the end, international noncompliance must be overcome to establish a common and safe ground to tackle persistent threats.

Cybercrime localized – what is the risk in your region?

Europol's heat map for the period between 2014 and 2015 reports on the geographical distribution of cybercrime on the basis of the United Nations geoscheme. The data in the report encompassed cyber-dependent crime and cyber-enabled fraud, but did not include investigations into online child sexual abuse.

North and South America

Due to its overwhelming presence, it is not a great surprise that the North American region occupies several lead positions concerning cybercrime, both in terms of hosting malicious content and in providing residency to victims in the regions that contribute to the global cybercrime numbers. The United States hosted between 20% and nearly 40% of the world's command-and-control servers during 2014. Additionally, the US currently hosts over 45% of the world's phishing domains and is among the world-leading spam producers. Between 16% and 20% of all global bots are located in the United States, while almost a third of point-of-sale malware and over 40% of all ransomware incidents were detected there.

Twenty EU member states have initiated criminal procedures in which the parties under suspicion were located in the United States. In addition, over 70 percent of the countries located in the Single Euro Payments Area (SEPA) have been subject to losses from skimmed payment cards because of the distinct way in which the US, under certain circumstances, processes card payments without chip-and-PIN technology.

There are instances of cybercrime in South America, but the scope of the southern continent's participation is far smaller than that of its northern neighbor, both in industry reporting and in criminal investigations. Ecuador, Guatemala, Bolivia, Peru, and Brazil are consistently rated high on the malware infection scale, and the situation is not changing, while Argentina and Colombia remain among the top 10 spammer countries. Brazil plays a critical role in point-of-sale malware, ATM malware, and skimming devices.

Europe

The key aspect making Europe a region with excellent cybercrime potential is its fast, modern, and reliable ICT infrastructure. According to The Internet Organised Crime Threat Assessment (IOCTA) 2015, cybercriminals abuse Western European countries to host malicious content and launch attacks inside and outside the continent. EU countries host approximately 13 percent of the global malicious URLs, of which the Netherlands is the leading host, while Germany, the U.K., and Portugal come second, third, and fourth respectively. Germany, the U.K., the Netherlands, France, and Russia are important hosts for bot C&C infrastructure and phishing domains, while Italy, Germany, the Netherlands, Russia, and Spain are among the top sources of global spam. The Scandinavian countries and Finland are known for having the lowest malware infection rates. France, Germany, Italy, and to some extent the U.K. have the highest malware infection rates and the highest proportion of bots found within the EU; however, these findings are presumably the result of the high populations of the aforementioned EU countries.

Half of the EU member states identified criminal infrastructure or suspects in the Netherlands, Germany, Russia, or the United Kingdom. One third of the European law enforcement agencies confirmed connections to Austria, Belgium, Bulgaria, the Czech Republic, France, Hungary, Italy, Latvia, Poland, Romania, Spain, or Ukraine.

Asia

China is the United States' counterpart in Asia in terms of the top position concerning reported threats to Internet security. Fifty percent of the EU member states' investigations on cybercrime include offenders based in China. Moreover, certain authorities quote China as the source of one third of all global network attacks. In the company of India and South Korea, China is third among the top 10 countries hosting botnet C&C infrastructure, and it has one of the highest global malware infection rates. India, Indonesia, Malaysia, Taiwan, and Japan host serious bot numbers, too.

Japan plays a significant part both as a source country and as a victim of cybercrime. Apart from being an abundant spam source, Japan is among the top three Asian countries where EU law enforcement agencies have identified cybercriminals. On the other hand, Japan, along with South Korea and the Philippines, is the most popular country in the East and Southeast region of Asia where organized crime groups run sextortion campaigns. Vietnam, India, and China are the top Asian countries featuring spamming sources.

Alternatively, China and Hong Kong are the most prominent locations for hosting phishing domains. From another point of view, the country code top-level domains (ccTLDs) for Thailand and Pakistan are commonly used in phishing attacks. In this region, most SEPA members reported losses from the use of skimmed cards; in fact, five (Indonesia, the Philippines, South Korea, Vietnam, and Malaysia) of the top six countries are from this region.

Africa

Africa remains renowned for combined and sophisticated cybercrime practices. Data from the Europol heat map report indicates that the African region holds a ransomware-as-a-service presence equivalent to that of the European black market; cybercriminals from Africa make profits from the same products. Nigeria is on the list of the top 10 countries compiled by EU law enforcement agents featuring identified cybercrime perpetrators and related infrastructure. In addition, four out of the top five top-level domains used for phishing are of African origin: .cf, .za, .ga, and .ml.

Australia and Oceania

Australia has two critical cybercrime claims on a global level. First, the country is present in several top-10 charts in the cybersecurity industry, including bot populations, ransomware detection, and network attack originators. Second, the country-code top-level domain for the Palau Islands in Micronesia is massively used by Chinese attackers as the TLD with the second highest proportion of domains used for phishing.

Cybercrime in numbers

Experts agree that the past couple of years have seen digital extortion flourishing. In 2015 and 2016, cybercrime reached epic proportions. Although there is agreement about the serious rise of the threat, putting each ransomware aspect into numbers is a complex issue. Underreporting is not an issue only in academic research but also in practical case scenarios. The threat to businesses around the world is growing, because businesses keep it quiet. The scope of extortion is obscured because companies avoid reporting and pay the ransom in order to settle the issue quietly.

As much as this applies to corporations, it is even more relevant for public enterprises or organizations that provide a public service of any kind. Government bodies, hospitals, transportation companies, and educational institutions are increasingly targeted with digital extortion. Cybercriminals estimate that these targets are likely to pay in order to protect their reputation and to enable uninterrupted execution of public services. When CEOs and CIOs keep their mouths shut, relying on reported cybercrime numbers becomes a tricky business. The real picture is not only what is visible in the media or via professional networking, but also what remains hidden and is dealt with discreetly by security experts.

In the second quarter of 2015, Intel Security reported an increase in ransomware attacks of 58%. In just the first 3 months of 2016, cybercriminals amassed $209 million from digital extortion. By making businesses and authorities pay the relatively small average ransom amount of $10,000 per incident, extortionists turn out to make smart business moves. Companies are not shaken to the core by this amount. Furthermore, they choose to pay and get back to business as usual, thus eliminating further financial damage that may arise from being out of business and losing customers. Extortionists understand the nature of ransom payment and what it means for businesses and institutions. As sound entrepreneurs, they know their market.

Instead of setting unreasonably high prices that may cause major panic and draw severe law enforcement action, they keep a low profile. In this way, they keep the dark business flowing, moving from one victim to the next and evading legal measures.

A peculiar perspective – cybercrime in absolute and normalized numbers

"To get an accurate picture of the security of cyberspace, cybercrime statistics need to be expressed as a proportion of the growing size of the Internet similar to the routine practice of expressing crime as a proportion of a population, i.e., 15 murders per 1,000 people per year."

This statement by Eric Jardine from the Global Commission on Internet Governance (Jardine, 2015) launched a new perspective on cybercrime statistics, one that accounts for the changing nature and size of cyberspace. The approach assumes that viewing cybercrime findings in isolation from the rest of the changes in cyberspace provides a distorted view of reality. The report aimed at normalizing crime statistics and thus avoiding the unrealistically negative cybercrime scenarios that emerge when drawing conclusions from the limited reliability of absolute numbers. In general, there are three ways in which absolute numbers can be misinterpreted:

- Absolute numbers can negatively distort the real picture, while normalized numbers show whether the situation is getting better
- Both numbers can show that things are getting better, but normalized numbers will show that the situation is improving more quickly
- Both numbers can indicate that things are deteriorating, but normalized numbers will indicate that the situation is deteriorating at a slower rate than absolute numbers suggest

Additionally, the GCIG (Global Commission on Internet Governance) report includes some excellent reasoning about the nature of empirical research undertaken in the age of the Internet. While almost everyone and anything is connected to the network and data can be easily collected, most of the information is fragmented across numerous private parties. Naturally, this muddies the clarity of findings about the presence of cybercrime in the digital world. When data is borrowed from multiple sources and missing slots are filled with hypothetical numbers, the end result can be skewed. Keeping this observation in mind, it is crucial to emphasize that the GCIG report measured the size of cyberspace by accounting for eight key aspects:

- The number of active mobile broadband subscriptions
- The number of smartphones sold to end users
- The number of domains and websites
- The volume of total data flow
- The volume of mobile data flow
- The annual number of Google searches
- The Internet's contribution to GDP

It has been illustrated several times during this introduction that as cyberspace grows, so does cybercrime. To fight the menace, businesses and individuals enhance security measures and put more money into their security budgets. A recent CIGI-Ipsos (Centre for International Governance Innovation - Ipsos) survey collected data from 23,376 Internet users in 24 countries, including Australia, Brazil, Canada, China, Egypt, France, Germany, Great Britain, Hong Kong, India, Indonesia, Italy, Japan, Kenya, Mexico, Nigeria, Pakistan, Poland, South Africa, South Korea, Sweden, Tunisia, Turkey, and the United States. Survey results showed that 64% of users were more concerned about their online privacy compared to the previous year, whereas 78% were concerned about having their banking credentials hacked.

Additionally, 77% of users were worried about cyber criminals stealing private images and messages. These perceptions led to behavioral changes: 43% of users started avoiding certain sites and applications, some 39% regularly updated their passwords, and about 10% used the Internet less (CIGI-Ipsos, 2014).

The GCIG report results are indicative of a heterogeneous cyber security picture. Although many cybersecurity aspects are deteriorating over time, some are staying constant, and a surprising number are actually improving. Jardine compares cyberspace security to trends in crime rates in a specific country, operationalizing cyber attacks via 13 measures presented in the following table, as seen in Table 2 of Summary Statistics for the Security of Cyberspace (E. Jardine, GCIG Report, p. 6):

Measure                        | Minimum    | Maximum       | Mean        | Standard Deviation
New Vulnerabilities            | 4,814      | 6,787         | 5,749       | 781.880
Malicious Web Domains          | 29,927     | 74,000        | 53,317      | 13,769.99
Zero-day Vulnerabilities       | 8          | 24            | 14.85714    | 6.336
New Browser Vulnerabilities    | 232        | 891           | 513         | 240.570
Mobile Vulnerabilities         | 115        | 416           | 217.35      | 120.85
Botnets                        | 1,900,000  | 9,437,536     | 4,485,843   | 2,724,254
Web-based Attacks              | 23,680,646 | 1,432,660,467 | 907,597,833 | 702,817,362
Average per Capita Cost        | 188        | 214           | 202.5       | 8.893818078
Organizational Cost            | 5,403,644  | 7,240,000     | 6,233,941   | 753,057
Detection and Escalation Costs | 264,280    | 455,304       | 372,272     | 83,331
Response Costs                 | 1,294,702  | 1,738,761     | 1,511,804   | 152,502.2526
Lost Business Costs            | 3,010,000  | 4,592,214     | 3,827,732   | 782,084
Victim Notification Costs      | 497,758    | 565,020       | 565,020     | 30,342

While reading the table results, an essential caveat must be kept in mind: statistics for cybercrime costs are not available worldwide. The author worked with the assumption that data about the US costs of cybercrime indicates costs on a global level. For obvious reasons, however, this assumption may not hold, and many countries will have had significantly lower costs than the US. To mitigate the assumption's flaws, the author provides comparative levels of those measures. The organizational cost of data breaches in 2013 in the United States was a little less than six million US dollars, while the average on the global level, drawn from the Ponemon Institute's annual Cost of Data Breach Study (from 2011, 2013, and 2014, via Jardine, p. 7), which measured the overall cost of data breaches including the US ones, was US$2,282,095. The conclusion is that US numbers will distort global cost findings by inflating the real costs and will work against the paper's suggestion, which is that normalized numbers paint a rosier picture than the one provided by absolute numbers.

Summary

In this article, we covered the birth and concept of cyber crime and the challenges law enforcement, academia, and security professionals face when combating its threatening behavior. We also explored the impact of cyber crime, by the numbers, on various geographical regions, industries, and devices.

Resources for Article:

Further resources on this subject:

- Interactive Crime Map Using Flask [article]
- Web Scraping with Python [article]

Article: Movie Recommendation

Packt
16 Jun 2017
14 min read

In this article by Robert Layton, author of the book Learning Data Mining with Python - Second Edition, we look at movie recommendation with a technique known as Affinity Analysis. The second edition improves upon the first book with updated examples, more in-depth discussion, and exercises for your future development with data analytics.

(For more resources related to this topic, see here.)

Affinity Analysis

Affinity Analysis is the task of determining when objects are used in similar ways, as opposed to asking whether the objects themselves are similar. The data for Affinity Analysis is often described in the form of a transaction. Intuitively, this comes from a transaction at a store—determining when objects are purchased together as a way to recommend products to users that they might purchase. Other use cases for Affinity Analysis include:

- Fraud detection
- Customer segmentation
- Software optimization
- Product recommendations

Affinity Analysis is usually much more exploratory than classification. At the very least, we often simply rank the results and choose the top 5 recommendations (or some other number), rather than expect the algorithm to give us a specific answer.

Algorithms for Affinity Analysis

A brute force solution, testing all possible combinations, is not efficient enough for real-world use. We could expect even a small store to have hundreds of items for sale, while many online stores would have thousands (or millions!). As we add more items, the time it takes to compute all rules increases significantly faster. Specifically, the total possible number of rules is 2^n - 1. Even the drastic increase in computing power couldn't possibly keep up with the increase in the number of items stored online. Therefore, we need algorithms that work smarter, as opposed to computers that work harder.

The Apriori algorithm addresses the exponential problem of creating sets of items that occur frequently within a database, called frequent itemsets. Once these frequent itemsets are discovered, creating association rules is straightforward. The intuition behind Apriori is both simple and clever. First, we ensure that a rule has sufficient support within the dataset; defining a minimum support level is the key parameter for Apriori. For an itemset (A, B) to have a support of at least 30, both A and B must occur at least 30 times in the database. This property extends to larger sets as well: for an itemset (A, B, C, D) to be considered frequent, the set (A, B, C) must also be frequent (as must D). Apriori discovers larger frequent itemsets by building off smaller frequent itemsets. The picture below outlines the full process:

The Movie Recommendation Problem

Product recommendation is big business. Online stores use it to up-sell to customers by recommending other products that they could buy. Making better recommendations leads to better sales. When online shopping is selling to millions of customers every year, there is a lot of potential money to be made by selling more items to these customers.

Grouplens, a research group at the University of Minnesota, has released several datasets that are often used for testing algorithms in this area. They have released several versions of a movie rating dataset, which have different sizes. There is a version with 100,000 reviews, one with 1 million reviews, and one with 10 million reviews.

The datasets are available from http://grouplens.org/datasets/movielens/, and the dataset we are going to use in this article is the MovieLens 100K dataset (with 100,000 reviews). Download this dataset and unzip it in your data folder. Start a new Jupyter Notebook and type the following code:

    import os
    import pandas as pd
    data_folder = os.path.join(os.path.expanduser("~"), "Data", "ml-100k")
    ratings_filename = os.path.join(data_folder, "u.data")

Ensure that ratings_filename points to the u.data file in the unzipped folder.

Loading with pandas

The MovieLens dataset is in good shape; however, there are some changes from the default options of pandas.read_csv that we need to make. When loading the file, we set the delimiter parameter to the tab character, tell pandas not to read the first row as the header (with header=None), and set the column names to given values. Let's look at the following code:

    all_ratings = pd.read_csv(ratings_filename, delimiter="\t", header=None,
                              names=["UserID", "MovieID", "Rating", "Datetime"])

While we won't use it in this article, you can properly parse the date timestamp using the following line. Dates for reviews can be an important feature in recommendation prediction, as movies that are rated together often have more similar rankings than movies rated separately. Accounting for this can improve models significantly.

    all_ratings["Datetime"] = pd.to_datetime(all_ratings['Datetime'], unit='s')

Understanding the Apriori algorithm and its implementation

The goal of this article is to produce rules of the following form: if a person recommends this set of movies, they will also recommend this movie. We will also discuss extensions where a person who recommends a set of movies is likely to recommend another particular movie.

To do this, we first need to determine whether a person recommends a movie. We can do this by creating a new feature, Favorable, which is True if the person gave a favorable review to a movie:

    all_ratings["Favorable"] = all_ratings["Rating"] > 3

We will sample our dataset to form training data. This also helps reduce the size of the dataset that will be searched, making the Apriori algorithm run faster. We obtain all reviews from the first 200 users:

    ratings = all_ratings[all_ratings['UserID'].isin(range(200))]

Next, we can create a dataset of only the favorable reviews in our sample:

    favorable_ratings = ratings[ratings["Favorable"]]

We will be searching the users' favorable reviews for our itemsets. So, the next thing we need is the movies which each user has given a favorable rating. We can compute this by grouping the dataset by the UserID and iterating over the movies in each group:

    favorable_reviews_by_users = dict((k, frozenset(v.values))
                                      for k, v in favorable_ratings.groupby("UserID")["MovieID"])

In the preceding code, we stored the values as a frozenset, allowing us to quickly check whether a movie has been rated by a user. Sets are much faster than lists for this type of operation, and we will use them in later code. Finally, we can create a DataFrame that tells us how frequently each movie has been given a favorable review:

    num_favorable_by_movie = ratings[["MovieID", "Favorable"]].groupby("MovieID").sum()

We can see the top five movies by running the following code:

    num_favorable_by_movie.sort_values(by="Favorable", ascending=False).head()

Implementing the Apriori algorithm

On the first iteration of Apriori, the newly discovered itemsets will have a length of 2, as they will be supersets of the initial itemsets created in the first step.

On the second iteration (after applying the fourth step and going back to step 2), the newly discovered itemsets will have a length of 3. This allows us to quickly identify the newly discovered itemsets, as needed in the second step.

We can store our discovered frequent itemsets in a dictionary, where the key is the length of the itemsets. This allows us to quickly access the itemsets of a given length, and therefore the most recently discovered frequent itemsets, with the help of the following code:

    frequent_itemsets = {}

We also need to define the minimum support needed for an itemset to be considered frequent. This value is chosen based on the dataset, but try different values to see how that affects the result. I recommend only changing it by 10 percent at a time though, as the time the algorithm takes to run will differ significantly! Let's set a minimum support value:

    min_support = 50

To implement the first step of the Apriori algorithm, we create an itemset with each movie individually and test whether the itemset is frequent. We use frozenset, as frozensets allow us to perform faster set-based operations later on, and they can also be used as keys in our counting dictionary (normal sets cannot). Let's look at the following example of frozenset code:

    frequent_itemsets[1] = dict((frozenset((movie_id,)), row["Favorable"])
                                for movie_id, row in num_favorable_by_movie.iterrows()
                                if row["Favorable"] > min_support)

We implement the second and third steps together for efficiency by creating a function that takes the newly discovered frequent itemsets, creates the supersets, and then tests whether they are frequent. First, we set up the function to perform these steps:

    from collections import defaultdict

    def find_frequent_itemsets(favorable_reviews_by_users, k_1_itemsets, min_support):
        counts = defaultdict(int)
        for user, reviews in favorable_reviews_by_users.items():
            for itemset in k_1_itemsets:
                if itemset.issubset(reviews):
                    for other_reviewed_movie in reviews - itemset:
                        current_superset = itemset | frozenset((other_reviewed_movie,))
                        counts[current_superset] += 1
        return dict([(itemset, frequency) for itemset, frequency in counts.items()
                     if frequency >= min_support])

In keeping with our rule of thumb of reading through the data as little as possible, we iterate over the dataset once per call to this function. While this doesn't matter too much in this implementation (our dataset is relatively small for the average computer), single-pass is a good practice to get into for larger applications.

Let's have a look at the core of this function in detail. We iterate through each user and each of the previously discovered itemsets (stored in k_1_itemsets; note that k_1 here means k-1), and then check whether the itemset is a subset of the current user's set of reviews. If it is, this means that the user has reviewed each movie in the itemset. This is done by the itemset.issubset(reviews) line. We can then go through each individual movie that the user has reviewed (that is not already in the itemset), create a superset by combining the itemset with the new movie, and record that we saw this superset in our counting dictionary. These are the candidate frequent itemsets for this value of k. We end our function by testing which of the candidate itemsets have enough support to be considered frequent and returning only those whose support is at least our min_support value.
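
As a quick usage sketch (not from the book, but using the same variables defined above), a single call to this function extends the length-1 frequent itemsets to the length-2 ones; the loop that follows simply repeats this step for increasing values of k:

    # One call: build the frequent itemsets of length 2 from those of length 1.
    frequent_itemsets[2] = find_frequent_itemsets(favorable_reviews_by_users,
                                                  frequent_itemsets[1],
                                                  min_support)
    print("Found {0} frequent itemsets of length 2".format(len(frequent_itemsets[2])))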
This function forms the heart of our Apriori implementation and we now create a loop that iterates over the steps of the larger algorithm, storing the new itemsets as we increase k from 1 to a maximum value. In this loop, k represents the length of the soon-to-be discovered frequent itemsets, allowing us to access the most recently discovered ones by looking in our frequent_itemsets dictionary using the key k - 1. We create the frequent itemsets and store them in our dictionary by their length. Let's look at the code:

import sys  # needed for the sys.stdout.flush() calls below
for k in range(2, 20):
    # Generate candidates of length k, using the frequent itemsets of length k-1
    # Only store the frequent itemsets
    cur_frequent_itemsets = find_frequent_itemsets(favorable_reviews_by_users, frequent_itemsets[k-1], min_support)
    if len(cur_frequent_itemsets) == 0:
        print("Did not find any frequent itemsets of length {}".format(k))
        sys.stdout.flush()
        break
    else:
        print("I found {} frequent itemsets of length {}".format(len(cur_frequent_itemsets), k))
        sys.stdout.flush()
        frequent_itemsets[k] = cur_frequent_itemsets

Extracting association rules

After the Apriori algorithm has completed, we have a list of frequent itemsets. These aren't exactly association rules, but they can easily be converted into these rules. For each itemset, we can generate a number of association rules by setting each movie to be the conclusion and the remaining movies as the premise.

candidate_rules = []
for itemset_length, itemset_counts in frequent_itemsets.items():
    for itemset in itemset_counts.keys():
        for conclusion in itemset:
            premise = itemset - set((conclusion,))
            candidate_rules.append((premise, conclusion))

In these rules, the first part is the list of movies in the premise, while the number after it is the conclusion. In the first case, if a reviewer recommends movie 79, they are also likely to recommend movie 258. The process of computing confidence starts by creating dictionaries to store how many times we see the premise leading to the conclusion (a correct example of the rule) and how many times it doesn't (an incorrect example). We then iterate over all reviews and rules, working out whether the premise of the rule applies and, if it does, whether the conclusion is accurate.

correct_counts = defaultdict(int)
incorrect_counts = defaultdict(int)
for user, reviews in favorable_reviews_by_users.items():
    for candidate_rule in candidate_rules:
        premise, conclusion = candidate_rule
        if premise.issubset(reviews):
            if conclusion in reviews:
                correct_counts[candidate_rule] += 1
            else:
                incorrect_counts[candidate_rule] += 1

We then compute the confidence for each rule by dividing the correct count by the total number of times the rule was seen:

rule_confidence = {candidate_rule: (correct_counts[candidate_rule] / float(correct_counts[candidate_rule] + incorrect_counts[candidate_rule])) for candidate_rule in candidate_rules}

Now we can print the top five rules by sorting this confidence dictionary and printing the results:

from operator import itemgetter
sorted_confidence = sorted(rule_confidence.items(), key=itemgetter(1), reverse=True)
for index in range(5):
    print("Rule #{0}".format(index + 1))
    premise, conclusion = sorted_confidence[index][0]
    print("Rule: If a person recommends {0} they will also recommend {1}".format(premise, conclusion))
    print(" - Confidence: {0:.3f}".format(rule_confidence[(premise, conclusion)]))
    print("")

The resulting printout shows only the movie IDs, which isn't very helpful without the names of the movies also.
The dataset came with a file called u.item, which stores the movie names and their corresponding MovieID (as well as other information, such as the genre). We can load the titles from this file using pandas. Additional information about the file and categories is available in the README file that came with the dataset. The data in the file is in CSV format, but with data separated by the | symbol; it has no header and the encoding is important to set. The column names were found in the README file.

movie_name_filename = os.path.join(data_folder, "u.item")
movie_name_data = pd.read_csv(movie_name_filename, delimiter="|", header=None, encoding="mac-roman")
movie_name_data.columns = ["MovieID", "Title", "Release Date", "Video Release", "IMDB", "<UNK>", "Action", "Adventure", "Animation", "Children's", "Comedy", "Crime", "Documentary", "Drama", "Fantasy", "Film-Noir", "Horror", "Musical", "Mystery", "Romance", "Sci-Fi", "Thriller", "War", "Western"]

Let's also create a helper function for finding the name of a movie by its ID:

def get_movie_name(movie_id):
    title_object = movie_name_data[movie_name_data["MovieID"] == movie_id]["Title"]
    title = title_object.values[0]
    return title

We can now adjust our previous code for printing out the top rules to also include the titles:

for index in range(5):
    print("Rule #{0}".format(index + 1))
    premise, conclusion = sorted_confidence[index][0]
    premise_names = ", ".join(get_movie_name(idx) for idx in premise)
    conclusion_name = get_movie_name(conclusion)
    print("Rule: If a person recommends {0} they will also recommend {1}".format(premise_names, conclusion_name))
    print(" - Confidence: {0:.3f}".format(rule_confidence[(premise, conclusion)]))
    print("")

The results give recommendations for movies, based on previous movies that person liked. Give it a shot and see if it matches your expectations!

Learning Data Mining with Python

In this short section of Learning Data Mining with Python, Revision 2, we performed Affinity Analysis in order to recommend movies based on a large set of reviewers. We did this in two stages. First, we found frequent itemsets in the data using the Apriori algorithm. Then, we created association rules from those itemsets. We performed training on a subset of our data in order to find the association rules, and then tested those rules on the rest of the data, a testing set. We could extend this concept to use cross-fold validation to better evaluate the rules. This would lead to a more robust evaluation of the quality of each rule. We cover topics such as classification, clustering, text analysis, image recognition, TensorFlow and Big Data. Each section comes with a practical real-world example, steps through the code in detail and provides suggestions for you to continue your (machine) learning.

Summary

In this article, we covered a more in-depth discussion and exercises for your future development with data analytics. In this snippet from the book, we looked at movie recommendation with a technique known as Affinity Analysis.

Resources for Article:

Further resources on this subject:
Expanding Your Data Mining Toolbox [article]
Data mining [article]
Big Data Analysis [article]
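The summary above mentions testing the extracted rules on the users that were held out of the training sample; that evaluation code is not part of this excerpt. The following is a hedged, minimal sketch of how such a test-set evaluation could look, assuming the all_ratings, candidate_rules, and defaultdict objects defined earlier; it simply recomputes confidence on the remaining users:

# Illustrative sketch (not from the original excerpt): recompute rule
# confidence on the users excluded from training (UserID >= 200).
test_dataset = all_ratings[~all_ratings['UserID'].isin(range(200))]
test_favorable = test_dataset[test_dataset["Favorable"]]
test_favorable_by_users = dict((k, frozenset(v.values)) for k, v in test_favorable.groupby("UserID")["MovieID"])
test_correct_counts = defaultdict(int)
test_incorrect_counts = defaultdict(int)
for user, reviews in test_favorable_by_users.items():
    for candidate_rule in candidate_rules:
        premise, conclusion = candidate_rule
        if premise.issubset(reviews):
            if conclusion in reviews:
                test_correct_counts[candidate_rule] += 1
            else:
                test_incorrect_counts[candidate_rule] += 1
# Confidence on the test set; rules never triggered for these users are skipped.
test_confidence = {rule: test_correct_counts[rule] / float(test_correct_counts[rule] + test_incorrect_counts[rule]) for rule in candidate_rules if test_correct_counts[rule] + test_incorrect_counts[rule] > 0}

Rules whose test confidence stays close to their training confidence are the ones most likely to generalize to new users.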

Understanding the Puppet Resources

Packt
16 Jun 2017
15 min read
A little learning is a dangerous thing, but a lot of ignorance is just as bad.
—Bob Edwards

In this article by John Arundel, the author of Puppet 4.10 Beginner's Guide - Second Edition, we'll go into details of packages, files, and services to see how to exploit their power to the full. Along the way, we'll talk about the following topics:

Managing files, directories, and trees
Ownership and permissions
Symbolic links
Installing and uninstalling packages
Specific and latest versions of packages
Installing Ruby gems
Services: hasstatus and pattern
Services: hasrestart, restart, stop, and start

(For more resources related to this topic, see here.)

Files

Puppet can manage files on the server using the file resource, and the following example sets the contents of a file to a particular string using the content attribute (file_hello.pp):

file { '/tmp/hello.txt':
  content => "hello, world\n",
}

Managing whole files

While it's useful to be able to set the contents of a file to a short text string, most files we're likely to want to manage will be too large to include directly in our Puppet manifests. Ideally, we would put a copy of the file in the Puppet repo, and have Puppet simply copy it to the desired place in the filesystem. The source attribute (file_source.pp) does exactly that:

file { '/etc/motd':
  source => '/vagrant/examples/files/motd.txt',
}

To try this example with your Vagrant box, run the following commands:

sudo puppet apply /vagrant/examples/file_source.pp
cat /etc/motd
The best software in the world only sucks. The worst software is significantly worse than that.
-Luke Kanies

To run such examples, just apply them using sudo puppet apply as shown in the preceding example. Why do we have to run sudo puppet apply instead of just puppet apply? Puppet has the permissions of the user who runs it, so if Puppet needs to modify a file owned by root, it must be run with root's permissions (which is what sudo does). You will usually run Puppet as root because it needs those permissions to do things such as installing packages and modifying config files owned by root.

The value of the source attribute can be a path to a file on the server, as here, or an HTTP URL, as shown in the following example (file_http.pp):

file { '/tmp/README.md':
  source => 'https://raw.githubusercontent.com/puppetlabs/puppet/master/README.md',
}

Although this is a handy feature, bear in mind that every time you add an external dependency like this to your Puppet manifest, you're adding a potential point of failure. Wherever you can, use a local copy of such a file instead of having Puppet fetch it remotely every time. This particularly applies to software which needs to be built from a tarball downloaded from a website. If possible, download the tarball and serve it from a local web server or file server. If this isn't practical, using a caching proxy server can help save time and bandwidth when you're building a large number of machines.

Ownership

On Unix-like systems, files are associated with an owner, a group, and a set of permissions to read, write, or execute the file.
Since we normally run Puppet with the permissions of the root user (via sudo), the files Puppet manages will be owned by that user:

ls -l /etc/motd
-rw-r--r-- 1 root root 109 Aug 31 04:03 /etc/motd

Often, this is just fine, but if we need the file to belong to another user (for example, if that user needs to be able to write to the file), we can express this by setting the owner attribute (file_owner.pp):

file { '/etc/owned_by_vagrant':
  ensure => present,
  owner  => 'vagrant',
}

Run the following command:

ls -l /etc/owned_by_vagrant
-rw-r--r-- 1 vagrant root 0 Aug 31 04:48 /etc/owned_by_vagrant

You can see that Puppet has created the file and its owner attribute has been set to vagrant. You can also set the group ownership of the file using the group attribute (file_group.pp):

file { '/etc/owned_by_vagrant':
  ensure => present,
  owner  => 'vagrant',
  group  => 'vagrant',
}

Run the following command:

ls -l /etc/owned_by_vagrant
-rw-r--r-- 1 vagrant vagrant 0 Aug 31 04:48 /etc/owned_by_vagrant

This time, we didn't specify either a content or source attribute for the file, but simply ensure => present. In this case, Puppet will create a file of zero size (useful, for example, if you want to make sure the file exists and is writeable, but doesn't need to have any contents yet).

Permissions

Files on Unix-like systems have an associated mode, which determines access permissions for the file. It governs read, write, and execute permissions for the file's owner, any user in the file's group, and other users. Puppet supports setting permissions on files using the mode attribute. This takes an octal value, with each digit representing the permissions for owner, group, and other, in that order. In the following example, we use the mode attribute to set a mode of 0644 (read and write for owner, read-only for group, read-only for other) on a file (file_mode.pp):

file { '/etc/owned_by_vagrant':
  ensure => present,
  owner  => 'vagrant',
  mode   => '0644',
}

This will be quite familiar to experienced system administrators, as the octal values for file permissions are exactly the same as those understood by the Unix chmod command. For more information, run the man chmod command.

Directories

Creating or managing permissions on a directory is a common task, and Puppet uses the file resource to do this too. If the value of the ensure attribute is directory, the file will be a directory (file_directory.pp):

file { '/etc/config_dir':
  ensure => directory,
}

As with regular files, you can use the owner, group, and mode attributes to control access to directories.

Trees of files

Puppet can copy a single file to the server, but what about a whole directory of files, possibly including subdirectories (known as a file tree)? The recurse attribute will take care of this (file_tree.pp):

file { '/etc/config_dir':
  source  => '/vagrant/examples/files/config_dir',
  recurse => true,
}

Run the following command:

ls /etc/config_dir/
1 2 3

When the recurse attribute is true, Puppet will copy all the files and directories (and their subdirectories) in the source directory (/vagrant/examples/files/config_dir in this example) to the target directory (/etc/config_dir). If the target directory already exists and has files in it, Puppet will not interfere with them, but you can change this behavior using the purge attribute. If this is true, Puppet will delete any files and directories in the target directory which are not present in the source directory. Use this attribute with care!
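The purge behavior is easiest to see in a concrete manifest. The following is an illustrative sketch rather than one of the book's numbered example files; the source path simply follows the same /vagrant/examples/files layout used above and is only an assumption for the example:

# Sketch: mirror a directory tree exactly. With both recurse and purge
# set to true, any file in /etc/config_dir that is not present in the
# source directory will be removed by Puppet on the next run.
file { '/etc/config_dir':
  ensure  => directory,
  source  => '/vagrant/examples/files/config_dir',
  recurse => true,
  purge   => true,
}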
Symbolic links

Another common requirement for managing files is to create or modify a symbolic link (known as a symlink for short). You can have Puppet do this by setting ensure => link on the file resource, and specifying the target attribute (file_symlink.pp):

file { '/etc/this_is_a_link':
  ensure => link,
  target => '/etc/motd',
}

Run the following command:

ls -l /etc/this_is_a_link
lrwxrwxrwx 1 root root 9 Aug 31 05:05 /etc/this_is_a_link -> /etc/motd

Packages

To install a package, use the package resource, and this is all you need to do with most packages. However, the package resource has a few extra features which may be useful.

Uninstalling packages

The ensure attribute normally takes the installed value in order to install a package, but if you specify absent instead, Puppet will remove the package if it happens to be installed. Otherwise, it will take no action. The following example will remove the apparmor package if it's installed (package_remove.pp):

package { 'apparmor':
  ensure => absent,
}

Installing specific versions

If there are multiple versions of a package available to the system's package manager, specifying ensure => installed will cause Puppet to install the default version (usually the latest). But if you need a specific version, you can specify that version string as the value of ensure, and Puppet will install that version (package_version.pp):

package { 'openssl':
  ensure => '1.0.2g-1ubuntu4.2',
}

It's a good idea to specify an exact version whenever you manage packages with Puppet, so that all servers will get the same version of a given package. Otherwise, if you use ensure => installed, they will just get whatever version was current at the time they were built, leading to a situation where different machines have different package versions. When a newer version of the package is released, and you decide it's time to upgrade to it, you can update the version string specified in the Puppet manifest and Puppet will upgrade the package everywhere.

Installing the latest version

On the other hand, if you specify ensure => latest for a package, Puppet will make sure that the latest available version is installed every time it runs. When a new version of the package becomes available, it will be installed automatically on the next Puppet run. This is not generally what you want when using a package repository that's not under your control (for example, the main Ubuntu repository). It means that packages will be upgraded at unexpected times, which may break your application (or at least result in unplanned downtime). A better strategy is to tell Puppet to install a specific version that you know works, and test upgrades in a controlled environment before rolling them out to production. If you maintain your own package repository, and control the release of new packages to it, ensure => latest can be a useful feature: Puppet will update a package as soon as you push a new version to the repo. If you are relying on upstream repositories, such as the Ubuntu repositories, it's better to tell Puppet to install a specific version and upgrade that as necessary.

Installing Ruby gems

Although the package resource is most often used to install packages using the normal system package manager (in the case of Ubuntu, that's APT), it can install other kinds of packages as well. Library packages for the Ruby programming language are known as gems.
Puppet can install Ruby gems for you using the provider => gem attribute (package_gem.pp):

package { 'ruby':
  ensure => installed,
}

package { 'bundler':
  ensure   => installed,
  provider => gem,
}

In the preceding code, bundler is a Ruby gem, and therefore we have to specify provider => gem for this package so that Puppet doesn't think it's a standard system package and try to install it via APT. Since the gem provider is not available unless Ruby is installed, we install the ruby package first, and then the bundler gem.

Installing gems in Puppet's context

Puppet itself is written at least partly in Ruby, and makes use of several Ruby gems. To avoid any conflicts with the version of Ruby and gems, which the server might need for other applications, Puppet packages its own version of Ruby and associated gems under the /opt/puppetlabs directory. This means you can install (or remove) whichever version of Ruby you like, and Puppet will not be affected. However, if you need to install a gem to extend Puppet's capabilities in some way, then doing it with a package resource and provider => gem won't work. That is, the gem will be installed, but only in the system Ruby context, and it won't be visible to Puppet. Fortunately, the puppet_gem provider is available for exactly this purpose. When you use this provider, the gem will be installed in Puppet's context (and, naturally, won't be visible in the system context). The following example demonstrates how to use this provider (package_puppet_gem.pp):

package { 'hiera-eyaml':
  ensure   => installed,
  provider => puppet_gem,
}

To see the gems installed in Puppet's context, use Puppet's own version of the gem command, with the following path:

/opt/puppetlabs/puppet/bin/gem list

Services

Although services are implemented in a number of varied and complicated ways at the operating system level, Puppet does a good job of abstracting away most of this with the service resource and exposing just the two attributes of services which really matter: whether they're running (ensure) and whether they start at boot time (enable). However, you'll occasionally encounter services that don't play well with Puppet, for a variety of reasons. Sometimes, Puppet is unable to detect that the service is already running, and keeps trying to start it. At other times, Puppet may not be able to properly restart the service when a dependent resource changes. There are a few useful attributes for service resources that can help resolve these problems.

The hasstatus attribute

When a service resource has the ensure => running attribute, Puppet needs to be able to check whether the service is, in fact, running. The way it does this depends on the underlying operating system, but on Ubuntu 16+, for example, it runs systemctl is-active SERVICE. If the service is packaged to work with systemd, that should be just fine, but in many cases, particularly with older software, it may not respond properly. If you find that Puppet keeps attempting to start the service on every Puppet run, even though the service is running, it may be that Puppet's default service status detection isn't working. In this case, you can specify the hasstatus => false attribute for the service (service_hasstatus.pp):

service { 'ntp':
  ensure    => running,
  enable    => true,
  hasstatus => false,
}

When hasstatus is false, Puppet knows not to try to check the service status using the default system service management command, and instead will look in the process table for a running process that matches the name of the service.
If it finds one, it will infer that the service is running and take no further action.

The pattern attribute

Sometimes when using hasstatus => false, the service name as defined in Puppet doesn't actually appear in the process table because the command that provides the service has a different name. If this is the case, you can tell Puppet exactly what to look for using the pattern attribute (service_pattern.pp):

service { 'ntp':
  ensure    => running,
  enable    => true,
  hasstatus => false,
  pattern   => 'ntpd',
}

If hasstatus is false and pattern is specified, Puppet will search for the value of pattern in the process table to determine whether or not the service is running. To find the pattern you need, you can use the ps command to see the list of running processes:

ps ax

The hasrestart and restart attributes

When a service is notified (for example, if a file resource uses the notify attribute to tell the service its config file has changed), Puppet's default behavior is to stop the service, then start it again. This usually works, but many services implement a restart command in their management scripts. If this is available, it's usually a good idea to use it, as it may be faster or safer than stopping and starting the service. Some services take a while to shut down properly when stopped, for example, and Puppet may not wait long enough before trying to restart them, so that you end up with the service not running at all. If you specify hasrestart => true for a service, then Puppet will try to send a restart command to it, using whatever service management command is appropriate (systemctl, for example). The following example shows the use of hasrestart (service_hasrestart.pp):

service { 'ntp':
  ensure     => running,
  enable     => true,
  hasrestart => true,
}

To further complicate things, the default system service restart command may not work, or you may need to take certain special actions when the service is restarted (disabling monitoring notifications, for example). You can specify any restart command you like for the service using the restart attribute (service_custom_restart.pp):

service { 'ntp':
  ensure  => running,
  enable  => true,
  restart => '/bin/echo Restarting >>/tmp/debug.log && systemctl restart ntp',
}

In this example, the restart command writes a message to a log file before restarting the service in the usual way, but it could, of course, do anything you need it to. In the extremely rare event that the service cannot be stopped or started using the default service management command, Puppet also provides the stop and start attributes so that you can specify custom commands to stop and start the service, in just the same way as with the restart attribute. If you need to use either of these, though, it's probably safe to say that you're having a bad day.

Summary

In this article, we explored Puppet's file resource in detail, covering file sources, ownership, permissions, directories, symbolic links, and file trees. You learned how to manage packages by installing specific versions, or the latest version, and how to uninstall packages. We also covered Ruby gems, both in the system context and Puppet's internal context. We looked at service resources, including the hasstatus, pattern, hasrestart, restart, stop, and start attributes.

Resources for Article:

Further resources on this subject:
My First Puppet Module [article]
Puppet Language and Style [article]
External Tools and the Puppet Ecosystem [article]

Streaming and the Actor Model – Akka Streams!

Packt
16 Jun 2017
9 min read
In this article by Piyush Mishra, author of the Akka Cookbook, we will learn about the streaming and the actor model with Akka streams. (For more resources related to this topic, see here.) Akka is a popular toolkit designed to ease the pain of dealing with concurrency and distributed systems. It provides easy APIs to create reactive, fault-tolerant, scalable, and concurrent applications, thanks to the actor model. The actor model was introduced by Carl Hewitt in the 70s, and it has been successfully implemented by different programming languages, frameworks, or toolkits, such as Erlang or Akka. The concepts around the actor model are simple. All actors are created inside an actor system. Every actor has a unique address within the actor system, a mailbox, a state (in the case of being a stateful actor) and a behavior. The only way of interacting with an actor is by sending messages to it using its address. Messages will be stored in the mailbox until the actor is ready to process them. Once it is ready, the actor will pick one message at a time and will execute its behavior against the message. At this point, the actor might update its state, create new actors, or send messages to other already-created actors. Akka provides all this and many other features, thanks to the vast ecosystem around the core component, such as Akka Cluster, Akka Cluster Sharding, Akka Persistence, Akka HTTP, or Akka Streams. We will dig a bit more into the later one. Streaming framework and toolkits are gaining momentum lately. This is motivated by the massive number of connected devices that are generating new data constantly that needs to be consumed, processed, analyzed, and stored. This is basically the idea of Internet of Things (IoT) or the newer term Internet of Everything. Some time ago, the Akka team decided that they could build a Streaming library leveraging all the power of Akka and the actor model: Akka Streams. Akka Streams uses Akka actors as its foundation to provide a set of easy APIs to create back-pressured streams. Each stream consists of one or more sources, zero or more flows, and one or more sinks. All these different modules are also known as stages in the Akka Streams terminology. The best way to understand how a stream works is to think about it as a graph. Each stage (source, flow, or sink) has zero or more input ports and zero or more output ports. For instance, a source has zero input ports and one output port. A flow has one input port and one output port. And finally, a sink has one input port and zero output ports. To have a runnable stream, we need to ensure that all ports of all our stages are connected. Only then, we can run our stream to process some elements: Akka Streams provides a rich set of predefined stages to cover the most common streaming functions. However, if a use case requires a new custom stage, it is also possible to create it from scratch or extend an existing one. The full list of predefined stages can be found at http://doc.akka.io/docs/akka/current/scala/stream/stages-overview.html. Now that we know about the different components Akka Streams provides, it is a good moment to introduce the actor materializer. As we mentioned earlier, Akka is the foundation of Akka Streams. This means the code you define in the high-level API is eventually run inside an actor. The actor materializer is the entity responsible to create these low-level actors. By default, all processing stages get created within the same actor. 
This means only one element at a time can be processed by your stream. It is also possible to indicate that you want to have a different actor per stage, therefore having the possibility to process multiple messages at the same time. You can indicate this to the materializer by calling the async method in the proper stage. There are also asynchronous predefined stages. For performance reasons, Akka Streams batches messages when pushing them to the next stage to reduce overhead. After this quick introduction, let's start putting together some code to create and run a stream. We will use the Scala build tool (famously known as sbt) to retrieve the Akka dependencies and run our code. To begin with, we need a build.sbt file with the following content: name := "akka-async-streams" version := "1.0" scalaVersion := "2.11.7" libraryDependencies += "com.typesafe.akka" % "akka-actor_2.11" % "2.4.17" libraryDependencies += "com.typesafe.akka" % "akka-stream_2.11" % "2.4.17" Once we have the file ready, we need to run sbt update to let sbt fetch the required dependencies. Our first stream will push a list of words, capitalize each of them, and log the resulting values. This can easily be achieved by doing the following: implicit val actorSystem = ActorSystem() implicit val actorMaterializer = ActorMaterializer() val stream = Source(List("hello","from","akka","streams!")) .map(_.capitalize) .to(Sink.foreach(actorSystem.log.info)) stream.run() In this small code snippet, we can see how our stream has one source with a list of strings, one flow that is capitalizing each stream, and finally one sink logging the result. If we run our code, we should see the following in the output: [INFO] [default-akka.actor.default-dispatcher-3] [akka.actor.ActorSystemImpl(default)] Hello [INFO] [default-akka.actor.default-dispatcher-3] [akka.actor.ActorSystemImpl(default)] From [INFO] [default-akka.actor.default-dispatcher-3] [akka.actor.ActorSystemImpl(default)] Akka [INFO] [default-akka.actor.default-dispatcher-3] [akka.actor.ActorSystemImpl(default)] Streams! The execution of this stream is happening synchronously and ordered. In our next example, we will do the same stream; however, we can see how all stages are modular: implicit val actorSystem = ActorSystem() implicit val actorMaterializer = ActorMaterializer() val source = Source(List("hello","from","akka","streams!")) val sink = Sink.foreach(actorSystem.log.info) val capitalizer = Flow[String].map(_.capitalize) val stream = source.via(capitalizer).to(sink) stream.run() In this code snippet, we can see how stages can be treated as immutable modules. We see that we can use the via helper method to provide a flow stage in a stream. This stream is still running synchronously. To run it asynchronously, we can take advantage of the mapAsync flow. For this, let's create a small actor that will do the capitalization for us: class Capitalizer extends Actor with ActorLogging { def receive = { case str : String => log.info(s"Capitalizing $str") sender ! str.capitalize } } Once we have our actor defined, we can set up our asynchronous stream. For this, we will create a round robin pool of capitalizer actors. Then, we will use the ask pattern to send a message to an actor and wait for a response. This happens using the operator? 
The stream definition will be something like this: implicit val actorSystem = ActorSystem() implicit val actorMaterializer = ActorMaterializer() implicit val askTimeout = Timeout(5 seconds) val capitalizer = actorSystem.actorOf(Props[Capitalizer].withRouter(RoundRobinPool(10))) val source = Source(List("hello","from","akka","streams!")) val sink = Sink.foreach(actorSystem.log.info) val flow = Flow[String].mapAsync(parallelism = 5)(elem => (capitalizer ? elem).mapTo[String]) val stream = source.via(flow).to(sink) stream.run() If we execute this small piece of code, we can see something similar: [INFO] [default-akka.actor.default-dispatcher-16] [akka://default/user/$a/$a] Capitalizing hello [INFO] [default-akka.actor.default-dispatcher-15] [akka://default/user/$a/$b] Capitalizing from [INFO] [default-akka.actor.default-dispatcher-6] [akka://default/user/$a/$c] Capitalizing akka [INFO] [default-akka.actor.default-dispatcher-14] [akka://default/user/$a/$d] Capitalizing streams! [INFO] [default-akka.actor.default-dispatcher-14] [akka.actor.ActorSystemImpl(default)] Hello [INFO] [default-akka.actor.default-dispatcher-14] [akka.actor.ActorSystemImpl(default)] From [INFO] [default-akka.actor.default-dispatcher-14] [akka.actor.ActorSystemImpl(default)] Akka [INFO] [default-akka.actor.default-dispatcher-14] [akka.actor.ActorSystemImpl(default)] Streams! We can see how each word is being processed by a different capitalizer actor ($a/$b/$c/$d) and by different threads (default-dispatcher 16,15,6 and 14). Even if these executions are happening asynchronously in the pool of actors, the stream is still maintaining the order of the elements. If we do not need to maintain order and we are looking for a faster approach, where an element can be pushed to the next stage in the stream as soon as it is ready, we can use mapAsyncUnordered: implicit val actorSystem = ActorSystem() implicit val actorMaterializer = ActorMaterializer() implicit val askTimeout = Timeout(5 seconds) val capitalizer = actorSystem.actorOf(Props[Capitalizer].withRouter(RoundRobinPool(10))) val source = Source(List("hello","from","akka","streams!")) val sink = Sink.foreach(actorSystem.log.info) val flow = Flow[String].mapAsyncUnordered(parallelism = 5)(elem => (capitalizer ? elem).mapTo[String]) val stream = source.via(flow).to(sink) stream.run() When running this code, we can see that the order is not preserved and the capitalized words arrive to the sink differently every time we execute our code. Consider the following example: [INFO] [default-akka.actor.default-dispatcher-10] [akka://default/user/$a/$b] Capitalizing from [INFO] [default-akka.actor.default-dispatcher-4] [akka://default/user/$a/$d] Capitalizing streams! [INFO] [default-akka.actor.default-dispatcher-13] [akka://default/user/$a/$c] Capitalizing akka [INFO] [default-akka.actor.default-dispatcher-14] [akka://default/user/$a/$a] Capitalizing hello [INFO] [default-akka.actor.default-dispatcher-12] [akka.actor.ActorSystemImpl(default)] Akka [INFO] [default-akka.actor.default-dispatcher-12] [akka.actor.ActorSystemImpl(default)] From [INFO] [default-akka.actor.default-dispatcher-12] [akka.actor.ActorSystemImpl(default)] Hello [INFO] [default-akka.actor.default-dispatcher-12] [akka.actor.ActorSystemImpl(default)] Streams! Akka Streams also provides a graph DSL to define your stream. 
In this DSL, it is possible to connect stages just using the ~> operator: implicit val actorSystem = ActorSystem() implicit val actorMaterializer = ActorMaterializer() implicit val askTimeout = Timeout(5 seconds) val capitalizer = actorSystem.actorOf(Props[Capitalizer].withRouter(RoundRobinPool(10))) val graph = RunnableGraph.fromGraph(GraphDSL.create() { implicit b => import GraphDSL.Implicits._ val source = Source(List("hello","from","akka","streams!")) val sink = Sink.foreach(actorSystem.log.info) val flow = Flow[String].mapAsyncUnordered(parallelism = 5)(elem => (capitalizer ? elem).mapTo[String]) source ~> flow ~> sink ClosedShape }) graph.run() These code snippets show only a few features of the vast available options inside the Akka Streams framework. Actors can be seamlessly integrated with streams. This brings a whole new set of possibilities to process things in a stream fashion. We have seen how we can preserve or avoid order of elements, either synchronously or asynchronously. In addition, we saw how to use the graph DSL to define our stream. Summary In this article, we covered the concept of the actor model and the core components of Akka. We also described the stages in Akka Streams and created an example code for stream. If you want to learn more about Akka, Akka Streams, and all other modules around them, you can find useful and handy recipes like these ones in the Akka Cookbook at https://www.packtpub.com/application-development/akka-cookbook.  Resources for Article: Further resources on this subject: Creating First Akka Application [article] Working with Entities in Google Web Toolkit 2 [article] Skinner's Toolkit for Plone 3 Theming (Part 1) [article]

Exploring Functions

Packt
16 Jun 2017
12 min read
In this article by Marius Bancila, author of the book Modern C++ Programming Cookbook covers the following recipes: Defaulted and deleted functions Using lambdas with standard algorithms (For more resources related to this topic, see here.) Defaulted and deleted functions In C++, classes have special members (constructors, destructor and operators) that may be either implemented by default by the compiler or supplied by the developer. However, the rules for what can be default implemented are a bit complicated and can lead to problems. On the other hand, developers sometimes want to prevent objects to be copied, moved or constructed in a particular way. That is possible by implementing different tricks using these special members. The C++11 standard has simplified many of these by allowing functions to be deleted or defaulted in the manner we will see below. Getting started For this recipe, you need to know what special member functions are, and what copyable and moveable means. How to do it... Use the following syntax to specify how functions should be handled: To default a function use =default instead of the function body. Only special class member functions that have defaults can be defaulted. struct foo { foo() = default; }; To delete a function use =delete instead of the function body. Any function, including non-member functions, can be deleted. struct foo { foo(foo const &) = delete; }; void func(int) = delete; Use defaulted and deleted functions to achieve various design goals such as the following examples: To implement a class that is not copyable, and implicitly not movable, declare the copy operations as deleted. class foo_not_copiable { public: foo_not_copiable() = default; foo_not_copiable(foo_not_copiable const &) = delete; foo_not_copiable& operator=(foo_not_copiable const&) = delete; }; To implement a class that is not copyable, but it is movable, declare the copy operations as deleted and explicitly implement the move operations (and provide any additional constructors that are needed). class data_wrapper { Data* data; public: data_wrapper(Data* d = nullptr) : data(d) {} ~data_wrapper() { delete data; } data_wrapper(data_wrapper const&) = delete; data_wrapper& operator=(data_wrapper const &) = delete; data_wrapper(data_wrapper&& o) :data(std::move(o.data)) { o.data = nullptr; } data_wrapper& operator=(data_wrapper&& o) { if (this != &o) { delete data; data = std::move(o.data); o.data = nullptr; } return *this; } }; To ensure a function is called only with objects of a specific type, and perhaps prevent type promotion, provide deleted overloads for the function (the example below with free functions can also be applied to any class member functions). template <typename T> void run(T val) = delete; void run(long val) {} // can only be called with long integers How it works... A class has several special members that can be implemented by default by the compiler. These are the default constructor, copy constructor, move constructor, copy assignment, move assignment and destructor. If you don't implement them, then the compiler does it, so that instances of a class can be created, moved, copied and destructed. However, if you explicitly provide one or more, then the compiler will not generate the others according to the following rules: If a user defined constructor exists, the default constructor is not generated by default. If a user defined virtual destructor exists, the default constructor is not generated by default. 
If a user-defined move constructor or move assignment operator exist, then the copy constructor and copy assignment operator are not generated by default. If a user defined copy constructor, move constructor, copy assignment operator, move assignment operator or destructor exist, then the move constructor and move assignment operator are not generated by default. If a user defined copy constructor or destructor exists, then the copy assignment operator is generated by default. If a user-defined copy assignment operator or destructor exists, then the copy constructor is generated by default. Note that the last two are deprecated rules and may no longer be supported by your compiler. Sometimes developers need to provide empty implementations of these special members or hide them in order to prevent the instances of the class to be constructed in a specific manner. A typical example is a class that is not supposed to be copyable. The classical pattern for this is to provide a default constructor and hide the copy constructor and copy assignment operators. While this works, the explicitly defined default constructor makes the class to no longer be considered trivial and therefore a POD type (that can be constructed with reinterpret_cast). The modern alternative to this is using deleted function as shown in the previous section. When the compiler encounters the =default in the definition of a function it will provide the default implementation. The rules for special member functions mentioned earlier still apply. Functions can be declared =default outside the body of a class if and only if they are inlined. class foo     {      public:      foo() = default;      inline foo& operator=(foo const &);     };     inline foo& foo::operator=(foo const &) = default;     When the compiler encounters the =delete in the definition of a function it will prevent the calling of the function. However, the function is still considered during overload resolution and only if the deleted function is the best match the compiler generates an error. For example, giving the previously defined overloads for function run() only calls with long integers are possible. Calls with arguments of any other type, including int, for which an automatic type promotion to long exists, would determine a deleted overload to be considered the best match and therefore the compiler will generate an error: run(42); // error, matches a deleted overload     run(42L); // OK, long integer arguments are allowed     Note that previously declared functions cannot be deleted, as the =delete definition must be the first declaration in a translation unit: void forward_declared_function();     // ...     void forward_declared_function() = delete; // error     The rule of thumb (also known as The Rule of Five) for class special member functions is: if you explicitly define any of copy constructor, move constructor, copy assignment, move assignment or destructor then you must either explicitly define or default all of them. Using lambdas with standard algorithms One of the most important modern features of C++ is lambda expressions, also referred as lambda functions or simply lambdas. Lambda expressions enable us to define anonymous function objects that can capture variables in the scope and be invoked or passed as arguments to functions. Lambdas are useful for many purposes and in this recipe, we will see how to use them with standard algorithms. 
Getting ready In this recipe, we discuss standard algorithms that take an argument that is a function or predicate that is applied to the elements it iterates through. You need to know what unary and binary functions are, and what are predicates and comparison functions. You also need to be familiar with function objects because lambda expressions are syntactic sugar for function objects. How to do it... Prefer to use lambda expressions to pass callbacks to standard algorithms instead of functions or function objects: Define anonymous lambda expressions in the place of the call if you only need to use the lambda in a single place. auto numbers = std::vector<int>{ 0, 2, -3, 5, -1, 6, 8, -4, 9 }; auto positives = std::count_if( std::begin(numbers), std::end(numbers), [](int const n) {return n > 0; }); Define a named lambda, that is, assigned to a variable (usually with the auto specifier for the type), if you need to call the lambda in multiple places. auto ispositive = [](int const n) {return n > 0; }; auto positives = std::count_if( std::begin(numbers), std::end(numbers), ispositive); Use generic lambda expressions if you need lambdas that only differ in their argument types (available since C++14). auto positives = std::count_if( std::begin(numbers), std::end(numbers), [](auto const n) {return n > 0; }); How it works... The non-generic lambda expression shown above takes a constant integer and returns true if it is greater than 0, or false otherwise. The compiler defines an unnamed function object with the call operator having the signature of the lambda expression. struct __lambda_name__     {     bool operator()(int const n) const { return n > 0; }     };     The way the unnamed function object is defined by the compiler depends on the way we define the lambda expression, that can capture variables, use the mutable specifier or exception specifications or may have a trailing return type. The __lambda_name__ function object shown earlier is actually a simplification of what the compiler generates because it also defines a default copy and move constructor, a default destructor, and a deleted assignment operator. It must be well understood that the lambda expression is actually a class. In order to call it, the compiler needs to instantiate an object of the class. The object instantiated from a lambda expression is called a lambda closure. In the next example, we want to count the number of elements in a range that are greater or equal to 5 and less or equal than 10. The lambda expression, in this case, will look like this: auto numbers = std::vector<int>{ 0, 2, -3, 5, -1, 6, 8, -4, 9 };     auto start{ 5 };     auto end{ 10 };     auto inrange = std::count_if(      std::begin(numbers), std::end(numbers),      [start,end](int const n) {return start <= n && n <= end;});     This lambda captures two variables, start and end, by copy (that is, value). The result unnamed function object created by the compiler looks very much like the one we defined above. 
With the default and deleted special members mentioned earlier, the class looks like this: class __lambda_name_2__     {    int start_; int end_; public: explicit __lambda_name_2__(int const start, int const end) : start_(start), end_(end) {}    __lambda_name_2__(const __lambda_name_2__&) = default;    __lambda_name_2__(__lambda_name_2__&&) = default;    __lambda_name_2__& operator=(const __lambda_name_2__&)     = delete;    ~__lambda_name_2__() = default;      bool operator() (int const n) const    { return start_ <= n && n <= end_; }     };     The lambda expression can capture variables by copy (or value) or by reference, and different combinations of the two are possible. However, it is not possible to capture a variable multiple times and it is only possible to have & or = at the beginning of the capture list. A lambda can only capture variables from an enclosing function scope. It cannot capture variables with static storage duration (that means variables declared in namespace scope or with the static or external specifier). The following table shows various combinations for the lambda captures semantics. Lambda Description [](){} Does not capture anything [&](){} Captures everything by reference [=](){} Captures everything by copy [&x](){} Capture only x by reference [x](){} Capture only x by copy [&x...](){} Capture pack extension x by reference [x...](){} Capture pack extension x by copy [&, x](){} Captures everything by reference except for x that is captured by copy [=, &x](){} Captures everything by copy except for x that is captured by reference [&, this](){} Captures everything by reference except for pointer this that is captured by copy (this is always captured by copy) [x, x](){} Error, x is captured twice [&, &x](){} Error, everything is captured by reference, cannot specify again to capture x by reference [=, =x](){} Error, everything is captured by copy, cannot specify again to capture x by copy [&this](){} Error, pointer this is always captured by copy [&, =](){} Error, cannot capture everything both by copy and by reference The general form of a lambda expression, as of C++17, looks like this:  [capture-list](params) mutable constexpr exception attr -> ret { body }    All parts shown in this syntax are actually optional except for the capture list, that can, however, be empty, and the body, that can also be empty. The parameter list can actually be omitted if no parameters are needed. The return type does not need to be specified as the compiler can infer it from the type of the returned expression. The mutable specifier (that tells the compiler the lambda can actually modify variables captured by copy), the constexpr specifier (that tells the compiler to generate a constexpr call operator) and the exception specifiers and attributes are all optional. The simplest possible lambda expression is []{}, though it is often written as [](){}. There's more... There are cases when lambda expressions only differ in the type of their arguments. In this case, the lambdas can be written in a generic way, just like templates, but using the auto specifier for the type parameters (no template syntax is involved). Summary Functions are a fundamental concept in programming; regardless the topic we discussed we end up writing functions. This article contains recipes related to functions. This article, however, covers modern language features related to functions and callable objects. 
Resources for Article: Further resources on this subject: Understanding the Dependencies of a C++ Application [article] Boost.Asio C++ Network Programming [article] Application Development in Visual C++ - The Tetris Application [article]

Building a Smart Contract Deployment Platform

Packt
15 Jun 2017
21 min read
In this article by Narayan Prusty, author of the book Building Blockchain Projects, we learn how to compile smart contracts using solcjs and deploy them using web3.js and EthereumJS. Some clients may need to compile and deploy contracts at runtime. In our proof-of-ownership DApp, we deployed the smart contract manually and hardcoded the contract address in the client-side code. But some clients may need to deploy smart contracts at runtime. For example, if a client lets schools record students' attendance in the blockchain, then it will need to deploy a smart contract every time a new school is registered so that each school has complete control over their smart contract. In this article, we'll cover the following topics:

Calculating the nonce of a transaction
Using the transaction pool JSON-RPC API
Generating data of a transaction for contract creation and method invocation
Estimating the gas required by a transaction
Finding the current spendable balance of an account
Compiling smart contracts using solcjs
Developing a platform to write, compile, and deploy smart contracts

(For more resources related to this topic, see here.)

Calculating a transaction's nonce

For the accounts maintained by geth, we don't need to worry about the transaction nonce because geth can add the correct nonce to the transactions and sign them. While using accounts that aren't managed by geth, we need to calculate the nonce ourselves. To calculate the nonce ourselves, we can use the getTransactionCount method provided by geth. The first argument should be the address whose transaction count we need and the second argument is the block until which we need the transaction count. We can provide the "pending" string as the block to include transactions from the block that's currently being mined. To mine a block, geth takes the pending transactions from the transaction pool and starts mining the new block. Until the block is mined, the pending transactions remain in the transaction pool, and once it is mined, the mined transactions are removed from the transaction pool. The new incoming transactions received while a block is being mined are put in the transaction pool and are mined in the next block. So when we provide "pending" as the second argument while calling getTransactionCount, it doesn't look inside the transaction pool; instead, it just considers the transactions in the pending block. So if you are trying to send transactions from accounts not managed by geth, then count the total number of transactions of the account in the blockchain and add to it the number of transactions pending in the transaction pool. If you rely only on the pending block, you will fail to get the correct nonce when transactions are sent to geth within a few seconds of each other, because it takes 12 seconds on average to include a transaction in the blockchain.

Introducing solcjs

Solcjs is a Node.js library and command-line tool that is used to compile solidity files. It doesn't use the solc command-line compiler; instead, it compiles purely using JavaScript, so it's much easier to install than solc. Solc is the actual solidity compiler. Solc is written in C++. The C++ code is compiled to JavaScript using emscripten. Every version of solc is compiled to JavaScript. At https://github.com/ethereum/solc-bin/tree/gh-pages/bin, you can find the JavaScript-based compilers of each solidity version.
Solcjs just uses one of these JavaScript-based compilers to compile the solidity source code. These JavaScript-based compilers can run in both browser and NodeJS environments. Browser solidity uses these JavaScript-based compilers to compile the solidity source code.

Installing solcjs

Solcjs is available as an npm package with the name solc. You can install the solcjs npm package locally or globally just like any other npm package. If this package is installed globally, then solcjs, a command-line tool, will be available. So, in order to install the command-line tool, run this command:

npm install -g solc

Now go ahead and run this command to see how to compile solidity files using the command-line compiler:

solcjs --help

We won't be exploring the solcjs command-line tool; instead, we will learn about the solcjs APIs to compile solidity files. By default, solcjs uses a compiler version matching its own version. For example, if you install solcjs version 0.4.8, then it will use the 0.4.8 compiler version to compile by default. Solcjs can be configured to use some other compiler version too. At the time of writing this, the latest version of solcjs is 0.4.8.

Solcjs APIs

Solcjs provides a compile method, which is used to compile solidity code. This method can be used in two different ways depending on whether the source code has any imports or not. If the source code doesn't have any imports, then it takes two arguments; that is, the first argument is the solidity source code as a string and the second is a Boolean indicating whether to optimize the byte code or not. If the source string contains multiple contracts, then it will compile all of them. Here is an example to demonstrate this:

var solc = require("solc");
var input = "contract x { function g() {} }";
var output = solc.compile(input, 1); // 1 activates the optimiser
for (var contractName in output.contracts) {
    // logging code and ABI
    console.log(contractName + ": " + output.contracts[contractName].bytecode);
    console.log(contractName + "; " + JSON.parse(output.contracts[contractName].interface));
}

If your source code contains imports, then the first argument will be an object whose keys are filenames and values are the contents of the files. So whenever the compiler sees an import statement, it doesn't look for the file in the filesystem; instead, it looks for the file contents in the object by matching the filename with the keys. Here is an example to demonstrate this:

var solc = require("solc");
var input = {
    "lib.sol": "library L { function f() returns (uint) { return 7; } }",
    "cont.sol": "import 'lib.sol'; contract x { function g() { L.f(); } }"
};
var output = solc.compile({sources: input}, 1);
for (var contractName in output.contracts)
    console.log(contractName + ": " + output.contracts[contractName].bytecode);

If you want to read the imported file contents from the filesystem during compilation or resolve the file contents during compilation, then the compile method supports a third argument, which is a method that takes the filename and should return the file content.
Here is an example to demonstrate this:

var solc = require("solc");
var input = {
    "cont.sol": "import 'lib.sol'; contract x { function g() { L.f(); } }"
};
function findImports(path) {
    if (path === "lib.sol")
        return { contents: "library L { function f() returns (uint) { return 7; } }" }
    else
        return { error: "File not found" }
}
var output = solc.compile({sources: input}, 1, findImports);
for (var contractName in output.contracts)
    console.log(contractName + ": " + output.contracts[contractName].bytecode);

Using a different compiler version

In order to compile contracts using a different version of solidity, you need to use the useVersion method to get a reference to a different compiler. useVersion takes a string that indicates the JavaScript filename that holds the compiler, and it looks for the file in the /node_modules/solc/bin directory. Solcjs also provides another method called loadRemoteVersion, which takes a compiler filename that matches a filename in the solc-bin/bin directory of the solc-bin repository (https://github.com/ethereum/solc-bin) and downloads and uses it. Finally, solcjs also provides another method called setupMethods, which is similar to useVersion but can load the compiler from any directory. Here is an example to demonstrate all three methods:

var solc = require("solc");
var solcV047 = solc.useVersion("v0.4.7.commit.822622cf");
var output = solcV047.compile("contract t { function g() {} }", 1);
solc.loadRemoteVersion('soljson-v0.4.5.commit.b318366e', function(err, solcV045) {
    if (err) {
        // An error was encountered, display and quit
    }
    var output = solcV045.compile("contract t { function g() {} }", 1);
});
var solcV048 = solc.setupMethods(require("/my/local/0.4.8.js"));
var output = solcV048.compile("contract t { function g() {} }", 1);
solc.loadRemoteVersion('latest', function(err, latestVersion) {
    if (err) {
        // An error was encountered, display and quit
    }
    var output = latestVersion.compile("contract t { function g() {} }", 1);
});

To run the preceding code, you need to first download the v0.4.7.commit.822622cf.js file from the solc-bin repository and place it in the node_modules/solc/bin directory. And then, you need to download the compiler file of solidity version 0.4.8, place it somewhere in the filesystem, and point the path in the setupMethods call to that directory.

Linking libraries

If your solidity source code references libraries, then the generated byte code will contain placeholders for the real addresses of the referenced libraries. These have to be updated via a process called linking before deploying the contract. Solcjs provides the linkBytecode method to link library addresses to the generated byte code. Here is an example to demonstrate this:

var solc = require("solc");
var input = {
    "lib.sol": "library L { function f() returns (uint) { return 7; } }",
    "cont.sol": "import 'lib.sol'; contract x { function g() { L.f(); } }"
};
var output = solc.compile({sources: input}, 1);
var finalByteCode = solc.linkBytecode(output.contracts["x"].bytecode, {
    'L': '0x123456...'
Updating the ABI

The ABI of a contract provides various kinds of information about the contract other than the implementation. The ABI generated by two different versions of the compiler may not match, as higher versions support more Solidity features than lower versions; therefore, they will include extra things in the ABI. For example, the fallback function was introduced in the 0.4.0 version of Solidity, so the ABI generated using compilers whose version is less than 0.4.0 will have no information about fallback functions, and these smart contracts behave as if they have a fallback function with an empty body and a payable modifier. So, the ABI should be updated so that applications that depend on the ABI of a newer Solidity version can have better information about the contract. Solcjs provides an API to update the ABI. Here is an example code to demonstrate this:

var abi = require("solc/abi");

var inputABI = [{"constant":false,"inputs":[],"name":"hello","outputs":[{"name":"","type":"string"}],"payable":false,"type":"function"}];

var outputABI = abi.update("0.3.6", inputABI);

Here, 0.3.6 indicates that the ABI was generated using the 0.3.6 version of the compiler. As we are using solcjs version 0.4.8, the ABI will be updated to match the ABI generated by the 0.4.8 compiler version, not above it. The output of the preceding code will be as follows:

[{"constant":false,"inputs":[],"name":"hello","outputs":[{"name":"","type":"string"}],"payable":true,"type":"function"},{"type":"fallback","payable":true}]

Building a contract deployment platform

Now that we have learned how to use solcjs to compile Solidity source code, it's time to build a platform that lets us write, compile, and deploy contracts. Our platform will let users provide their account address and private key, using which our platform will deploy contracts.

Before you start building the application, make sure that you are running the geth development instance, which is mining, has rpc enabled, and exposes the eth, web3, and txpool APIs over the HTTP-RPC server. You can do all these by running this:

geth --dev --rpc --rpccorsdomain "*" --rpcaddr "0.0.0.0" --rpcport "8545" --mine --rpcapi "eth,txpool,web3"

Building the backend

Let's first build the backend of the app. First of all, run npm install inside the Initial directory to install the required dependencies for our backend. Here is the backend code to run an express service and serve the index.html file and static files:

var express = require("express");
var app = express();

app.use(express.static("public"));

app.get("/", function(req, res){
    res.sendFile(__dirname + "/public/html/index.html");
});

app.listen(8080);

The preceding code is self-explanatory. Now let's proceed further. Our app will have two buttons, that is, compile and deploy. When the user clicks on the compile button, the contract will be compiled, and when the deploy button is clicked on, the contract will be deployed.

We will be compiling and deploying contracts in the backend. Although this can be done in the frontend, we will do it in the backend because solcjs is available only for Node.js (although the JavaScript-based compilers it uses work on the frontend). To learn how to compile on the frontend, go through the source code of solcjs, which will give you an idea about the APIs exposed by the JavaScript-based compiler.

When the user clicks on the compile button, the frontend will make a GET request to the /compile path by passing the contract source code.
Here is the code for the route:

var solc = require("solc");

app.get("/compile", function(req, res){
    var output = solc.compile(req.query.code, 1);
    res.send(output);
});

At first, we import the solcjs library here. Then, we define the /compile route, and inside the route callback, we simply compile the source code sent by the client with the optimizer enabled. Then we just send the solc.compile method's return value to the frontend and let the client check whether the compilation was successful or not.

When the user clicks on the deploy button, the frontend will make a GET request to the /deploy path by passing the contract source code, the constructor arguments, the from address, and the private key. When the user clicks on this button, the contract will be deployed and the transaction hash will be returned to the user. Here is the code for this:

var Web3 = require("web3");
var BigNumber = require("bignumber.js");
var ethereumjsUtil = require("ethereumjs-util");
var ethereumjsTx = require("ethereumjs-tx");

var web3 = new Web3(new Web3.providers.HttpProvider("http://localhost:8545"));

function etherSpentInPendingTransactions(address, callback) {
    web3.currentProvider.sendAsync({
        method: "txpool_content",
        params: [],
        jsonrpc: "2.0",
        id: new Date().getTime()
    }, function (error, result) {
        if(result.result.pending) {
            if(result.result.pending[address]) {
                var txns = result.result.pending[address];
                var cost = new BigNumber(0);

                for(var txn in txns) {
                    cost = cost.add((new BigNumber(parseInt(txns[txn].value))).add((new BigNumber(parseInt(txns[txn].gas))).mul(new BigNumber(parseInt(txns[txn].gasPrice)))));
                }

                callback(null, web3.fromWei(cost, "ether"));
            } else {
                callback(null, "0");
            }
        } else {
            callback(null, "0");
        }
    })
}

function getNonce(address, callback) {
    web3.eth.getTransactionCount(address, function(error, result){
        var txnsCount = result;

        web3.currentProvider.sendAsync({
            method: "txpool_content",
            params: [],
            jsonrpc: "2.0",
            id: new Date().getTime()
        }, function (error, result) {
            if(result.result.pending) {
                if(result.result.pending[address]) {
                    txnsCount = txnsCount + Object.keys(result.result.pending[address]).length;
                    callback(null, txnsCount);
                } else {
                    callback(null, txnsCount);
                }
            } else {
                callback(null, txnsCount);
            }
        })
    })
}

app.get("/deploy", function(req, res){
    var code = req.query.code;
    var arguments = JSON.parse(req.query.arguments);
    var address = req.query.address;

    var output = solc.compile(code, 1);
    var contracts = output.contracts;

    for(var contractName in contracts) {
        var abi = JSON.parse(contracts[contractName].interface);
        var byteCode = contracts[contractName].bytecode;

        var contract = web3.eth.contract(abi);

        var data = contract.new.getData.call(null, ...arguments, {
            data: byteCode
        });

        var gasRequired = web3.eth.estimateGas({
            data: "0x" + data
        });

        web3.eth.getBalance(address, function(error, balance){
            var etherAvailable = web3.fromWei(balance, "ether");

            etherSpentInPendingTransactions(address, function(error, balance){
                etherAvailable = etherAvailable.sub(balance);

                if(etherAvailable.gte(web3.fromWei(new BigNumber(web3.eth.gasPrice).mul(gasRequired), "ether"))) {
                    getNonce(address, function(error, nonce){
                        var rawTx = {
                            gasPrice: web3.toHex(web3.eth.gasPrice),
                            gasLimit: web3.toHex(gasRequired),
                            from: address,
                            nonce: web3.toHex(nonce),
                            data: "0x" + data
                        };

                        var privateKey = ethereumjsUtil.toBuffer(req.query.key, 'hex');
                        var tx = new ethereumjsTx(rawTx);
                        tx.sign(privateKey);

                        web3.eth.sendRawTransaction("0x" + tx.serialize().toString('hex'), function(err, hash) {
                            res.send({result: {
                                hash: hash
                            }});
                        });
                    })
                }
                else {
                    res.send({error: "Insufficient Balance"});
                }
            })
        })

        break;
    }
})

This is how the preceding code works:

1. At first, we imported the web3.js, BigNumber.js, ethereumjs-util, and ethereumjs-tx libraries. Then, we created an instance of Web3.
2. Then, we defined a function named etherSpentInPendingTransactions, which calculates the total ether that's being spent in the pending transactions of an address. As web3.js doesn't provide JavaScript APIs related to the transaction pool, we make a raw JSON-RPC call using web3.currentProvider.sendAsync. sendAsync is used to make raw JSON-RPC calls asynchronously. If you want to make this call synchronously, then use the send method instead of sendAsync. While calculating the total ether in the pending transactions of an address, we look for pending transactions in the transaction pool instead of the pending block, due to the issue we discussed earlier. While calculating the total ether, we add the value and gas of each transaction, as gas also deducts from the ether balance.
3. Next, we defined a function called getNonce, which retrieves the nonce of an address using the technique we discussed earlier. It simply adds the total number of mined transactions to the total number of pending transactions.
4. Finally, we declared the /deploy endpoint. At first, we compile the contract. Then, we deploy only the first contract. Our platform is designed to deploy only the first contract if multiple contracts are found in the provided source code. You can later enhance the app to deploy all the compiled contracts instead of just the first one.
5. Then, we created a contract object using web3.eth.contract. As we aren't using hooked-web3-provider or any hack to intercept sendTransaction calls and convert them into sendRawTransaction calls, in order to deploy the contract we now need to generate the data part of the transaction, which will have the contract byte code and constructor arguments combined and encoded as a hexadecimal string. The contract object actually lets us generate the data of the transaction. This can be done by calling the getData method with the function arguments. If you want to get data to deploy the contract, then call contract.new.getData, and if you want to call a function of the contract, then call contract.functionName.getData. In both cases, provide the arguments to the getData method. So, in order to generate the data of a transaction, you just need the contract's ABI. To learn how the function name and arguments are combined and encoded to generate data, you can check out https://github.com/ethereum/wiki/wiki/Ethereum-Contract-ABI#examples, but this won't be required if you have the ABI of the contract or know how to create the ABI manually.
6. Then, we use web3.eth.estimateGas to calculate the amount of gas that would be required to deploy the contract.
7. Later, we check whether the address has enough ether to pay for the gas required to deploy the contract. We find this out by retrieving the balance of the address, subtracting the ether spent in the pending transactions, and then checking whether the remaining balance is greater than or equal to the amount of ether required for the gas.
8. And finally, we get the nonce, sign, and broadcast the transaction. We simply return the transaction hash to the frontend.
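At this point, the backend can be exercised on its own. As a quick sanity check (assuming the express server shown earlier is running on port 8080 and that curl is available), you could request the /compile route with a URL-encoded contract and inspect the JSON that solc.compile returns; this is the same object the frontend will later parse for errors and contracts:

curl "http://localhost:8080/compile?code=contract%20x%20%7B%20function%20g()%20%7B%7D%20%7D"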
Building the frontend

Now let's build the frontend of our application. Our frontend will contain an editor, using which the user writes code. When the user clicks on the compile button, we will dynamically display input boxes, where each input box will represent a constructor argument. When the deploy button is clicked on, the constructor argument values are taken from these input boxes. The user will need to enter JSON strings in these input boxes.

We will be using the codemirror library to integrate the editor in our frontend. To learn more about how to use codemirror, refer to http://codemirror.net/.

Here is the frontend HTML code of our app. Place this code in the index.html file:

<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="utf-8">
        <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
        <meta http-equiv="x-ua-compatible" content="ie=edge">
        <link rel="stylesheet" href="/css/bootstrap.min.css">
        <link rel="stylesheet" href="/css/codemirror.css">
        <style type="text/css">
            .CodeMirror {
                height: auto;
            }
        </style>
    </head>
    <body>
        <div class="container">
            <div class="row">
                <div class="col-md-6">
                    <br>
                    <textarea id="editor"></textarea>
                    <br>
                    <span id="errors"></span>
                    <button type="button" id="compile" class="btn btn-primary">Compile</button>
                </div>
                <div class="col-md-6">
                    <br>
                    <form>
                        <div class="form-group">
                            <label for="address">Address</label>
                            <input type="text" class="form-control" id="address" placeholder="Prefixed with 0x">
                        </div>
                        <div class="form-group">
                            <label for="key">Private Key</label>
                            <input type="text" class="form-control" id="key" placeholder="Prefixed with 0x">
                        </div>
                        <hr>
                        <div id="arguments"></div>
                        <hr>
                        <button type="button" id="deploy" class="btn btn-primary">Deploy</button>
                    </form>
                </div>
            </div>
        </div>
        <script src="/js/codemirror.js"></script>
        <script src="/js/main.js"></script>
    </body>
</html>

Here, you can see that we have a textarea. The textarea tag will hold whatever the user enters in the codemirror editor. Everything else in the preceding code is self-explanatory. Here is the complete frontend JavaScript code.
Place this code in the main.js file:

var editor = CodeMirror.fromTextArea(document.getElementById("editor"), {
    lineNumbers: true,
});

var argumentsCount = 0;

document.getElementById("compile").addEventListener("click", function(){
    editor.save();

    var xhttp = new XMLHttpRequest();
    xhttp.onreadystatechange = function() {
        if (this.readyState == 4 && this.status == 200) {
            if(JSON.parse(xhttp.responseText).errors != undefined) {
                document.getElementById("errors").innerHTML = JSON.parse(xhttp.responseText).errors + "<br><br>";
            } else {
                document.getElementById("errors").innerHTML = "";
            }

            var contracts = JSON.parse(xhttp.responseText).contracts;

            for(var contractName in contracts) {
                var abi = JSON.parse(contracts[contractName].interface);

                document.getElementById("arguments").innerHTML = "";

                for(var count1 = 0; count1 < abi.length; count1++) {
                    if(abi[count1].type == "constructor") {
                        argumentsCount = abi[count1].inputs.length;

                        document.getElementById("arguments").innerHTML = '<label>Arguments</label>';

                        for(var count2 = 0; count2 < abi[count1].inputs.length; count2++) {
                            var inputElement = document.createElement("input");
                            inputElement.setAttribute("type", "text");
                            inputElement.setAttribute("class", "form-control");
                            inputElement.setAttribute("placeholder", abi[count1].inputs[count2].type);
                            inputElement.setAttribute("id", "arguments-" + (count2 + 1));

                            var br = document.createElement("br");

                            document.getElementById("arguments").appendChild(br);
                            document.getElementById("arguments").appendChild(inputElement);
                        }

                        break;
                    }
                }

                break;
            }
        }
    };

    xhttp.open("GET", "/compile?code=" + encodeURIComponent(document.getElementById("editor").value), true);
    xhttp.send();
})

document.getElementById("deploy").addEventListener("click", function(){
    editor.save();

    var arguments = [];

    for(var count = 1; count <= argumentsCount; count++) {
        arguments[count - 1] = JSON.parse(document.getElementById("arguments-" + count).value);
    }

    var xhttp = new XMLHttpRequest();
    xhttp.onreadystatechange = function() {
        if (this.readyState == 4 && this.status == 200) {
            var res = JSON.parse(xhttp.responseText);

            if(res.error) {
                alert("Error: " + res.error);
            } else {
                alert("Txn Hash: " + res.result.hash);
            }
        } else if(this.readyState == 4) {
            alert("An error occurred.");
        }
    };

    xhttp.open("GET", "/deploy?code=" + encodeURIComponent(document.getElementById("editor").value) + "&arguments=" + encodeURIComponent(JSON.stringify(arguments)) + "&address=" + document.getElementById("address").value + "&key=" + document.getElementById("key").value, true);
    xhttp.send();
})

Here is how the preceding code works:

At first, we add the code editor to the web page. The code editor will be displayed in place of the textarea, and the textarea will be hidden.

Then we have the compile button's click event handler. Inside it, we save the editor, which copies the content of the editor to the textarea. When the compile button is clicked on, we make a request to the /compile path, and once we get the result, we parse it and display the input boxes so that the user can enter the constructor arguments. Here, we only read the constructor arguments for the first contract, but you can enhance the UI to display input boxes for the constructors of all the contracts if there is more than one.

And finally, we have the deploy button's click event handler. Here, we read the constructor arguments' values, parsing them and putting them in an array.
Then we make a request to the /deploy endpoint by passing the address, key, code, and argument values. If there is an error, then we display it in a popup; otherwise, we display the transaction hash in a popup.

Testing

To test the app, run app.js with node inside the Initial directory and visit localhost:8080. You will see what is shown in the following screenshot:

Now enter some Solidity contract code and press the compile button. Then, you will be able to see new input boxes appearing on the right-hand side. For example, take a look at the following screenshot:

Now enter a valid address and its associated private key. Then enter values for the constructor arguments and click on deploy. If everything goes right, you will see an alert box with the transaction hash. For example, take a look at the following screenshot:

Summary

In this article, we learned how to use the transaction pool API, how to calculate the proper nonce, calculate the spendable balance, generate the data of a transaction, compile contracts, and so on. We then built a complete contract compilation and deployment platform. Now you can go ahead and enhance the application we have built to deploy all the contracts found in the editor, handle imports, add libraries, and so on.

Resources for Article:

Further resources on this subject:

What is D3.js? [article]
Making a Web Server in Node.js [article]
An Introduction to Node.js Design Patterns [article]

Creating Sprites

Packt
15 Jun 2017
6 min read
In this article by Nikhil Malankar, author of Learning Android Game Development, we have almost learnt everything about the basics that we need to create various components in Android, so we can now move on to do some more exciting stuff. At this point we will start working on a proper 2D game. It will be a small 2D side-scroller game like Mario. But before we do that, let's first talk about games as a development concept. In order to understand more about games, you will need to understand a bit of game theory. So, before we proceed with creating images and backgrounds on screen, let's dive into some game theory.

Here is a list of topics we will be covering in this article:

Game Theory
Working with colors
Creating images on screen
Making a continuous scrolling background

(For more resources related to this topic, see here.)

Let's start with the first one.

Game Theory

If you observe a game carefully at its source code level, you will notice that a game is just a set of illusions used to create certain effects and display them on screen. Perhaps the best example of this is the game that we are about to develop itself. In order to make your character move ahead, you can do either of two things:

Make the character move ahead
Make the background move behind

Illusions

Either of the two things mentioned above will give you the illusion that the character is moving in a certain direction. Also, if you remember Mario properly, you will notice that the clouds and the grass are one and the same; only their colors were changed. This was because of the memory limitations of the console platform at the time:

Image source: http://hub.packtpub.com/wp-content/uploads/2017/06/mario-clouds-is-grass.jpg

Game developers use many such cheats in order to get their game running. Of course, in today's times we don't have to worry much about memory limitations, because our mobile devices today have the capability of the Apollo 11 rocket that landed on the Moon. Keeping the above two scenarios in mind, we are going to use one of them in our game to make our character move.

We also have to understand that every game is a loop of activities. Unlike an app, you need to draw your game resources every frame, which gives you the illusion of movement or any other effect on screen. This concept is called Frames Per Second (FPS). It is similar to the concept of old films, where a long film strip was projected on screen by rolling it frame by frame. Take a look at the following image to understand this concept better:

Sprite sheet of a game character

As you can see above, a sprite sheet is simply an image consisting of multiple images within itself, used to create an animation, and thereby a sprite is simply an image. If we want to make our character run, we simply read the file Run_000 and play the files sequentially through to Run_009, which makes it appear as though the character is running.

The majority of the things that you will be working with when making a game are based on manipulating movement, so you will need to be clear about your coordinate system, because it will come in handy. Be it firing a bullet out of a gun, character movement, or simply turning around to look here and there, all of it is based on the simple component of movement.

Game Loop

At its core, every game is basically just a loop of events.
It is set up to give calls to various functions and code blocks to execute, in order to issue draw calls on your screen and thereby make the game playable. Mostly, your game loop comprises three parts:

Initialize
Update
Draw

Initializing the game means setting an entry point to your game through which the other two parts can be called. Once your game is initialized, you need to start giving calls to your events, which can be managed through your update function. The draw function is responsible for drawing all your image data on screen. Everything you see on the screen, including your backgrounds, images, or even your GUI, is the responsibility of the draw method. To say the least, your game loop is the heart of your game. This is just a basic overview of the game loop and there is much more complexity you can add to it, but for now this much information is sufficient for you to get started. The following image perfectly illustrates what a game loop is:

Image source: https://gamedevelopment.tutsplus.com/articles/gamedev-glossary-what-is-the-game-loop--gamedev-2469

Game Design Document

Also, before starting a game it is very essential to create a Game Design Document (GDD). This document serves as the groundwork for the game you will be making. 99% of the time, when we start making a game, we lose track of the features planned for it and deviate from the core game experience. So, it is always recommended to have a GDD in place in order to keep focus. A GDD consists of the following things:

Gameplay mechanics
Story (if any)
Level design
Sound and music
UI planning and game controls

You can read more about the Game Design Document by going to the following link: https://en.wikipedia.org/wiki/Game_design_document

Prototyping

When making a game, we need to test it simultaneously. A game is one of the most complex pieces of software, and if we mess up on one part there is a chance that it might break the entire game as a whole. The process of testing a simple working model of your game is called prototyping. Making a prototype of your game is one of THE most important aspects of a game, because this is where you test out the basic mechanics of your game. A prototype should be a simple working model of your game with basic functionality. It can also be termed a stripped-down version of your game.

Summary

Congratulations! You have successfully learnt how to create images and work with colors in Android Studio.

Resources for Article:

Further resources on this subject:

Android Game Development with Unity3D [article]
Optimizing Games for Android [article]
Drawing and Drawables in Android Canvas [article]

Your first Unity project

Packt
15 Jun 2017
11 min read
In this article by Tommaso Lintrami, the author of the book Unity 2017 Game Development Essentials - Third Edition, we will see that when starting out in game development, one of the best ways to learn the various parts of the discipline is to prototype your idea. Unity excels in assisting you with this, with its visual scene editor and public member variables that form settings in the Inspector, and prototyping is a good way to get to grips with working in the Unity editor.

In this article, you will learn about:

Creating a new project in Unity
Working with GameObjects in the Scene View and Hierarchy

(For more resources related to this topic, see here.)

Unity comes in two main forms: a standard, free download and a paid Pro developer license. We'll stick to using features that users of the standard free edition will have access to.

If you're launching Unity for the very first time, you'll be presented with a Unity demonstration project. While this is useful to look into the best practices for the development of high-end projects, if you're starting out, looking over some of the assets and scripting may feel daunting, so we'll leave this behind and start from scratch!

Take a look at the following steps for setting up your Unity project:

1. In Unity, go to File | New Project and you will be presented with the Project Wizard. The following screenshot shows the Mac version:
2. From here, select the NEW tab and the 3D type of project. Be aware that if at any time you wish to launch Unity and be taken directly to the Project Wizard, simply launch the Unity Editor application and immediately hold the Alt key (Mac and Windows). This can be set as the default behavior for launch in the Unity preferences.
3. Click the Set button and choose where you would like to save your new Unity project folder on your hard drive. The new project has been named UGDE after this book, and I have chosen to store it on my desktop for easy access.
4. The Project Wizard also offers the ability to import many asset packages into your new project, which are provided free to use in your game development by Unity Technologies. Comprising scripts, ready-made objects, and other artwork, these packages are a useful way to get started in various types of new project. You can also import these packages at any time from the Assets menu within Unity, by selecting Import Package and choosing from the list of available packages. You can also import a package from anywhere on your hard drive by choosing the Custom Package option here. This import method is also used to share assets with others, and when receiving assets you have downloaded through the Asset Store (see Window | Asset Store to view this part of Unity later).
5. From the list of packages to be imported, select the following (as shown in the previous image):

Characters
Cameras
Effects
TerrainAssets
Environment

6. When you are happy with your selection, simply choose Create Project at the bottom of this dialog window. Unity will then create your new project and you will see progress bars representing the import of the selected packages.

A basic prototyping environment

To create a simple environment to prototype some game mechanics, we'll begin with a basic series of objects with which we can introduce gameplay that allows the player to aim and shoot at a wall of primitive cubes. When complete, your prototyping environment will feature a floor comprised of a cube primitive, a main camera through which to view the 3D world, and a point light set up to highlight the area where our gameplay will be introduced.
It will look something like what is shown in the following screenshot:

Setting the scene

As all new scenes come with a Main Camera object by default, we'll begin by adding a floor for our prototyping environment:

1. On the Hierarchy panel, click the Create button, and from the drop-down menu, choose Cube. The items listed in this drop-down menu can also be found in the GameObject | Create Other top menu.
2. You will now see an object in the Hierarchy panel called Cube. Select this and press Return (Mac)/F2 (Windows), or double-click the object name slowly (both platforms), to rename this object. Type in Floor and press Return (both platforms) to confirm this change.
3. For consistency's sake, we will begin our creation at world zero, the center of the 3D environment we are working in. To ensure that the floor cube you just added is at this position, ensure it is still selected in the Hierarchy panel and then check the Transform component on the Inspector panel, ensuring that the position values for X, Y, and Z are all at 0. If not, change them all to zero, either by typing them in or by clicking the cog icon to the right of the component and selecting Reset Position from the pop-out menu.
4. Next, we'll turn the cube into a floor by stretching it out in the X and Z axes. Into the X and Z values under Scale in the Transform component, type a value of 100, leaving Y at a value of 1.

Adding simple lighting

Now we will highlight part of our prototyping floor by adding a point light:

1. Select the Create button on the Hierarchy panel (or go to GameObject | Create Other) and choose Point Light.
2. Position the new point light at (0, 20, 0) using the Position values in the Transform component, so that it is 20 units above the floor.
3. You will notice that this means that the floor is out of range of the light, so expand the range by dragging on the yellow dot handles that intersect the outline of the point light in the Scene View, until the value for Range shown in the Light component in the Inspector reaches somewhere around a value of 40 and the light is creating a lit part of the floor object.

Bear in mind that most components and visual editing tools in the Scene View are inextricably linked, so altering values such as Range in the Inspector Light component will update the visual display in the Scene View as you type, and stay constant as soon as you press Return to confirm the values entered.

Another brick in the wall

Now let's make a wall of cubes that we can launch a projectile at. We'll do this by creating a single master brick, adding components as necessary, and then duplicating this until our wall is complete.

Building the master brick

In order to create a template for all of our bricks, we'll start by creating a master object, something to create clones of. This is done as follows:

1. Click the Create button at the top of the Hierarchy, and select Cube. Position this at (0, 1, 0) using the Position values in the Transform component on the Inspector. Then, focus your view on this object by ensuring it is still selected in the Hierarchy, hovering your cursor over the Scene View, and pressing F.
2. Add physics to your Cube object by choosing Component | Physics | Rigidbody from the top menu. This means that your object is now a Rigidbody: it has mass and gravity, and it is affected by other objects using the physics engine, for realistic reactions in the 3D world.
3. Finally, we'll color this object by creating a material. Materials are a way of applying color and imagery to our 3D geometry.
To make a new one, go to the Create button on the Project panel and choose Material from the drop-down menu. Press Return (Mac) or F2 (Windows) to rename this asset to Red instead of the default name New Material. You can also right-click in the Materials Project folder and choose Create | Material, or alternatively use the editor main menu: Assets | Create | Material.

With this material selected, the Inspector shows its properties. Click on the color block to the right of Main Color [see image label 1] to open the Color Picker [see image label 2]. This will differ in appearance depending upon whether you are using Mac or Windows. Simply choose a shade of red, and then close the window. The Main Color block should now have been updated.

To apply this material, drag it from the Project panel and drop it onto either the cube as seen in the Scene View, or onto the name of the object in the Hierarchy. The material is then applied to the Mesh Renderer component of this object and immediately seen following the other components of the object in the Inspector. Most importantly, your cube should now be red!

Adjusting settings using the preview of this material on any object will edit the original asset, as this preview is simply a link to the asset itself, not a newly editable instance.

Now that our cube has a color and physics applied through the Rigidbody component, it is ready to be duplicated and act as one brick in a wall of many. However, before we do that, let's have a quick look at the physics in action. With the cube still selected, set the Y position value to 15 and the X rotation value to 40 in the Transform component in the Inspector. Press Play at the top of the Unity interface and you should see the cube fall and then settle, having fallen at an angle. The shortcut for Play is Ctrl+P for Windows and Command+P for Mac. Press Play again to stop testing. Do not press Pause, as this will only temporarily halt the test, and changes made thereafter to the scene will not be saved. Set the Y position value for the cube back to 1, and set the X rotation back to 0.

Now that we know our brick behaves correctly, let's start creating a row of bricks to form our wall.

And snap!—It's a row

To help you position objects, Unity allows you to snap to specific increments when dragging; these increments can be redefined by going to Edit | Snap Settings. To use snapping, hold down Command (Mac) or Ctrl (Windows) when using the Translate tool (W) to move objects in the Scene View.

So, in order to start building the wall, duplicate the cube brick we already have using the shortcut Command+D (Mac) or Ctrl+D (PC), then drag the red axis handle while holding the snapping key. This will snap one unit at a time by default, so snap-move your cube one unit in the X axis so that it sits next to the original cube, as shown in the following screenshot:

Repeat this procedure of duplication and snap-dragging until you have a row of 10 cubes in a line. This is the first row of bricks, and to simplify building the rest of the bricks we will now group this row under an empty object, and then duplicate the parent empty object.

Vertex snapping

The basic snapping technique used here works well as our cubes are a generic scale of 1, but when scaling more detailed shaped objects, you should use vertex snapping instead. To do this, ensure that the Translate tool is selected and hold down V on the keyboard. Now hover your cursor over a vertex point on your selected object and drag to any other vertex of another object to snap to it.
Grouping and duplicating with empty objects

Create an empty object by choosing GameObject | Create Empty from the top menu, then position this at (4.5, 0.5, -1) using the Transform component in the Inspector. Rename this from the default name GameObject to CubeHolder.

Now select all of the cube objects in the Hierarchy by selecting the top one, holding the Shift key, and then selecting the last. Now drag this list of cubes in the Hierarchy onto the empty object named CubeHolder in the Hierarchy in order to make this their parent object. The Hierarchy should now look like this:

You'll notice that the parent empty object now has an arrow to the left of its object title, meaning you can expand and collapse it. To save space in the Hierarchy, click the arrow now to hide all of the child objects, and then re-select the CubeHolder.

Now that we have a complete row made and parented, we can simply duplicate the parent object and use snap-dragging to lift a whole new row up in the Y axis. Use the duplicate shortcut (Command/Ctrl + D) as before, then select the Translate tool (W) and use the snap-drag technique (hold Command on Mac, Ctrl on PC) outlined earlier to lift by 1 unit in the Y axis by pulling the green axis handle.

Repeat this procedure to create eight rows of bricks in all, one on top of the other. It should look something like the following screenshot. Note that in the image all CubeHolder row objects are selected in the Hierarchy.

Summary

In this article, you should have become familiar with the basics of using the Unity interface and working with GameObjects.

Resources for Article:

Further resources on this subject:

Components in Unity [article]
Component-based approach of Unity [article]
Using Specular in Unity [article]

WordPress as a Web Application Framework

Packt
15 Jun 2017
20 min read
In this article written by Rakhitha Ratanayake, author of the book Wordpress Web Application Development - Third Edition, you will learn that WordPress has matured from the most popular blogging platform into the most popular content management system. Thousands of developers around the world are making a living from WordPress design and development. As more and more people are interested in using WordPress, the dream of using this amazing framework for web application development is becoming possible. The future seems bright, as WordPress has already got dozens of built-in features which can be easily adapted to web application development with slight modifications.

Since you are already reading this article, you have to be someone who is really excited to see how WordPress fits into web application development. Throughout this article, we will learn how we can inject the best practices of web development into the WordPress framework to build web applications in a rapid process.

Basically, this article will be important for developers from two different perspectives. On one hand, beginner- to intermediate-level WordPress developers can get knowledge of cutting-edge web development technologies and techniques to build complex applications. On the other hand, web development experts who are already familiar with popular PHP frameworks can learn WordPress for rapid application development. So, let's get started!

In this article, we will cover the following topics:

WordPress as a CMS
WordPress as a web application framework
Simplifying development with built-in features
Identifying the components of WordPress
Making a development plan for a forum management application
Understanding limitations and sticking with guidelines
Building a question-answer interface
Enhancing features of the questions plugin

(For more resources related to this topic, see here.)

In order to work with this article, you should be familiar with WordPress themes, plugins, and its overall process. Developers who are experienced in PHP frameworks can work with this article while using the reference sources to learn WordPress. By the end of this article, you will have the ability to make the decision to choose WordPress for web development.

WordPress as a CMS

Way back in 2003, WordPress released its first version as a simple blogging platform and continued to improve until it became the most popular blogging tool. Later, it continued to improve as a CMS and now has a reputation for being the most popular CMS for over 5 years. These days everyone sees WordPress as a CMS rather than just a blogging tool.

Now the question is, where will it go next? Recent versions of WordPress have included popular web development libraries such as Backbone.js and Underscore.js, and developers are building different types of applications with WordPress. Also, the recent introduction of the REST API is a major indication that WordPress is moving in the direction of building web applications. The combination of the REST API and modern JavaScript frameworks will enable developers to build complex web applications with WordPress.

Before we consider the application development aspects of WordPress, it's ideal to figure out the reasons for it being such a popular CMS.
The following are some of the reasons behind the success of WordPress as a CMS:

Plugin-based architecture for adding independent features and the existence of over 40,000 open source plugins
Ability to create unlimited free websites at www.wordpress.com and use the basic WordPress features
A super simple and easy-to-access administration interface
A fast learning curve and comprehensive documentation for beginners
A rapid development process involving themes and plugins
An active development community with awesome support
Flexibility in building websites with its themes, plugins, widgets, and hooks
Availability of large premium theme and plugin marketplaces for developers to sell advanced plugins/themes, and for users to build advanced sites with those premium plugins/themes without needing a developer

These reasons prove why WordPress is the top CMS for website development. However, experienced developers who work with full stack web applications don't believe that WordPress has a future in web application development. While it's up for debate, we'll see what WordPress has to offer for web development. Once you complete reading this article, you will be able to decide whether WordPress has a future in web applications. I have been working with full stack frameworks for several years, and I certainly believe in the future of WordPress for web development.

WordPress as a web application framework

In practice, the decision to choose a development framework depends on the complexity of your application. Developers will tend to go for frameworks in most scenarios. It's important to figure out why we go with frameworks for web development. Here's a list of possible reasons why frameworks become a priority in web application development:

Frameworks provide stable foundations for building custom functionalities
Usually, stable frameworks have a large development community with active support
They have built-in features to address the common aspects of application development, such as routing, language support, form validation, user management, and more
They have a large number of utility functions to address repetitive tasks

Full stack development frameworks such as Zend, CodeIgniter, and CakePHP adhere to the points mentioned in the preceding section, which in turn makes them the frameworks of choice for most developers. However, we have to keep in mind that WordPress is an application where we build applications on top of existing features. On the other hand, traditional frameworks are foundations used for building applications such as WordPress. Now, let's take a look at how WordPress fits into the boots of a web application framework.

The MVC versus event-driven architecture

A vast majority of web development frameworks are built to work with the Model-View-Controller (MVC) architecture, where an application is separated into independent layers called models, views, and controllers. In MVC, we have a clear understanding of what goes where and when each of the layers will be integrated in the process.

So, the first thing most developers will look at is the availability of MVC in WordPress. Unfortunately, WordPress is not built on top of the MVC architecture. This is one of the main reasons why developers refuse to choose it as a development framework. Even though it is not MVC, we can create a custom execution process to make it work like an MVC application.
Also, we can find frameworks such as WP MVC, which can be used to take advantage of both WordPress's native functionality and its vast plugin library, and all of the many advantages of an MVC framework. Unlike other frameworks, it won't have the full capabilities of MVC. However, the unavailability of the MVC architecture doesn't mean that we cannot develop quality applications with WordPress. There are many other ways to separate concerns in WordPress applications.

WordPress, on the other hand, relies on a procedural event-driven architecture with its action hooks and filters system. Once a user makes a request, these actions will get executed in a certain order to provide the response to the user. You can find the complete execution procedure at http://codex.wordpress.org/Plugin_API/Action_Reference. In the event-driven architecture, both model and controller code gets scattered throughout the theme and plugin files.

Simplifying development with built-in features

As we discussed in the previous section, the quality of a framework depends on its core features. The better the quality of the core, the better it will be for developing quality and maintainable applications. It's surprising to see the number of WordPress features directly related to web development, even though it is meant to create websites. Let's get a brief introduction to the WordPress core features to see how they fit into web application development.

User management

Built-in user management features are quite advanced in order to cater to the most common requirements of any web application. Its user roles and capability handling makes it much easier to control the access to specific areas of your application. We can separate users into multiple levels using roles and then use capabilities to define the permitted functionality for each user level. Most full stack frameworks don't have built-in user management features, and hence, this can be considered an advantage of using WordPress.

Media management

File uploading and managing is a common and time-consuming task in web applications. The media uploader, which comes built into WordPress, can be effectively used to automate file-related tasks without writing much source code. A super-simple interface makes it easy for application users to handle file-related tasks. Also, WordPress offers built-in functions for directly uploading media files without the media uploader. These functions can be used effectively to handle advanced media uploading requirements without spending much time.

Template management

WordPress offers a simple template management system for its themes. It is not as complex or fully featured as a typical template engine. However, it offers a wide range of capabilities from a CMS development perspective, which we can extend to suit web applications.

Database management

In most scenarios, we will be using the existing database table structure for our application development. WordPress database management functionalities offer a quick and easy way of working with existing tables with its own style of functions. Unlike other frameworks, WordPress provides a built-in database structure, and hence most of the functionalities can be used to directly work with these tables without writing custom SQL queries.

Routing

Comprehensive support for routing is provided through permalinks. WordPress makes it simple to change the default routing and choose your own routing, in order to build search engine friendly URLs.
XML-RPC API

Building an API is essential for allowing third-party access to our application. WordPress provides a built-in API for accessing CMS-related functionality through its XML-RPC interface. Also, developers are allowed to create custom API functions through plugins, making it highly flexible for complex applications.

REST API

The REST API makes it possible to give third-party access to the application data, similar to the XML-RPC API. This API uses easy to understand HTTP requests and the JSON format, making it easier to communicate with WordPress applications. JavaScript is becoming the modern trend in developing applications, so the availability of JSON in the REST API will allow external users to access and manipulate WordPress data within their JavaScript-based applications.

Caching

Caching in WordPress can be categorized into two sections, called persistent and nonpersistent cache. Nonpersistent caching is provided by the WordPress cache object, while persistent caching is provided through its Transient API. Caching techniques in WordPress are simple compared to other frameworks, but they are powerful enough to cater to complex web applications.

Scheduling

As developers, you might have worked with cron jobs for executing certain tasks at specified intervals. WordPress offers the same scheduling functionality through built-in functions, similar to a cron job. However, WordPress cron execution is slightly different from normal cron jobs. In WordPress, cron won't be executed unless someone visits the site. Typically, it's used for scheduling future posts. However, it can be extended to cater to complex scheduling functionality.

Plugins and widgets

The power of WordPress comes from its plugin mechanism, which allows us to dynamically add or remove functionality without interrupting other parts of the application. Widgets can be considered as a part of the plugin architecture and will be discussed in detail further in this article.

Themes

The design of a WordPress site comes through the theme. Themes offer many built-in template files to cater to the default functionality. Themes can be easily extended for custom functionality. Also, the design of the site can be changed instantly by switching to a compatible theme.

Actions and filters

Actions and filters are part of the WordPress hook system. Actions are events that occur during a request. We can use WordPress actions to execute certain functionalities after a specific event is completed. On the other hand, filters are functions that are used to filter, modify, and return the data.

Flexibility is one of the key reasons for the higher popularity of WordPress compared to other CMSs. WordPress has its own way of extending the functionality of custom features as well as core features through actions and filters. These actions and filters allow the developers to build advanced applications and plugins, which can be easily extended with minor code changes. As a WordPress developer, it's a must to know the perfect use of these actions and filters in order to build highly flexible systems.

The admin dashboard

WordPress offers a fully featured backend for administrators as well as normal users. These interfaces can be easily customized to adapt to custom applications. All the application-related lists, settings, and data can be handled through the admin section.

The overall collection of features provided by WordPress can be effectively used to match the core functionalities provided by full stack PHP frameworks.
Identifying the components of WordPress

WordPress comes with a set of prebuilt components which are intended to provide different features and functionality for an application. A flexible theme and powerful admin features act as the core of WordPress websites, while plugins and widgets extend the core with application-specific features. As a CMS, we all have a pretty good understanding of how these components fit into a WordPress website.

Here our goal is to develop web applications with WordPress, and hence it is important to identify the functionality of these components from the perspective of web applications. So, we will look at each of the following components, how they fit into web applications, and how we can take advantage of them to create flexible applications through a rapid development process:

The role of WordPress themes
The role of the admin dashboard
The role of plugins
The role of widgets

The role of WordPress themes

Most of us are used to seeing WordPress as a CMS. In its default view, a theme is a collection of files used to skin your web application layouts. In web applications, it's recommended to separate different components into layers such as models, views, and controllers. WordPress doesn't adhere to the MVC architecture. However, we can easily visualize themes or templates as the presentation layer of WordPress.

In simple terms, views should contain the HTML needed to generate the layout, and all the data they need should be passed to the views. WordPress is built to create content management systems, and hence it doesn't focus on separating views from its business logic. Themes contain views, also known as template files, as a mix of both HTML code and PHP logic. As web application developers, we need to alter the behavior of existing themes in order to limit the logic inside templates and use plugins to parse the necessary model data to views.

Structure of a WordPress page layout

Typically, posts or pages created in WordPress consist of five common sections. Most of these components will be common across all the pages in the website. In web applications, we also separate the common layout content into separate views to be included inside other views. It's important for us to focus on how we can adapt the layout into a web application-specific structure. Let's visualize the common layout of WordPress using the following diagram:

Having looked at the structure, it's obvious that Header, Footer, and the Main Content area are mandatory even for web applications. However, the Footer and Comments sections will play a less important role in web applications, compared to web pages. The Sidebar is important in web applications, even though it won't be used with the same meaning. It can be quite useful as a dynamic widget area.

Customizing the application layout

Web applications can be categorized as projects and products. A project is something we develop targeting the specific requirements of a client. On the other hand, a product is an application created based on a common set of requirements for a wide range of users. Therefore, customizations will be required on the layouts of your product based on different clients. WordPress themes make it simple to customize the layout and features using child themes. We can make the necessary modifications in the child theme while keeping the core layout in the parent theme. This will prevent any code duplication in customizing layouts. Also, the ability to switch themes is a powerful feature that eases layout customization.
The role of the admin dashboard

The administration interface of an application plays one of the most important roles behind the scenes. WordPress offers one of the most powerful and easy-to-access admin areas amongst competing frameworks. Most of you should be familiar with using the admin area for CMS functionalities. However, we will have to understand how each component in the admin area suits the development of web applications.

The admin dashboard

The dashboard is the location where all the users get redirected once logged into the admin area. Usually, it contains dynamic widget areas with the most important data of your application. The dashboard can play a major role in web applications, compared to blogging or CMS functionality. The dashboard contains a set of default widgets that are mainly focused on main WordPress features such as posts, pages, and comments. In web applications, we can remove the existing widgets related to the CMS and add application-specific widgets to create a powerful dashboard. WordPress offers a well-defined API to create custom admin dashboard widgets, and hence we can create a very powerful dashboard using custom widgets for custom requirements in web applications.

Posts and pages

Posts in WordPress are built for creating content such as articles and tutorials. In web applications, posts will be the most important section for creating different types of data. Often, we will choose custom post types instead of normal posts for building advanced data creation sections. On the other hand, pages are typically used to provide the static content of the site. Usually, we have static pages such as About Us, Contact Us, Services, and so on.

Users

User management is a must-use section for any kind of web application. User roles, capabilities, and profiles will be managed in this section by the authorized users.

Appearance

Themes and application configurations will be managed in this section. Widgets and theme options will be the important sections related to web applications. Generally, widgets are used in the sidebars of WordPress sites to display information such as recent members, comments, posts, and so on. However, in web applications, widgets can play a much bigger role, as we can use widgets to split the main template into multiple sections. Also, these types of widgetized areas become handy in applications where the majority of features are implemented with AJAX. The theme options panel can be used as the general settings panel of web applications, where we define the settings related to templates and generic site-specific configurations.

Settings

This section involves general application settings. Most of the prebuilt items in this section are suited for blogs and websites. We can customize this section to add new configuration areas related to our plugins, used in web application development.

There are some other sections, such as links, pages, and comments, which will not be used frequently in complex web application development. The ability to add new sections is one of the key reasons for its flexibility.

The role of plugins

In normal circumstances, WordPress developers use functions that involve application logic scattered across theme files and plugins. Some developers even change the core files of WordPress. Altering WordPress core files, third-party theme, or plugin files is considered a bad practice, since we lose all the modifications on version upgrades and it may break the compatibility of other parts of WordPress. In web applications, we need to be much more organized.
In the Role of WordPress themes section, we discussed the purpose of having a theme for web applications. Plugins will be, and should be, used to provide the main logic and content of your application. The plugin architecture is a powerful way to add or remove features without affecting the core. Also, we have the ability to separate independent modules into their own plugins, making it easier to maintain. On top of this, plugins have the ability to extend other plugins. Since there are over 40,000 free plugins and a large number of premium plugins, sometimes you don't have to develop anything for WordPress applications. You can just use a number of plugins and integrate them properly to build advanced applications.

The role of widgets

The official documentation of WordPress refers to widgets as components that add content and features to your sidebar. From a typical blogging or CMS user's perspective, it's a completely valid statement. Actually, widgets offer more in web applications by going beyond the content that populates sidebars. Modern WordPress themes provide a wide range of built-in widgets for advanced functionality, making it much easier to build applications. The following screenshot shows a typical widgetized sidebar of a website:

We can use dynamic widgetized areas to include complex components as widgets, making it easy to add or remove features without changing source code. The following screenshot shows a sample dynamic widgetized area. We can use the same technique for developing applications with WordPress.

Throughout these sections, we covered the main components of WordPress and how they fit into actual web application development. Now, we have a good understanding of the components in order to plan our application developed throughout this article.

A development plan for the forum management application

In this article, our main goal is to learn how we can build full stack web applications using built-in WordPress features. Therefore, I thought of building a complete application, explaining each and every aspect of web development. We will develop an online forum management system for creating public forums or managing a support forum for a specific product or service. This application can be considered as a mini version of a powerful forum system like bbPress. We will be starting the development of this application.

Planning is a crucial task in web development, through which we will save a lot of time and avoid potential risks in the long run. First, we need to get a basic idea about the goal of this application, its features and functionalities, and the structure of its components to see how it fits into WordPress.

Application goals and target audience

Anyone who uses the Internet on a day-to-day basis knows the importance of online discussion boards, also known as forums. These forums allow us to participate in a large community and discuss common matters, either related to a specific subject or a product. The application developed throughout this article is intended to provide a simple and flexible forum management application using a WordPress plugin, with the goals of:

Learning to develop a forum application
Learning to use features of various online forums
Learning to manage a forum for your product or service

This application will be targeted towards all the people who have participated in an online forum or used a support system of a product they purchased.
I believe that both output of this application and the contents will be ideal for the PHP developers who want to jump into WordPress application development. Summary Our main goal was to find how WordPress fits into web application development. We started this articleby identifying the CMS functionalities of WordPress. We explored the features and functionalities of popular full stack frameworks and compared them with the existing functionalities of WordPress. Then, we looked at the existing components and features of WordPress and how each of those components fit into a real-world web application. We also planned the forum management application requirements and identified the limitations in using WordPress for web applications. Finally, we converted the default interface into a question-answer interface in a rapid process using existing functionalities, without interrupting the default behavior of WordPress and themes. By now, you should be able to decide whether to choose WordPress for your web application, visualize how your requirements fits into components of WordPress, and identify and minimize the limitations. Resources for Article: Further resources on this subject: Creating Your Own Theme—A Wordpress Tutorial [article] Introduction to a WordPress application's frontend [article] Wordpress: Buddypress Courseware [article]

Configuring the ESP8266

Packt
14 Jun 2017
10 min read
In this article by Marco Schwartz, the author of the book ESP8266 Internet of Things Cookbook, we will cover the following recipes:

Setting up the Arduino development environment for the ESP8266
Choosing an ESP8266
Required additional components

(For more resources related to this topic, see here.)

Setting up the Arduino development environment for the ESP8266

To start us off, we will look at how to set up the Arduino IDE development environment so that we can use it to program the ESP8266. This will involve installing the Arduino IDE and getting the board definitions for our ESP8266 module.

Getting ready

The first thing you should do is download the Arduino IDE if you do not already have it installed on your computer. You can do that from this link: https://www.arduino.cc/en/Main/Software. The webpage will appear as shown and features the latest version of the Arduino IDE. Select your operating system and download the latest version that is available when you access the link (it was 1.6.13 when this article was being written):

When the download is complete, install the Arduino IDE and run it on your computer. Now that the installation is complete, it is time to get the ESP8266 definitions. Open the Preferences window in the Arduino IDE from File | Preferences or by pressing Ctrl + Comma. Copy this URL: http://arduino.esp8266.com/stable/package_esp8266com_index.json. Paste it into the field labelled Additional Board Manager URLs, as shown in the figure. If you are adding other URLs too, use a comma to separate them:

Open the Board Manager from the Tools | Board menu and install the ESP8266 platform. The Board Manager will download the board definition files from the link provided in the Preferences window and install them. When the installation is complete, the ESP8266 board definitions should appear as shown in the screenshot. Now you can select your ESP8266 board from the Tools | Board menu:

How it works…

The Arduino IDE is an open source development environment used for programming Arduino boards and Arduino-based boards. It is also used to upload sketches to other open source boards, such as the ESP8266. This makes it an important accessory when creating Internet of Things projects.

Choosing an ESP8266 board

The ESP8266 module is a self-contained System on Chip (SoC) that features an integrated TCP/IP protocol stack, which allows you to add Wi-Fi capability to your projects. The module is usually mounted on circuit boards that break out the pins of the ESP8266 chip, making it easy for you to program the chip and to interface with input and output devices. ESP8266 boards come in different forms depending on the company that manufactures them. All the boards use Espressif's ESP8266 chip as the main controller, but they have different additional components and different pin configurations, giving each board unique additional features. Therefore, before embarking on your IoT project, take some time to compare and contrast the different types of ESP8266 boards that are available. This way, you will be able to select the board that has the features best suited to your project.

Available options

The ESP8266-01 module is the most basic ESP8266 board available on the market. It has 8 pins, which include 4 General Purpose Input/Output (GPIO) pins, serial communication TX and RX pins, an enable pin, and the power pins VCC and GND. Since it only has 4 GPIO pins, you can only connect three inputs or outputs to it. The 8-pin header on the ESP8266-01 module has a 2.0mm spacing, which is not compatible with breadboards.
Therefore, you have to look for another way to connect the ESP8266-01 module to your setup when prototyping. You can use female to male jumper wires to do that: The ESP8266-07 is an improved version of the ESP8266-01 module. It has 16 pins which comprise of 9 GPIO pins, serial communication TX and RX pins, a reset pin, an enable pin and power pins VCC and GND. One of the GPIO pins can be used as an analog input pin.The board also comes with a U.F.L. connector that you can use to plug an external antenna in case you need to boost Wi-Fi signal. Since the ESP8266 has more GPIO pins you can have more inputs and outputs in your project. Moreover, it supports both SPI and I2C interfaces which can come in handy if you want to use sensors or actuators that communicate using any of those protocols. Programming the board requires the use of an external FTDI breakout board based on USB to serial converters such as the FT232RL chip. The pads/pinholes of the ESP8266-07 have a 2.0mm spacing which is not breadboard friendly. To solve this, you have to acquire a plate holder that breaks out the ESP8266-07 pins to a breadboard compatible pin configuration, with 2.54mm spacing between the pins. This will make prototyping easier. This board has to be powered from a 3.3V which is the operating voltage for the ESP8266 chip: The Olimex ESP8266 module is a breadboard compatible board that features the ESP8266 chip. Just like the ESP8266-07 board, it has SPI, I2C, serial UART and GPIO interface pins. In addition to that it also comes with Secure Digital Input/Output (SDIO) interface which is ideal for communication with an SD card. This adds 6 extra pins to the configuration bringing the total to 22 pins. Since the board does not have an on-board USB to serial converter, you have to program it using an FTDI breakout board or a similar USB to serial board/cable. Moreover it has to be powered from a 3.3V source which is the recommended voltage for the ESP8266 chip: The Sparkfun ESP8266 Thing is a development board for the ESP8266 Wi-Fi SOC. It has 20 pins that are breadboard friendly, which makes prototyping easy. It features SPI, I2C, serial UART and GPIO interface pins enabling it to be interfaced with many input and output devices.There are 8 GPIO pins including the I2C interface pins. The board has a 3.3V voltage regulator which allows it to be powered from sources that provide more than 3.3V. It can be powered using a micro USB cable or Li-Po battery. The USB cable also charges the attached Li-Po battery, thanks to the Li-Po battery charging circuit on the board. Programming has to be done via an external FTDI board: The Adafruit feather Huzzah ESP8266 is a fully stand-alone ESP8266 board. It has built in USB to serial interface that eliminates the need for using an external FTDI breakout board to program it. Moreover, it has an integrated battery charging circuit that charges any connected Li-Po battery when the USB cable is connected. There is also a 3.3V voltage regulator on the board that allows the board to be powered with more than 3.3V. Though there are 28 breadboard friendly pins on the board, only 22 are useable. 10 of those pins are GPIO pins and can also be used for SPI as well as I2C interfacing. One of the GPIO pins is an analog pin: What to choose? All the ESP8266 boards will add Wi-Fi connectivity to your project. However, some of them lack important features and are difficult to work with. So, the best option would be to use the module that has the most features and is easy to work with. 
The Adafruit ESP8266 fits the bill. The Adafruit ESP8266 is completely stand-alone and easy to power, program and configure due to its on-board features. Moreover, it offers many input/output pins that will enable you to add more features to your projects. It is affordable andsmall enough to fit in projects with limited space. There’s more… Wi-Fi isn’t the only technology that we can use to connect out projects to the internet. There are other options such as Ethernet and 3G/LTE. There are shields and breakout boards that can be used to add these features to open source projects. You can explore these other options and see which works for you. Required additional components To demonstrate how the ESP8266 works we will use some addition components. These components will help us learn how to read sensor inputs and control actuators using the GPIO pins. Through this you can post sensor data to the internet and control actuators from the internet resources such as websites. Required components The components we will use include: Sensors DHT11 Photocell Soil humidity Actuators Relay Powerswitch tail kit Water pump Breadboard Jumper wires Micro USB cable Sensors Let us discuss the three sensors we will be using. DHT11 The DHT11 is a digital temperature and humidity sensor. It uses a thermistor and capacitive humidity sensor to monitor the humidity and temperature of the surrounding air and produces a digital signal on the data pin. A digital pin on the ESP8266 can be used to read the data from the sensor data pin: Photocell A photocell is a light sensor that changes its resistance depending on the amount of incident light it is exposed to. They can be used in a voltage divider setup to detect the amount of light in the surrounding. In a setup where the photocell is used in the Vcc side of the voltage divider, the output of the voltage divider goes high when the light is bright and low when the light is dim. The output of the voltage divider is connected to an analog input pin and the voltage readings can be read: Soil humidity sensor The soil humidity sensor is used for measuring the amount of moisture in soil and other similar materials. It has two large exposed pads that act as a variable resistor. If there is more moisture in the soil the resistance between the pads reduces, leading to higher output signal. The output signal is connected to an analog pin from where its value is read: Actuators Let’s discuss about the actuators. Relays A relay is a switch that is operated electrically. It uses electromagnetism to switch large loads using small voltages. It comprises of three parts: a coil, spring and contacts. When the coil is energized by a HIGH signal from a digital pin of the ESP8266 it attracts the contacts forcing them closed. This completes the circuit and turns on the connected load. When the signal on the digital pin goes LOW, the coil is no longer energized and the spring pulls the contacts apart. This opens the circuit and turns of the connected load: Power switch tail kit A power switch tail kit is a device that is used to control standard wall outlet devices with microcontrollers. It is already packaged to prevent you from having to mess around with high voltage wiring. Using it you can control appliances in your home using the ESP8266: Water pump A water pump is used to increase the pressure of fluids in a pipe. It uses a DC motor to rotate a fan and create a vacuum that sucks up the fluid. 
The sucked fluid is then forced to move by the fan, creating a vacuum again that sucks up the fluid behind it. This in effect moves the fluid from one place to another: Breadboard A breadboard is used to temporarily connect components without soldering. This makes it an ideal prototyping accessory that comes in handy when building circuits: Jumper wires Jumper wires are flexible wires that are used to connect different parts of a circuit on a breadboard: Micro USB cable A micro USB cable will be used to connect the Adafruit ESP8266 board to the compute: Summary In this article we have learned how to setting up the Arduino development environment for the ESP8266,choosing an ESP8266, and required additional components.  Resources for Article: Further resources on this subject: Internet of Things with BeagleBone [article] Internet of Things Technologies [article] BLE and the Internet of Things [article]

IoT Analytics for the Cloud

Packt
14 Jun 2017
19 min read
In this article by Andrew Minteer, author of the book Analytics for the Internet of Things (IoT), we look at the advantages of cloud-based infrastructure for handling and analyzing IoT data. Now that you understand how your data is transmitted back to the corporate servers, you feel you have more of a handle on it. You also have a reference frame in your head of how it is operating out in the real world.

(For more resources related to this topic, see here.)

Your boss stops by again. "Is that rolling average job done running yet?", he asks impatiently. It used to run fine and finished in an hour three months ago. It has steadily taken longer and longer and now sometimes does not even finish. Today, it has been going on six hours and you are crossing your fingers. Yesterday it crashed twice with what looked like out of memory errors. You have talked to your IT group and finance group about getting a faster server with more memory. The cost would be significant and it would likely take months to complete the process of going through purchasing, putting it on order, and having it installed. Your friend in finance is hesitant to approve it. The money was not budgeted for this fiscal year. You feel bad, especially since this is the only analytic job causing you problems. It just runs once a month but produces key data. Not knowing what else to say, you give your boss a hopeful, strained smile and show him your crossed fingers. "It's still running...that's good, right?"

This article is about the advantages of cloud-based infrastructure for handling and analyzing IoT data. We will discuss cloud services including Amazon Web Services (AWS), Microsoft Azure, and ThingWorx. You will learn how to implement analytics elastically to enable a wide variety of capabilities. This article will cover:

Building elastic analytics
Designing for scale
Cloud security and analytics
Key cloud providers: Amazon AWS, Microsoft Azure, PTC ThingWorx

Building elastic analytics

IoT data volumes increase quickly. Analytics for IoT is particularly compute intensive at times that are difficult to predict. Business value is uncertain and requires a lot of experimentation to find the right implementation. Combine all that together and you need something that scales quickly, is dynamic and responsive to resource needs, and has virtually unlimited capacity at just the right time. And all of that needs to be implemented quickly, with low cost and low maintenance needs. Enter the cloud. IoT analytics and cloud infrastructure fit together like a hand in a glove.

What is cloud infrastructure? The National Institute of Standards and Technology defines five essential characteristics:

On-demand self-service: You can provision things like servers and storage as needed and without interacting with someone.
Broad network access: Your cloud resources are accessible over the internet (if enabled) by various methods such as a web browser or mobile phone.
Resource pooling: Cloud providers pool their servers and storage capacity across many customers using a multi-tenant model. Resources, both physical and virtual, are dynamically assigned and reassigned as needed. The specific location of resources is unknown and generally unimportant.
Rapid elasticity: Your resources can be elastically created and destroyed. This can happen automatically as needed to meet demand. You can scale outward rapidly, and you can also contract rapidly. The supply of resources is effectively unlimited from your viewpoint.
Measured service: Resource usage is monitored, controlled, and reported by the cloud provider.
You have access to the same information, providing transparency to your utilization. Cloud systems continuously optimize resources automatically. There is a notion of private clouds that exist on premises or custom built by a third party for a specific organization. For our concerns, we will be discussing public clouds only. By and large, most analytics will be done on public clouds so we will concentrate our efforts there. The capacity available at your fingertips on public clouds is staggering. AWS, as of June 2016, has an estimated 1.3 million servers online. These servers are thought to be three times more efficient than enterprise systems. Cloud providers own the hardware and maintain the network and systems required for the available services. You just have to provision what you need to use, typically through a web application. Providers offer different levels of abstractions. They offer lower level servers and storage where you have fine grained control. They also offer managed services that handle the provisioning of servers, networking, and storage for you. These are used in conjunction with each other without much distinction between the two. Hardware failures are handled automatically. Resources are transferred to new hardware and brought back online. The physical components become unimportant when you design for the cloud, it is abstracted away and you can focus on resource needs. The advantages to using the cloud: Speed: You can bring cloud resources online in minutes. Agility: The ability to quickly create and destroy resources leads to ease of experimentation. This increases the agility of analytics organizations. Variety of services: Cloud providers have many services available to support analytics workflows that can be deployed in minutes. These services manage hardware and storage needs for you. Global reach: You can extend the reach of analytics to the other side of the world with a few clicks. Cost control: You only pay for the resources you need at the time you need them. You can do more for less. To get an idea of the power that is at your fingertips, here is an architectural diagram of something NASA built on AWS as part of an outreach program to school children. Source: Amazon Web Services; https://aws.amazon.com/lex/ By speaking voice commands, it will communicate with a Mars Rover replica to retrieve IoT data such as temperature readings. The process includes voice recognition, natural speech generation from text, data storage and processing, interaction with IoT device, networking, security, and ability to send text messages. This was not a years worth of development effort, it was built by tying together cloud based services already in place. And it is not just for big, funded government agencies like NASA. All of these services and many more are available to you today if your analytics runs in the cloud. Elastic analytics concepts What do we mean by Elastic Analytics? Let’s define it as designing your analytics processes so scale is not a concern. You want your focus to be on the analytics and not on the underlying technology. You want to avoid constraining your analytics capability so it will fit within some set hardware limitations. Focus instead on potential value versus costs. Trade hardware constraints for cost constraints. You also want your analytics to be able to scale. It should go from supporting 100 IoT devices to 1 Million IoT devices without requiring any fundamental changes. All that should happen if the costs increase. 
This reduces complexity and increases maintainability. That translates into lower costs which enables you to do more analytics. More analytics increases the probability of finding value. Finding more value enables even more analytics. Virtuous circle! Some core Elastic Analytics concepts: Separate compute from storage: We are used to thinking about resources like laptop specifications. You buy one device that has 16GB memory and 500GB hard drive because you think that will meet 90% of your needs and it is the top of your budget. Cloud infrastructure abstracts that away. Doing analytics in the cloud is like renting a magic laptop where you can change 4GB memory into 16GB by snapping your fingers. Your rental bill increases for only the time you have it at 16GB. You snap your fingers again and drop it back down to 4GB to save some money. Your hard drive can grow and shrink independently of the memory specification. You are not stuck having to choose a good balance between them. You can match compute needs with requirements. Build for scale from the start: Use software, services, and programming code that can scale from 1 to 1 million without changes. Each analytic process you put in production has continuing maintenance efforts to it that will build up over time as you add more and more. Make it easy on yourself later on. You do not want to have to stop what you are doing to re-architect a process you built a year ago because it hit limits of scale. Make your bottleneck wetware not hardware: By wetware, we mean brain power. “My laptop doesn’t have enough memory to run the job” should never be the problem. It should always be “I haven’t figured it out yet, but I have several possibilities in test as we speak.” Manage to a spend budget not to available hardware: Use as many cloud resources as you need as long as it fits within your spend budget. There is no need to limit analytics to fit within a set number of servers when you run analytics in the cloud. Traditional enterprise architecture purchases hardware ahead of time which incurs a capital expense. Your finance guy does not (usually) like capital expense. You should not like it either, as it means a ceiling has just been set on what you can do (at least in the near term). Managing to spend means keeping an eye on costs, not on resource limitations. Expand when needed and make sure to contract quickly to keep costs down. Experiment, experiment, experiment: Create resources, try things out, kill them off if it does not work. Then try something else. Iterate to the right answer. Scale out resources to run experiments. Stretch when you need to. Bring it back down when you are done. If Elastic Analytics is done correctly, you will find your biggest limitations are Time and Wetware. Not hardware and capital. Design with the endgame in mind Consider how the analytics you develop in the cloud would end up if successful. Would it turn into a regularly updated dashboard? Would it be something deployed to run under certain conditions to predict customer behavior? Would it periodically run against a new set of data and send an alert if an anomaly is detected? When you list out the likely outcomes, think about how easy it would be to transition from the analytics in development to the production version that will be embedded in your standard processes. Choose tools and analytics that make that transition quick and easy. Designing for scale Following some key concepts will help keep changes to your analytics processes to a minimum, as your needs scale. 
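The subsections that follow cover decoupling and encapsulation in more detail. As a small preview of what an encapsulated, decoupled analytics process can look like in code, here is a minimal Python sketch (a hypothetical illustration, not from the book; the function names, record format, and threshold are invented):

```python
# Hypothetical sketch: each stage is encapsulated so it can be changed,
# scaled, or replaced (for example, moved behind a message queue)
# without touching the other stages.

def transform(raw_records):
    # Data transformation step: clean and reshape incoming readings
    return [
        {"device_id": r["id"], "temp_c": (r["temp_f"] - 32) * 5.0 / 9.0}
        for r in raw_records
        if r.get("temp_f") is not None
    ]

def analyze(records):
    # Analytics step: flag devices whose reading crosses a threshold
    return [r for r in records if r["temp_c"] > 30.0]

def act(alerts):
    # Action step: react to the analysis result (here, just print)
    for a in alerts:
        print(f"ALERT device {a['device_id']}: {a['temp_c']:.1f} C")

if __name__ == "__main__":
    raw = [{"id": "dev-1", "temp_f": 98.6}, {"id": "dev-2", "temp_f": 70.0}]
    act(analyze(transform(raw)))
```

Because each stage depends only on the output of the previous one, any stage can later be scaled independently or placed behind a queue without changes to the others.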
Decouple key components Decoupling means separating functional groups into components so they are not dependent upon each other to operate. This allows functionality to change or new functionality to be added with minimal impact on other components. Encapsulate analytics Encapsulate means grouping together similar functions and activity into distinct units. It is a core principle of object oriented programming and you should employ it in analytics as well. The goal is to reduce complexity and simplify future changes. As your analytics develop, you will have a list of actions that is either transforming the data, running it through a model or algorithm, or reacting to the result. It can get complicated quickly. By encapsulating the analytics, it is easier to know where to make changes when needed down the road. You will also be able reconfigure parts of the process without affecting the other components. Encapsulation process is carried out in the following steps: Make a list of the steps. Organize them into groups. Think which groups are likely to change together. Separate the groups that are independent into their own process It is a good idea to have the data transformation steps separate from the analytical steps if possible. Sometimes the analysis is tightly tied to the data transformation and it does not make sense to separate, but in most cases it can be separated. The action steps based on the analysis results almost always should be separate. Each group of steps will also have its own resource needs. By encapsulating them and separating the processes, you can assign resources independently and scale more efficiently where you need it. You can do more with less. Decouple with message queues Decoupling encapsulated analytics processes with message queues has several advantages. It allows for change in any process without requiring the other ones to adjust. This is because there is no direct link between them. It also builds in some robustness in case one process has a failure. The queue can continue to expand without losing data while the down process restarts and nothing will be lost after things get going again. What is a message queue? Simple diagram of a message queue New data comes into a queue as a message, it goes into line for delivery, and then is delivered to the end server when it gets its turn. The process adding a message is called the publisher and the process receiving the message is called the subscriber. The message queue exists regardless of if the publisher or subscriber is connected and online. This makes it robust against intermittent connections (intentional or unintentional). The subscriber does not have to wait until the publisher is willing to chat and vice versa. The size of the queue can also grow and shrink as needed. If the subscriber gets behind, the queue just grows to compensate until it can catch up. This can be useful if there is a sudden burst in messages by the publisher. The queue will act as a buffer and expand to capture the messages while the subscriber is working through the sudden influx. There is a limit, of course. If the queue reaches some set threshold, it will reject (and you will most likely lose) any incoming messages until the queue gets back under control. A contrived but real world example of how this can happen: Joe Cut-rate (the developer): Hey, when do you want this doo-hickey device to wake up and report? Jim Unawares (the engineer): Every 4 hours Joe Cut-rate: No sweat. 
I’ll program it to start at 12am UTC, then every 4 hours after. How many of these you gonna sell again? Jim Unawares: About 20 million Joe Cut-rate: Um….friggin awesome! I better hardcode that 12am UTC then, huh? 4 months later Jim Unawares: We’re only getting data from 10% of the devices. And it is never the same 10%. What the heck? Angela the analyst: Every device in the world reports at exactly the same time, first thing I checked. The message queues are filling up since our subscribers can’t process that fast, new messages are dropped. If you hard coded the report time, we’re going to have to get the checkbook out to buy a ton of bandwidth for the queues. And we need to do it NOW since we are losing 90% of the data every 4 hours. You guys didn’t do that, did you? Although queues in practice typically operate with little lag, make sure the origination time of the data is tracked and not just the time the data was pulled off the queue. It can be tempting to just capture the time the message was processed to save space but that can cause problems for your analytics. Why is this important for analytics? If you only have the date and time the message was received by the subscribing server, it may not be as close as you think to the time the message was generated at the originating device. If there are recurring problems with message queues, the spread in time difference would ebb and flow without you being aware of it. You will be using time values extensively in predictive modeling. If the time values are sometimes accurate and sometimes off, the models will have a harder time finding predictive value in your data. Your potential revenue from repurposing the data can also be affected. Customers are unlikely to pay for a service tracking event times for them if it is not always accurate. There is a simple solution. Make sure the time the device sends the data is tracked along with the time the data is received. You can monitor delivery times to diagnose issues and keep a close eye on information lag times. For example, if you notice the delivery time steadily increases just before you get a data loss, it is probably the message queue filling up. If there is no change in delivery time before a loss, it is unlikely to be the queue. Another benefit to using the cloud is (virtually) unlimited queue sizes when use a managed queue service. This makes the situation described much less likely to occur. Distributed computing Also called cluster computing, distributed computing refers to spreading processes across multiple servers using frameworks that abstract the coordination of each individual server. The frameworks make it appear as if you are using one unified system. Under the covers, it could be a few servers (called nodes) to thousands. The framework handles that orchestration for you. Avoid containing analytics to one server The advantage to this for IoT analytics is in scale. You can add resources by adding nodes to the cluster, no change to the analytics code is required. Try and avoid containing analytics to one server (with a few exceptions). This puts a ceiling on scale. When to use distributed and when to use one server There is a complexity cost to distributed computing though. It is not as simple as single server analytics. Even though the frameworks handle a lot of the complexity for you, you still have to think and design your analytics to work across multiple nodes. 
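Before looking at guidelines for choosing between the two, it helps to make the earlier point about message timing concrete. Here is a minimal Python sketch of a message that carries both the device origination time and the receive time so that delivery lag can be monitored (a hypothetical illustration with an in-process queue standing in for a real broker; the field names are invented):

```python
import json
import time
import queue

# Hypothetical message format: the device includes its own origination
# timestamp, and the subscriber records the time the message was received.
message_queue = queue.Queue()   # stand-in for a real broker or managed queue

def publish(device_id, temp_c):
    message_queue.put(json.dumps({
        "device_id": device_id,
        "temp_c": temp_c,
        "device_timestamp": time.time(),   # when the device generated the data
    }))

def consume():
    msg = json.loads(message_queue.get())
    msg["received_timestamp"] = time.time()   # when the subscriber pulled it
    # Delivery lag can now be monitored; a steadily growing lag is a hint
    # that the queue is filling up.
    msg["delivery_lag_s"] = msg["received_timestamp"] - msg["device_timestamp"]
    return msg

publish("dev-42", 21.7)
print(consume())
```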
Some guidelines on when to keep it simple on one server: There is not much need for scale: Your analytics needs little change even if the number of IoT devices and data explodes. For example, the analytics runs a forecast on data already summarized by month. The volume of devices makes little difference in that case. Small data instead of big data: The analytics runs on a small subset of data without much impact from data size. Analytics on random samples is an example. Resource needs are minimal: Even at orders of magnitude more data, you are unlikely to need more than what is available with a standard server. In that case, keep it simple. Assuming change is constant The world of IoT analytics moves quickly. The analytics you create today will change many times over as you get feedback on results and adapt to changing business conditions. Your analytics processes will need to change. Assume this will happen continuously and design for change. That brings us to the concept of continuous delivery. Continuous delivery is a concept from software development. It automates the release of code into production. The idea is to make change a regular process. Bring this concept into your analytics by keeping a set of simultaneous copies that you use to progress through three stages: Development: Keep a copy of your analytics for improving and trying out new things. Test: When ready, merge your improvements into this copy where the functionality stays the same but it is repeatedly tested. The testing ensures it is working as intended. Keeping a separate copy for test allows development to continue on other functionality. Master: This is the copy that goes into production. When you merge things from test to the Master copy, it is the same as putting it into live use. Cloud providers often have a continuous delivery service that can make this process simpler. For any software developer readers out there, this is a simplification of the Git Flow method, which is a little outside the scope of this article. If the author can drop a suggestion, it is worth some additional research to learn Git Flow and apply it to your analytics development in the cloud. Leverage managed services Cloud infrastructure providers, like AWS and Microsoft Azure, offer services for things like message queues, big data storage, and machine learning processing. The services handle the underlying resource needs like server and storage provisioning and also network requirements. You do not have to worry about how this happens under the hood and it scales as big as you need it. They also manage global distribution of services to ensure low latency. The following image shows the AWS regional data center locations combined with the underwater internet cabling. AWS Regional Data Center Locations and Underwater Internet Cables. Source: http://turnkeylinux.github.io/aws-datacenters/ This reduces the amount of things you have to worry about for analytics. It allows you to focus more on the business application and less on the technology. That is a good thing and you should take advantage of it. An example of a managed service is Amazon Simple Queue Service (SQS). SQS is a message queue where the underlying server, storage, and compute needs is managed automatically by AWS systems. You only need to setup and configure it which takes just a few minutes. Summary In this article, we reviewed what is meant by elastic analytics and the advantages to using cloud infrastructure for IoT analytics. 
Designing for scale was discussed along with distributed computing. The two main cloud providers were introduced, Amazon Web Services and Microsoft Azure. We also reviewed a purpose built software platform, ThingWorx, made for IoT devices, communications, and analysis. Resources for Article:  Further resources on this subject: Building Voice Technology on IoT Projects [article] IoT and Decision Science [article] Introducing IoT with Particle's Photon and Electron [article]

Docker Swarm

Packt
14 Jun 2017
8 min read
In this article by Russ McKendrick, the author of the book Docker Bootcamp, we will cover the following topics:

Creating a Swarm manually
Launching a service

(For more resources related to this topic, see here.)

Creating a Swarm manually

To start off with, we need to launch the hosts. To do this, run the following commands, remembering to replace the Digital Ocean API access token with your own:

docker-machine create --driver digitalocean --digitalocean-access-token 57e4aeaff8d7d1a8a8e46132969c2149117081536d50741191c79d8bc083ae73 swarm01
docker-machine create --driver digitalocean --digitalocean-access-token 57e4aeaff8d7d1a8a8e46132969c2149117081536d50741191c79d8bc083ae73 swarm02
docker-machine create --driver digitalocean --digitalocean-access-token 57e4aeaff8d7d1a8a8e46132969c2149117081536d50741191c79d8bc083ae73 swarm03

Once launched, running docker-machine ls should show you a list of your machines. This should also be reflected in your Digital Ocean control panel:

Now that we have our Docker hosts, we need to assign a role to each of the nodes within the cluster. Docker Swarm has two node roles:

Manager: A manager is a node which dispatches tasks to the workers; all your interaction with the Swarm cluster will be targeted against a manager node. You can have more than one manager node, however in this example we will be using just one.
Worker: Worker nodes accept the tasks dispatched by the manager node; these are where all your services are launched. We will go into services in more detail once we have our cluster configured.

In our cluster, swarm01 will be the manager node, with swarm02 and swarm03 being our two worker nodes. We are going to use the docker-machine ssh command to execute commands directly on our three nodes, starting with configuring our manager node. The commands in this walkthrough will only work on Mac and Linux; the commands to run on Windows will be covered at the end of this section.

Before we initialize the manager node, we need to capture the IP address of swarm01 as a command-line variable:

managerIP=$(docker-machine ip swarm01)

Now that we have the IP address, run the following command to check that it is correct:

echo $managerIP

And then, to configure the manager node, run:

docker-machine ssh swarm01 docker swarm init --advertise-addr $managerIP

You will then receive confirmation that swarm01 is now a manager, along with instructions on what to run to add a worker to the cluster:

You don't have to make a note of the instructions, as we will be running the command in a slightly different way. To add our two workers, we need to capture the join token in a similar way to how we captured the IP address of our manager node with the $managerIP variable. To do this, run:

joinToken=$(docker-machine ssh swarm01 docker swarm join-token -q worker)

Again, echo the variable out to check that it is valid:

echo $joinToken

Now it's time to add our two worker nodes into the cluster by running:

docker-machine ssh swarm02 docker swarm join --token $joinToken $managerIP:2377
docker-machine ssh swarm03 docker swarm join --token $joinToken $managerIP:2377

You should see something similar to the following terminal output:

Connect your local Docker client to the manager node using:

eval $(docker-machine env swarm01)

and then run docker-machine ls again. As you can see from the list of hosts, swarm01 is now active but there is nothing in the SWARM column. Why is that?
Confusingly, there are two different types of Docker Swarm cluster, there is the Legacy Docker Swarm which was managed by Docker machine, and then there is the new Docker Swarm mode which is managed by the Docker engine itself. We have a launched a Docker Swarm mode cluster, this is now the preferred way of launching Swarm, the legacy Docker Swarm is slowly being retired. To get a list of the nodes within our Swarm cluster we need to run the following command: For information on each node you can run the following command (the --pretty flag renders the JSON output from the Docker API): docker node inspect swarm01--pretty You are given a wealth of information about the host, including the fact that it is a manager and it has been launched in digital ocean. Running the same command, but for a worker node shows using similar information: docker node inspect swarm02 --pretty However, as the node is not a manager that section is missing. Before we look at launching services into our cluster we should look at how to launch our cluster using Docker machine on Windows as there are a few differences in the commands used due differences between powershell and bash. First, we need to launch the three hosts: docker-machine.exe create --driver digitalocean --digitalocean-access-token 57e4aeaff8d7d1a8a8e46132969c2149117081536d50741191c79d8bc083ae73 swarm01 docker-machine.exe create --driver digitalocean --digitalocean-access-token 57e4aeaff8d7d1a8a8e46132969c2149117081536d50741191c79d8bc083ae73 swarm02 docker-machine.exe create --driver digitalocean --digitalocean-access-token 57e4aeaff8d7d1a8a8e46132969c2149117081536d50741191c79d8bc083ae73 swarm03 Once the three hosts are up and running: You can create the manager node by running: $managerIP = $(docker-machine.exe ip swarm01) echo $managerIP docker-machine.exe ssh swarm01 docker swarm init --advertise-addr $managerIP Once you have your manager you can add the two worker nodes: $joinIP = “$(docker-machine.exe ip swarm01):2377” echo $joinIP $joinToken = $(docker-machine.exe ssh swarm01 docker swarm join-token -q worker) echo $joinToken docker-machine.exe ssh swarm02 docker swarm join --token $joinToken $joinIP docker-machine.exe ssh swarm03 docker swarm join --token $joinToken $joinIP and then configure your local Docker client to use your manager node and check the cluster status: docker-machine.exe env --shell powershell swarm01 | Invoke-Expression docker-machine.exe ls docker node ls At this stage, no matter which operating system you are using, you should have a three node Docker Swarm cluster in digital ocean, we can now look at a launching service into our cluster. Launching a service Rather than launching containers using the docker container run command you need to create a service A service defines a task which the manager then passes to one of the workers and then a container is launched: docker service create --name cluster -p:80:80/tcp russmckendrick/cluster That’s it, we should now have a single container running on one of our three nodes. To check that the service is running and get a little more information about the service, run the following commands: docker service ls docker service inspect cluster --pretty Now that we have confirmed that our service is running, you will be able to open your browser and enter the IP address of one of your three nodes (which you can get by running docker-machine ls).One of the features of Docker Swarm is it’s routing mesh: A routing mesh? 
When we exposed the port using the -p:80:80/tcp flag, we did a little more than map port 80 on the host to port 80 on the container, we actually created a Swarm load balancer on port 80 across all of the hosts within the cluster. The Swarm load balancer then directs requests to containers within our cluster. Running the commands as shown following, should show you which tasks are running on which nodes, remember tasks are containers which have been launched by the service: docker node ps swarm01 docker node ps swarm02 docker node ps swarm03 Like me, you probably have your single task running on swarm01: We can make things more interesting by scaling our service to add more tasks, to do this simply run the following commands to scale and check our service: docker service scale cluster=6 docker service ls docker service inspect cluster --pretty As you should see, we now have 6 tasks running within our cluster service. Checking the nodes should show that the tasks are evenly distributed between our three nodes: docker node ps swarm01 docker node ps swarm02 docker node ps swarm03 Hitting refresh in your browser should also update the hostname under the Docker image change, another way of seeing this on Mac and Linux is to run the following command: curl -s http://$(docker-machine ip swarm01)/ | grep class= As you can see from the terminal in following output, our requests are being load balanced between the running tasks: Before we terminate our Docker Swarm cluster let’s look at another way we can launch services, before we do we need to remove the currently running service, to do this simply run: docker service rm cluster Summary In this article we have learned how to create a Swarm manually, and how to launch a service. Resources for Article: Further resources on this subject: Orchestration with Docker Swarm [article] Hands On with Docker Swarm [article] Introduction to Docker [article]

Backpropagation Algorithm

Packt
08 Jun 2017
11 min read
In this article by Gianmario Spacagna, Daniel Slater, Phuong Vo.T.H, and Valentino Zocca, the authors of the book Python Deep Learning, we will learn the Backpropagation algorithm, as it is one of the most important topics for multi-layer feed-forward neural networks.

(For more resources related to this topic, see here.)

The algorithm works by propagating the error back from the last to the first layer, hence the name Backpropagation. It is one of the most difficult algorithms to understand at first, but all that is needed is some knowledge of basic differential calculus and the chain rule. For a deep neural network, the algorithm used to set the weights is called the Backpropagation algorithm.

The Backpropagation algorithm

We have seen how neural networks can map inputs onto determined outputs, depending on fixed weights. Once the architecture of the neural network has been defined (feed-forward, number of hidden layers, number of neurons per layer), and once the activity function for each neuron has been chosen, we need to set the weights that in turn define the internal states for each neuron in the network. We will see how to do that for a 1-layer network and then how to extend it to a deep feed-forward network. For a deep neural network the algorithm to set the weights is called the Backpropagation algorithm, and we will discuss and explain this algorithm for most of this section, as it is one of the most important topics for multi-layer feed-forward neural networks. First, however, we will quickly discuss this for 1-layer neural networks.

The general concept we need to understand is the following: every neural network is an approximation of a function, therefore each neural network will not be equal to the desired function; instead, it will differ by some value. This value is called the error, and the aim is to minimize this error. Since the error is a function of the weights in the neural network, we want to minimize the error with respect to the weights. The error function is a function of many weights and is therefore a function of many variables. Mathematically, the set of points where this function is zero represents a hypersurface, and to find a minimum on this surface we want to pick a point and then follow a curve in the direction of the minimum.

Linear regression

To simplify things, we are going to introduce matrix notation. Let x be the input; we can think of x as a vector. In the case of linear regression we are going to consider a single output neuron y; the set of weights w is therefore a vector of the same dimension as x. The activation value is then defined as the inner product $\langle x, w \rangle$.

Let's say that for each input value x we want to output a target value t, while for each x the neural network will output a value y defined by the activity function chosen; in this case the absolute value of the difference (y - t) represents the difference between the predicted value and the actual value for the specific input example x. If we have m input values $x^i$, each of them will have a target value $t^i$. In this case we calculate the error using the mean squared error

$$J(w) = \frac{1}{m}\sum_{i=1}^{m}\left(y^i - t^i\right)^2$$

where each $y^i$ is a function of w. The error is therefore a function of w and it is usually denoted with J(w). As mentioned above, this represents a hypersurface of dimension equal to the dimension of w (we are implicitly also considering the bias), and for each $w_j$ we need to find a curve that will lead towards the minimum of the surface. The direction in which a curve increases in a certain direction is given by its derivative with respect to that direction, in this case by

$$\frac{\partial J}{\partial w_j}$$

and in order to move towards the minimum we need to move in the opposite direction set by $\frac{\partial J}{\partial w_j}$ for each $w_j$. Let's calculate

$$\frac{\partial J}{\partial w_j} = \frac{1}{m}\sum_{i=1}^{m}\frac{\partial\left(y^i - t^i\right)^2}{\partial w_j}.$$

If $y^i = \langle x^i, w \rangle$, then $\frac{\partial y^i}{\partial w_j} = x_j^i$ and therefore

$$\frac{\partial J}{\partial w_j} = \frac{2}{m}\sum_{i=1}^{m}\left(y^i - t^i\right)x_j^i.$$

The notation can sometimes be confusing, especially the first time one sees it. The input is given by vectors $x^i$, where the superscript indicates the ith example. Since x and w are vectors, the subscript indicates the jth coordinate of the vector. $y^i$ then represents the output of the neural network given the input $x^i$, while $t^i$ represents the target, that is, the desired value corresponding to the input $x^i$.

In order to move towards the minimum, we need to move each weight in the direction of its derivative by a small amount $\lambda$, called the learning rate, typically much smaller than 1 (say 0.1 or smaller). We can therefore drop the 2 in the derivative and incorporate it into the learning rate, to get the update rule given by

$$w_j \rightarrow w_j - \lambda\sum_{i}\left(y^i - t^i\right)x_j^i$$

or, more generally, we can write the update rule in matrix form as

$$w \rightarrow w - \lambda\,\nabla J(w)$$

where $\nabla$ represents the vector of partial derivatives. This process is what is often called gradient descent. One last note: the update can be done after having calculated all the input vectors; however, in some cases, the weights could be updated after each example or after a defined preset number of examples.

Logistic regression

In logistic regression, the output is not continuous; rather, it is defined as a set of classes. In this case, the activation function is not going to be the identity function like before; rather, we are going to use the logistic sigmoid function. The logistic sigmoid function, as we have seen before, outputs a real value in (0,1) and therefore it can be interpreted as a probability function, and that is why it can work so well in a 2-class classification problem. In this case, the target can be one of two classes, and the output represents the probability that it be one of those two classes (say t=1).

Let's denote with $\sigma(a)$, with a the activation value, the logistic sigmoid function; therefore, for each example x, the probability that the output be the class t, given the weights w, is

$$P(t=1 \mid x, w) = \sigma(a), \qquad P(t=0 \mid x, w) = 1 - \sigma(a).$$

We can write that equation more succinctly as

$$P(t \mid x, w) = \sigma(a)^{t}\left(1 - \sigma(a)\right)^{1-t}$$

and, since for each sample $x^i$ the probabilities are independent, the global probability is

$$P(t \mid x, w) = \prod_{i}\sigma\left(a^i\right)^{t^i}\left(1 - \sigma\left(a^i\right)\right)^{1-t^i}.$$

If we take the natural log of the above equation (to turn products into sums), we get

$$\log P(t \mid x, w) = \sum_{i}\left[t^i\log\sigma\left(a^i\right) + \left(1-t^i\right)\log\left(1 - \sigma\left(a^i\right)\right)\right].$$

The objective is now to maximize this log to obtain the highest probability of predicting the correct results. Usually, this is obtained, as in the previous case, by using gradient descent to minimize the cost function defined by $J(w) = -\log P(t \mid x, w)$. As before, we calculate the derivative of the cost function with respect to the weights $w_j$ to obtain

$$\frac{\partial J}{\partial w_j} = \sum_{i}\left(\sigma\left(a^i\right) - t^i\right)x_j^i = \sum_{i}\left(y^i - t^i\right)x_j^i.$$

In general, in the case of a multi-class output t, with t a vector $(t_1, \ldots, t_n)$, we can generalize this equation using $J(w) = -\log P(y \mid x, w) = -\sum_{i,j} t_j^i \log\left(\sigma\left(a_j^i\right)\right)$, which brings us to the update equation for the weights

$$w_j \rightarrow w_j - \lambda\sum_{i}\left(y^i - t^i\right)x_j^i.$$

This is similar to the update rule we have seen for linear regression.

Backpropagation

In the case of 1-layer networks, weight adjustment was easy, as we could use linear or logistic regression and adjust the weights simultaneously to get a smaller error (minimizing the cost function).
For multi-layer neural networks we can use a similar argument for the weights used to connect the last hidden layer to the output layer, as we know what we would like the output layer to be, but we cannot do the same for the hidden layers, as, a priori, we do not know what the values for the neurons in the hidden layers ought to be. What we do, instead, is calculate the error in the last hidden layer and estimate what it would be in the previous layer, propagating the error back from the last to the first layer, hence the name Backpropagation. Backpropagation is one of the most difficult algorithms to understand at first, but all that is needed is some knowledge of basic differential calculus and the chain rule.

Let's introduce some notation first. We denote with J the cost (error), and with y the activity function that is defined on the activation value a (for example, y could be the logistic sigmoid), which is a function of the weights w and the input x. Let's also define $w_{i,j}$ as the weight between the ith input value and the jth output. Here we define input and output more generically than for a 1-layer network: if $w_{i,j}$ connects a pair of successive layers in a feed-forward network, we denote as input the neurons on the first of the two successive layers, and as output the neurons on the second of the two successive layers. In order not to make the notation too heavy, and to avoid having to denote on which layer each neuron is, we assume that the ith input $y_i$ is always in the layer preceding the layer containing the jth output $y_j$. The letter y is used both to denote an input and the activity function, and we can easily infer which one we mean from the context. We also use subscripts i and j, where we always have the element with subscript i belonging to the layer preceding the layer containing the element with subscript j. In this example, layer 1 represents the input, and layer 2 the output.

Using this notation, and the chain rule for derivatives, for the last layer of our neural network we can write

$$\frac{\partial J}{\partial w_{i,j}} = \frac{\partial J}{\partial y_j}\frac{\partial y_j}{\partial w_{i,j}}.$$

Since we know that $\frac{\partial y_j}{\partial w_{i,j}} = \frac{\partial y_j}{\partial a_j}\,y_i$, we have

$$\frac{\partial J}{\partial w_{i,j}} = \frac{\partial J}{\partial y_j}\frac{\partial y_j}{\partial a_j}\,y_i.$$

If y is the logistic sigmoid defined above, we get the same result we have already calculated at the end of the previous section, since we know the cost function and we can calculate all the derivatives. For the previous layers the same formula holds:

$$\frac{\partial J}{\partial w_{i,j}} = \frac{\partial J}{\partial y_j}\frac{\partial y_j}{\partial a_j}\,y_i.$$

Since we know that $a_j = \sum_i w_{i,j}\,y_i$, and we know that $\frac{\partial y_j}{\partial a_j}$ is the derivative of the activity function, which we can calculate, all we need to calculate is the derivative $\frac{\partial J}{\partial y_j}$. Let's notice that this is the derivative of the error with respect to the activity function in the second layer, and, if we can calculate this derivative for the last layer and have a formula that allows us to calculate the derivative for one layer assuming we can calculate the derivative for the next, we can calculate all the derivatives starting from the last layer and move backwards.

Let us notice that, as we defined them, the $y_j$ are the activation values for the neurons in the second layer, but they are also the activity functions, and therefore functions of the activation values in the first layer. Therefore, applying the chain rule, we have

$$\frac{\partial J}{\partial y_i} = \sum_j \frac{\partial J}{\partial y_j}\frac{\partial y_j}{\partial y_i} = \sum_j \frac{\partial J}{\partial y_j}\frac{\partial y_j}{\partial a_j}\,w_{i,j}$$

and once again we can calculate both $\frac{\partial y_j}{\partial a_j}$ and $w_{i,j}$, so once we know $\frac{\partial J}{\partial y_j}$ we can calculate $\frac{\partial J}{\partial y_i}$, and since we can calculate $\frac{\partial J}{\partial y_j}$ for the last layer, we can move backward and calculate $\frac{\partial J}{\partial y_i}$ for any layer and therefore $\frac{\partial J}{\partial w_{i,j}}$ for any layer.
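To make these equations concrete before summarizing them, here is a minimal NumPy sketch of the same backward pass for a network with one hidden layer (again an illustration, not the book's code; the layer sizes, data, learning rate, and the squared-error cost are assumptions made for the example):

```python
import numpy as np

def sigmoid(a):
    # Logistic sigmoid used as the activity function y = sigma(a)
    return 1.0 / (1.0 + np.exp(-a))

# Tiny network: 2 inputs -> 3 hidden neurons -> 1 output (sizes are arbitrary)
rng = np.random.RandomState(0)
W1, b1 = rng.randn(2, 3) * 0.5, np.zeros(3)   # input -> hidden weights and biases
W2, b2 = rng.randn(3, 1) * 0.5, np.zeros(1)   # hidden -> output weights and biases

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
t = np.array([[0.0], [1.0], [1.0], [0.0]])    # toy targets
lr = 0.5                                      # learning rate (lambda in the text)

for epoch in range(10000):
    # Forward pass: activation values a and activity functions y per layer
    a1 = X.dot(W1) + b1;  y1 = sigmoid(a1)    # hidden layer
    a2 = y1.dot(W2) + b2; y2 = sigmoid(a2)    # output layer

    # Backward pass, assuming the squared-error cost J = sum (y2 - t)^2:
    # delta_j = dJ/dy_j * dy_j/da_j, with dy/da = y * (1 - y) for the sigmoid
    delta2 = 2.0 * (y2 - t) * y2 * (1.0 - y2)
    # Propagate back: delta_j = dy_j/da_j * sum_k delta_k * w_{j,k}
    delta1 = y1 * (1.0 - y1) * delta2.dot(W2.T)

    # Updates: w_{i,j} <- w_{i,j} - lr * delta_j * y_i (and likewise for biases)
    W2 -= lr * y1.T.dot(delta2); b2 -= lr * delta2.sum(axis=0)
    W1 -= lr * X.T.dot(delta1);  b1 -= lr * delta1.sum(axis=0)

print(y2.round(2))   # network outputs after training
```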
Summarizing, if we have a sequence of layers where $y_i$ feeds into $y_j$, which feeds into $y_k$, we then have these two fundamental equations, where the summation in the second equation should be read as the sum over all the outgoing connections from $y_j$ to any neuron $y_k$ in the successive layer:

$$\frac{\partial J}{\partial w_{i,j}} = \frac{\partial J}{\partial y_j}\frac{\partial y_j}{\partial a_j}\,y_i$$

$$\frac{\partial J}{\partial y_j} = \sum_k \frac{\partial J}{\partial y_k}\frac{\partial y_k}{\partial a_k}\,w_{j,k}$$

By using these two equations we can calculate the derivatives of the cost with respect to each layer. If we set $\delta_j = \frac{\partial J}{\partial y_j}\frac{\partial y_j}{\partial a_j}$, then $\delta_j$ represents the variation of the cost with respect to the activation value, and we can think of $\delta_j$ as the error at the neuron $y_j$. We can then rewrite the second equation as

$$\frac{\partial J}{\partial y_j} = \sum_k \delta_k\,w_{j,k}$$

which implies that $\delta_j = \frac{\partial y_j}{\partial a_j}\sum_k \delta_k\,w_{j,k}$. These two equations give an alternate way of seeing Backpropagation, as the variation of the cost with respect to the activation value, and provide a formula to calculate this variation for any layer once we know the variation for the following layer:

$$\delta_j = \frac{\partial y_j}{\partial a_j}\sum_k \delta_k\,w_{j,k}$$

We can also combine these equations and show that

$$\frac{\partial J}{\partial w_{i,j}} = \delta_j\,y_i.$$

The Backpropagation algorithm for updating the weights is then given on each layer by

$$w_{i,j} \rightarrow w_{i,j} - \lambda\,\delta_j\,y_i.$$

In the last section we will provide a code example that will help understand and apply these concepts and formulas.

Summary

At the end of this article we learnt about the phase that follows the design of the neural network architecture: the use of the Backpropagation algorithm to set the weights. We saw how we can stack many layers to create and use deep feed-forward neural networks, how a neural network can have many layers, and why inner (hidden) layers are important.

Resources for Article:
Further resources on this subject:
Basics of Jupyter Notebook and Python [article]
Jupyter and Python Scripting [article]
Getting Started with Python Packages [article]

Ionic Components

Packt
08 Jun 2017
16 min read
In this article by Gaurav Saini the authors of the book Hybrid Mobile Development with Ionic, we will learn following topics: Building vPlanet Commerce Ionic 2 components (For more resources related to this topic, see here.) Building vPlanet Commerce The vPlanet Commerce app is an e-commerce app which will demonstrate various Ionic components integrated inside the application and also some third party components build by the community. Let’s start by creating the application from scratch using sidemenu template: You now have the basic application ready based on sidemenu template, next immediate step I took if to take reference from ionic-conference-app for building initial components of the application such aswalkthrough. Let’s create a walkthrough component via CLI generate command: $ ionic g page walkthrough As, we get started with the walkthrough component we need to add logic to show walkthrough component only the first time when user installs the application: // src/app/app.component.ts // Check if the user has already seen the walkthrough this.storage.get('hasSeenWalkThrough').then((hasSeenWalkThrough) => { if (hasSeenWalkThrough) { this.rootPage = HomePage; } else { this.rootPage = WalkThroughPage; } this.platformReady(); }) So, we store a boolean value while checking if user has seen walkthrough first time or not. Another important thing we did create Events for login and logout, so that when user logs into the application and we can update Menu items accordingly or any other data manipulation to be done: // src/app/app.component.ts export interface PageInterface { title: string; component: any; icon: string; logsOut?: boolean; index?: number; tabComponent?: any; } export class vPlanetApp { loggedInPages: PageInterface[] = [ { title: 'account', component: AccountPage, icon: 'person' }, { title: 'logout', component: HomePage, icon: 'log-out', logsOut: true } ]; loggedOutPages: PageInterface[] = [ { title: 'login', component: LoginPage, icon: 'log-in' }, { title: 'signup', component: SignupPage, icon: 'person-add' } ]; listenToLoginEvents() { this.events.subscribe('user:login', () => { this.enableMenu(true); }); this.events.subscribe('user:logout', () => { this.enableMenu(false); }); } enableMenu(loggedIn: boolean) { this.menu.enable(loggedIn, 'loggedInMenu'); this.menu.enable(!loggedIn, 'loggedOutMenu'); } // For changing color of Active Menu isActive(page: PageInterface) { if (this.nav.getActive() && this.nav.getActive().component === page.component) { return 'primary'; } return; } } Next we have inside our app.html we have multiple <ion-menu> items depending upon whether user is loggedin or logout: // src/app/app.html<!-- logged out menu --> <ion-menu id="loggedOutMenu" [content]="content"> <ion-header> <ion-toolbar> <ion-title>{{'menu' | translate}}</ion-title> </ion-toolbar> </ion-header> <ion-content class="outer-content"> <ion-list> <ion-list-header> {{'navigate' | translate}} </ion-list-header> <button ion-item menuClose *ngFor="let p of appPages" (click)="openPage(p)"> <ion-icon item-left [name]="p.icon" [color]="isActive(p)"></ion-icon> {{ p.title | translate }} </button> </ion-list> <ion-list> <ion-list-header> {{'account' | translate}} </ion-list-header> <button ion-item menuClose *ngFor="let p of loggedOutPages" (click)="openPage(p)"> <ion-icon item-left [name]="p.icon" [color]="isActive(p)"></ion-icon> {{ p.title | translate }} </button> <button ion-item menuClose *ngFor="let p of otherPages" (click)="openPage(p)"> <ion-icon item-left [name]="p.icon" 
[color]="isActive(p)"></ion-icon> {{ p.title | translate }} </button> </ion-list> </ion-content> </ion-menu> <!-- logged in menu --> <ion-menu id="loggedInMenu" [content]="content"> <ion-header> <ion-toolbar> <ion-title>Menu</ion-title> </ion-toolbar> </ion-header> <ion-content class="outer-content"> <ion-list> <ion-list-header> {{'navigate' | translate}} </ion-list-header> <button ion-item menuClose *ngFor="let p of appPages" (click)="openPage(p)"> <ion-icon item-left [name]="p.icon" [color]="isActive(p)"></ion-icon> {{ p.title | translate }} </button> </ion-list> <ion-list> <ion-list-header> {{'account' | translate}} </ion-list-header> <button ion-item menuClose *ngFor="let p of loggedInPages" (click)="openPage(p)"> <ion-icon item-left [name]="p.icon" [color]="isActive(p)"></ion-icon> {{ p.title | translate }} </button> <button ion-item menuClose *ngFor="let p of otherPages" (click)="openPage(p)"> <ion-icon item-left [name]="p.icon" [color]="isActive(p)"></ion-icon> {{ p.title | translate }} </button> </ion-list> </ion-content> </ion-menu> As, our app start mainly from app.html so we declare rootPage here: <!-- main navigation --> <ion-nav [root]="rootPage" #content swipeBackEnabled="false"></ion-nav> Let’s now look into what all pages, services, and filter we will be having inside our app. Rather than mentioning it as a bullet list, the best way to know this is going through app.module.ts file which has all the declarations, imports, entryComponents and providers. // src/app/app.modules.ts import { NgModule, ErrorHandler } from '@angular/core'; import { IonicApp, IonicModule, IonicErrorHandler } from 'ionic-angular'; import { TranslateModule, TranslateLoader, TranslateStaticLoader } from 'ng2-translate/ng2-translate'; import { Http } from '@angular/http'; import { CloudSettings, CloudModule } from '@ionic/cloud-angular'; import { Storage } from '@ionic/storage'; import { vPlanetApp } from './app.component'; import { AboutPage } from '../pages/about/about'; import { PopoverPage } from '../pages/popover/popover'; import { AccountPage } from '../pages/account/account'; import { LoginPage } from '../pages/login/login'; import { SignupPage } from '../pages/signup/signup'; import { WalkThroughPage } from '../pages/walkthrough/walkthrough'; import { HomePage } from '../pages/home/home'; import { CategoriesPage } from '../pages/categories/categories'; import { ProductsPage } from '../pages/products/products'; import { ProductDetailPage } from '../pages/product-detail/product-detail'; import { WishlistPage } from '../pages/wishlist/wishlist'; import { ShowcartPage } from '../pages/showcart/showcart'; import { CheckoutPage } from '../pages/checkout/checkout'; import { ProductsFilterPage } from '../pages/products-filter/products-filter'; import { SupportPage } from '../pages/support/support'; import { SettingsPage } from '../pages/settings/settings'; import { SearchPage } from '../pages/search/search'; import { UserService } from '../providers/user-service'; import { DataService } from '../providers/data-service'; import { OrdinalPipe } from '../filters/ordinal'; // 3rd party modules import { Ionic2RatingModule } from 'ionic2-rating'; export function createTranslateLoader(http: Http) { return new TranslateStaticLoader(http, './assets/i18n', '.json'); } // Configure database priority export function provideStorage() { return new Storage(['sqlite', 'indexeddb', 'localstorage'], { name: 'vplanet' }) } const cloudSettings: CloudSettings = { 'core': { 'app_id': 'f8fec798' } }; @NgModule({ declarations: 
Let's now look at all the pages, services, and filters we will have inside our app. Rather than mentioning them as a bullet list, the best way to know this is to go through the app.module.ts file, which has all the declarations, imports, entryComponents, and providers:

// src/app/app.module.ts
import { NgModule, ErrorHandler } from '@angular/core';
import { IonicApp, IonicModule, IonicErrorHandler } from 'ionic-angular';
import { TranslateModule, TranslateLoader, TranslateStaticLoader } from 'ng2-translate/ng2-translate';
import { Http } from '@angular/http';
import { CloudSettings, CloudModule } from '@ionic/cloud-angular';
import { Storage } from '@ionic/storage';

import { vPlanetApp } from './app.component';
import { AboutPage } from '../pages/about/about';
import { PopoverPage } from '../pages/popover/popover';
import { AccountPage } from '../pages/account/account';
import { LoginPage } from '../pages/login/login';
import { SignupPage } from '../pages/signup/signup';
import { WalkThroughPage } from '../pages/walkthrough/walkthrough';
import { HomePage } from '../pages/home/home';
import { CategoriesPage } from '../pages/categories/categories';
import { ProductsPage } from '../pages/products/products';
import { ProductDetailPage } from '../pages/product-detail/product-detail';
import { WishlistPage } from '../pages/wishlist/wishlist';
import { ShowcartPage } from '../pages/showcart/showcart';
import { CheckoutPage } from '../pages/checkout/checkout';
import { ProductsFilterPage } from '../pages/products-filter/products-filter';
import { SupportPage } from '../pages/support/support';
import { SettingsPage } from '../pages/settings/settings';
import { SearchPage } from '../pages/search/search';

import { UserService } from '../providers/user-service';
import { DataService } from '../providers/data-service';

import { OrdinalPipe } from '../filters/ordinal';

// 3rd party modules
import { Ionic2RatingModule } from 'ionic2-rating';

export function createTranslateLoader(http: Http) {
  return new TranslateStaticLoader(http, './assets/i18n', '.json');
}

// Configure database priority
export function provideStorage() {
  return new Storage(['sqlite', 'indexeddb', 'localstorage'], { name: 'vplanet' })
}

const cloudSettings: CloudSettings = {
  'core': {
    'app_id': 'f8fec798'
  }
};

@NgModule({
  declarations: [
    vPlanetApp,
    AboutPage,
    AccountPage,
    LoginPage,
    PopoverPage,
    SignupPage,
    WalkThroughPage,
    HomePage,
    CategoriesPage,
    ProductsPage,
    ProductsFilterPage,
    ProductDetailPage,
    SearchPage,
    WishlistPage,
    ShowcartPage,
    CheckoutPage,
    SettingsPage,
    SupportPage,
    OrdinalPipe,
  ],
  imports: [
    IonicModule.forRoot(vPlanetApp),
    Ionic2RatingModule,
    TranslateModule.forRoot({
      provide: TranslateLoader,
      useFactory: createTranslateLoader,
      deps: [Http]
    }),
    CloudModule.forRoot(cloudSettings)
  ],
  bootstrap: [IonicApp],
  entryComponents: [
    vPlanetApp,
    AboutPage,
    AccountPage,
    LoginPage,
    PopoverPage,
    SignupPage,
    WalkThroughPage,
    HomePage,
    CategoriesPage,
    ProductsPage,
    ProductsFilterPage,
    ProductDetailPage,
    SearchPage,
    WishlistPage,
    ShowcartPage,
    CheckoutPage,
    SettingsPage,
    SupportPage
  ],
  providers: [
    { provide: ErrorHandler, useClass: IonicErrorHandler },
    { provide: Storage, useFactory: provideStorage },
    UserService,
    DataService
  ]
})
export class AppModule {}

Ionic components

There are many Ionic JavaScript components which we can effectively use while building our application. It's best to look around for the features we will need in our application. Let's get started with the Home page of our e-commerce application, which will have an image slider with banners on it.

Slides

The Slides component is a multi-section container which can be used in multiple scenarios, such as a tutorial view or a banner slider. The <ion-slides> component holds multiple <ion-slide> elements which can be dragged or swiped left/right. Slides have multiple configuration options which can be passed to ion-slides, such as autoplay, pager, direction (vertical/horizontal), initialSlide, and speed. Using slides is really simple, as we just have to include them inside our home.html; no dependency is required in the home.ts file:

<ion-slides pager #adSlider (ionSlideDidChange)="logLength()" style="height: 250px">
  <ion-slide *ngFor="let banner of banners">
    <img [src]="banner">
  </ion-slide>
</ion-slides>

// Defining banner image paths
export class HomePage {
  products: any;
  banners: String[];

  constructor() {
    this.banners = [
      'assets/img/banner-1.webp',
      'assets/img/banner-2.webp',
      'assets/img/banner-3.webp'
    ]
  }
}
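The slides template above references a local variable #adSlider and an (ionSlideDidChange) handler that are not part of the HomePage constructor shown. The following is a minimal sketch of how they could be wired up with @ViewChild; the body of logLength() is an assumption for illustration:

// src/pages/home/home.ts (sketch; handler body is assumed)
import { ViewChild } from '@angular/core';
import { Slides } from 'ionic-angular';

export class HomePage {
  @ViewChild('adSlider') adSlider: Slides;
  // ... other members from the earlier HomePage snippet omitted

  // Called on every (ionSlideDidChange) event
  logLength() {
    console.log('Banner ' + (this.adSlider.getActiveIndex() + 1) + ' of ' + this.adSlider.length());
  }
}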
Lists

Lists are one of the most used components in many applications. Inside lists we can display rows of information. We will be using lists multiple times inside our application, such as on the categories page, where we show multiple sub-categories:

// src/pages/categories/categories.html
<ion-content class="categories">
  <ion-list-header *ngIf="!categoryList">Fetching Categories ....</ion-list-header>

  <ion-list *ngFor="let cat of categoryList">
    <ion-list-header>{{cat.name}}</ion-list-header>

    <ion-item *ngFor="let subCat of cat.child">
      <ion-avatar item-left>
        <img [src]="subCat.image">
      </ion-avatar>
      <h2>{{subCat.name}}</h2>
      <p>{{subCat.description}}</p>
      <button ion-button clear item-right (click)="goToProducts(subCat.id)">View</button>
    </ion-item>
  </ion-list>
</ion-content>
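Each sub-category row wires its View button to goToProducts(subCat.id), whose implementation is not shown above. A minimal sketch, assuming the ProductsPage reads the category id via NavParams, could look like this:

// src/pages/categories/categories.ts (sketch; assumed implementation)
import { NavController } from 'ionic-angular';
import { ProductsPage } from '../products/products';

export class CategoriesPage {
  constructor(public navCtrl: NavController) { }

  goToProducts(categoryId: number) {
    // Push the products page and pass the selected category id along
    this.navCtrl.push(ProductsPage, { categoryId: categoryId });
  }
}

// src/pages/products/products.ts (sketch)
// constructor(public navParams: NavParams) {
//   this.categoryId = this.navParams.get('categoryId');
// }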
'light' : 'primary'"> <ion-segment-button value="profile"> Profile </ion-segment-button> <ion-segment-button value="orders"> Orders </ion-segment-button> <ion-segment-button value="wallet"> Wallet </ion-segment-button> </ion-segment> </ion-toolbar> </ion-header> <ion-content class="outer-content"> <div [ngSwitch]="account"> <div padding-top text-center *ngSwitchCase="'profile'" > <img src="http://www.gravatar.com/avatar?d=mm&s=140"> <h2>{{username}}</h2> <ion-list inset> <button ion-item (click)="updatePicture()">Update Picture</button> <button ion-item (click)="changePassword()">Change Password</button> <button ion-item (click)="logout()">Logout</button> </ion-list> </div> <div padding-top text-center *ngSwitchCase="'orders'" > // Order List data to be shown here </div> <div padding-top text-center *ngSwitchCase="'wallet'"> // Wallet statement and transaction here. </div> </div> </ion-content> This is how we define a segment in Ionic, we don’t need to define anything inside the typescript file for this component. On the other hand with tabs we have to assign a component for  each tab and also can access its methods via Tab instance. Just to mention,  we haven’t used tabs inside our e-commerce application as we are using side menu. One good example will be to look in ionic-conference-app (https://github.com/driftyco/ionic-conference-app) you will find sidemenu and tabs both in single application: / // We currently don’t have Tabs component inside our e-commerce application // Below is sample code about how we can integrate it. <ion-tabs #showTabs tabsPlacement="top" tabsLayout="icon-top" color="primary"> <ion-tab [root]="Home"></ion-tab> <ion-tab [root]="Wishlist"></ion-tab> <ion-tab [root]="Cart"></ion-tab> </ion-tabs> import { HomePage } from '../pages/home/home'; import { WishlistPage } from '../pages/wishlist/wishlist'; import { ShowcartPage } from '../pages/showcart/showcart'; export class TabsPage { @ViewChild('showTabs') tabRef: Tabs; // this tells the tabs component which Pages // should be each tab's root Page Home = HomePage; Wishlist = WishlistPage; Cart = ShowcartPage; constructor() { } // We can access multiple methods via Tabs instance // select(TabOrIndex), previousTab(trimHistory), getByIndex(index) // Here we will console the currently selected Tab. ionViewDidEnter() { console.log(this.tabRef.getSelected()); } } Properties can be checked in the documentation (https://ionicframework.com/docs/v2/api/components/tabs/Tabs/) as, there are many properties available for tabs, like mode, color, tabsPlacement and tabsLayout. Similarly we can configure some tabs properties at Config level also, you will find here what all properties you can configure globally or for specific platform. (https://ionicframework.com/docs/v2/api/config/Config/). Alerts Alerts are the components provided in Ionic for showing trigger alert, confirm, prompts or some specific actions. AlertController can be imported from ionic-angular which allow us to programmatically create and show alerts inside the application. One thing to note here is these are JavaScript pop-up not the native platform pop-up. There is a Cordova plugin cordova-plugin-dialogs (https://ionicframework.com/docs/v2/native/dialogs/) which you can use if native dialog UI elements are required. 
Alerts

Alerts are the components Ionic provides for showing alerts, confirmations, prompts, or some specific actions. AlertController can be imported from ionic-angular, which allows us to programmatically create and show alerts inside the application. One thing to note here is that these are JavaScript pop-ups, not native platform pop-ups. There is a Cordova plugin, cordova-plugin-dialogs (https://ionicframework.com/docs/v2/native/dialogs/), which you can use if native dialog UI elements are required.

Currently, there are five types of alerts we can show in an Ionic app: basic alerts, prompt alerts, confirmation alerts, radio alerts, and checkbox alerts:

// A radio alert inside src/pages/products/products.html for sorting products
<ion-buttons>
  <button ion-button full clear (click)="sortBy()">
    <ion-icon name="menu"></ion-icon>Sort
  </button>
</ion-buttons>

// On click we call the sortBy method
// src/pages/products/products.ts
import { NavController, PopoverController, ModalController, AlertController } from 'ionic-angular';

export class ProductsPage {
  constructor(public alertCtrl: AlertController) { }

  sortBy() {
    let alert = this.alertCtrl.create();
    alert.setTitle('Sort Options');

    alert.addInput({
      type: 'radio',
      label: 'Relevance',
      value: 'relevance',
      checked: true
    });
    alert.addInput({
      type: 'radio',
      label: 'Popularity',
      value: 'popular'
    });
    alert.addInput({
      type: 'radio',
      label: 'Low to High',
      value: 'lth'
    });
    alert.addInput({
      type: 'radio',
      label: 'High to Low',
      value: 'htl'
    });
    alert.addInput({
      type: 'radio',
      label: 'Newest First',
      value: 'newest'
    });

    alert.addButton('Cancel');
    alert.addButton({
      text: 'OK',
      handler: data => {
        console.log(data);
        // Here we can call server APIs with the sorting
        // value which the user applied.
      }
    });

    alert.present().then(() => {
      // Here we place any function that needs
      // to be called as soon as the alert is opened.
    });
  }
}

The radio alert above has Cancel and OK buttons; we have used it here for sorting the products according to relevance, price, or other sorting values. We can also prepare custom alerts, where we can mention multiple options. Just as the previous example has five radio options, we can even add a text input box for taking some input and submitting it. Other than this, while creating alerts, remember that there are alert options, input options, and button options for all the alerts present in the AlertController component (https://ionicframework.com/docs/v2/api/components/alert/AlertController/).

Some alert options:

title: string: Title of the alert.
subTitle: string (optional): Subtitle of the alert.
message: string: Message for the alert.
cssClass: string: Custom CSS class name.
inputs: array: Set of inputs for the alert.
buttons: array (optional): Array of buttons.
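For comparison, a simpler confirmation alert only needs a title, a message, and buttons passed directly to create(). The snippet below is a sketch of an assumed use case (confirming removal of a wishlist item), not code from the vPlanet Commerce app:

// Sketch of a confirmation alert (assumed use case)
confirmRemove(item: any) {
  let confirm = this.alertCtrl.create({
    title: 'Remove item?',
    message: 'Do you want to remove ' + item.productName + ' from your wishlist?',
    buttons: [
      { text: 'Cancel', role: 'cancel' },
      {
        text: 'Remove',
        handler: () => {
          // Remove the item from the wishlist here
        }
      }
    ]
  });
  confirm.present();
}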
Cards and badges

Cards are one of the most important components, used very often in mobile and web applications. The reason cards are so popular is that they are a great way to organize information and give users access to a good amount of information even on smaller screens. Cards are really flexible and responsive; for all these reasons they have been adopted very quickly by developers and companies. We will also be using cards inside our application, on the home page itself, for showing popular products. Let's see the different types of cards Ionic provides in its library:

Basic cards
Cards with header and footer
Card lists
Card images
Background cards
Social and map cards

Social and map cards are advanced cards built with custom CSS; we can develop similar advanced cards too.

// src/pages/home/home.html
<ion-card>
  <img [src]="prdt.imageUrl"/>
  <ion-card-content>
    <ion-card-title no-padding>
      {{prdt.productName}}
    </ion-card-title>
    <ion-row no-padding class="center">
      <ion-col>
        <b>{{prdt.price | currency }} &nbsp;</b>
        <span class="discount">{{prdt.listPrice | currency}}</span>
      </ion-col>
    </ion-row>
  </ion-card-content>
</ion-card>

Here we have used an image card with an image on top; below it we have favorite and view button icons. Similarly, we can use different types of cards wherever required. At the same time, we can customize our cards and mix two types of card using their specific CSS classes or elements.

Badges are small components used to show small pieces of information, for example the number of items in the cart above the cart icon. We have used a badge in our e-commerce application for showing the rating of a product:

<ion-badge width="25">4.1</ion-badge>

Summary

In this article, we learned about building the vPlanet Commerce app and about Ionic components.

Resources for Article:

Further resources on this subject:

Lync 2013 Hybrid and Lync Online [article]
Optimizing JavaScript for iOS Hybrid Apps [article]
Creating Mobile Dashboards [article]