
Welcome to the Spring Framework

Packt
30 Apr 2015
17 min read
In this article by Ravi Kant Soni, author of the book Learning Spring Application Development, you will be closely acquainted with the Spring Framework. Spring is an open source framework created by Rod Johnson to address the complexity of enterprise application development. Spring has long been the de facto standard for Java enterprise software development. The framework was designed with developer productivity in mind, and this makes it easier to work with the existing Java and JEE APIs. Using Spring, we can develop standalone applications, desktop applications, two-tier applications, web applications, distributed applications, enterprise applications, and so on.

(For more resources related to this topic, see here.)

Features of the Spring Framework

Lightweight: Spring is described as a lightweight framework when it comes to size and transparency. Lightweight frameworks reduce complexity in application code and also avoid unnecessary complexity in their own functioning.
Non-intrusive: Non-intrusive means that your domain logic code has no dependencies on the framework itself. Spring is designed to be non-intrusive.
Container: Spring's container is a lightweight container, which contains and manages the life cycle and configuration of application objects.
Inversion of Control (IoC): Inversion of Control is an architectural pattern. It describes how Dependency Injection is performed by external entities instead of each component creating its dependencies itself.
Aspect-oriented programming (AOP): Aspect-oriented programming refers to the programming paradigm that isolates supporting functions from the main program's business logic. It allows developers to build the core functionality of a system without making it aware of the secondary requirements of this system.
JDBC exception handling: The JDBC abstraction layer of the Spring Framework offers an exception hierarchy that simplifies the error handling strategy.
Spring MVC Framework: Spring comes with an MVC web application framework to build robust and maintainable web applications.
Spring Security: Spring Security offers a declarative security mechanism for Spring-based applications, which is a critical aspect of many applications.

ApplicationContext

ApplicationContext is defined by the org.springframework.context.ApplicationContext interface. BeanFactory provides basic functionality, while ApplicationContext provides advanced features to our Spring applications, which make them enterprise-level applications. Create an ApplicationContext by using the ClassPathXmlApplicationContext framework API. This API loads the beans configuration file and takes care of creating and initializing all the beans mentioned in the configuration file:

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class MainApp {
  public static void main(String[] args) {
    ApplicationContext context =
      new ClassPathXmlApplicationContext("beans.xml");
    HelloWorld helloWorld =
      (HelloWorld) context.getBean("helloworld");
    helloWorld.getMessage();
  }
}

Autowiring modes

There are five modes of autowiring that can be used to instruct the Spring container to use autowiring for Dependency Injection. You use the autowire attribute of the <bean/> element to specify the autowire mode for a bean definition.
The following table explains the different modes of autowire:

no: By default, Spring bean autowiring is turned off, meaning no autowiring is performed. You should use the explicit bean reference called ref for wiring purposes.
byName: This autowires by the property name. If a bean property has the same name as another bean, it is autowired. The setter method is used for this type of autowiring to inject the dependency.
byType: The data type is used for this type of autowiring. If the data type of a bean property is compatible with the data type of another bean, it is autowired. Only one bean should be configured for this type in the configuration file; otherwise, a fatal exception will be thrown.
constructor: This is similar to byType autowiring, but here a constructor is used to inject the dependencies.
autodetect: Spring first tries to autowire by constructor; if this does not work, it then tries to autowire by type. This option is deprecated.

Stereotype annotations

Generally, @Component, the parent stereotype annotation, can define all beans. The following table explains the different stereotype annotations:

@Component (Type): This is a generic stereotype annotation for any Spring-managed component.
@Service (Type): This stereotypes a component as a service and is used when defining a class that handles the business logic.
@Controller (Type): This stereotypes a component as a Spring MVC controller. It is used when defining a controller class, which belongs to the presentation layer and is available only in Spring MVC.
@Repository (Type): This stereotypes a component as a repository and is used when defining a class that handles the data access logic and provides translation of exceptions raised at the persistence layer.

Annotation-based container configuration

For a Spring IoC container to recognize annotations, the following definition must be added to the configuration file:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:context="http://www.springframework.org/schema/context"
  xsi:schemaLocation="http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans.xsd
    http://www.springframework.org/schema/context
    http://www.springframework.org/schema/context/spring-context-3.2.xsd">

  <context:annotation-config />

</beans>

Aspect-oriented programming (AOP) support in Spring

AOP is used in Spring to provide declarative enterprise services, especially as a replacement for EJB declarative services. Application objects do what they're supposed to do, that is, perform business logic, and nothing more. They are not responsible for (or even aware of) other system concerns, such as logging, security, auditing, locking, and event handling. AOP is a methodology of applying middleware services, such as security services, transaction management services, and so on, to the Spring application.

Declaring an aspect

An aspect can be declared by annotating a POJO class with the @Aspect annotation, which requires importing org.aspectj.lang.annotation.Aspect. The following code snippet represents the aspect declaration in the @AspectJ form:

import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Aspect
@Component("myAspect")
public class AspectModule {
  // ...
}

JDBC with the Spring Framework

The DriverManagerDataSource class is used to configure the DataSource for the application, which is defined in the Spring.xml configuration file.
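As a rough illustration, such a DataSource bean definition might look like the following sketch; the driver class, URL, and credentials shown here are placeholder values, not configuration taken from the book:

<!-- Minimal DriverManagerDataSource configuration (illustrative values) -->
<bean id="dataSource"
      class="org.springframework.jdbc.datasource.DriverManagerDataSource">
  <property name="driverClassName" value="com.mysql.jdbc.Driver" />
  <property name="url" value="jdbc:mysql://localhost:3306/mydb" />
  <property name="username" value="user" />
  <property name="password" value="secret" />
</bean>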
The central class of Spring JDBC's abstraction framework is the JdbcTemplate class, which includes the most common logic in using the JDBC API to access data (such as handling the creation of connections, creation of statements, execution of statements, and release of resources). The JdbcTemplate class resides in the org.springframework.jdbc.core package. JdbcTemplate can be used to execute different types of SQL statements. DML is an abbreviation of data manipulation language and is used to retrieve, modify, insert, update, and delete data in a database; examples of DML are SELECT, INSERT, and UPDATE statements. DDL is an abbreviation of data definition language and is used to create or modify the structure of database objects in a database; examples of DDL are CREATE, ALTER, and DROP statements.

The JDBC batch operation in Spring

The JDBC batch operation allows you to submit multiple SQL statements to be processed at once. Submitting multiple SQL statements together instead of one by one improves the performance:

JDBC with batch processing

Hibernate with the Spring Framework

Data persistence is the ability of an object to save its state so that it can regain the same state later. Hibernate is one of the ORM libraries available to the open source community. It is a major option available to Java developers, with features such as a POJO-based approach and support for relationship definitions. The object query language used by Hibernate is called Hibernate Query Language (HQL). HQL is an SQL-like textual query language working at a class level or a field level. Let's start learning the architecture of Hibernate.

Hibernate annotations are a powerful way to provide the metadata for the object and relational table mapping. Hibernate provides an implementation of the Java Persistence API, so we can use JPA annotations with model beans, and Hibernate will take care of configuring them for use in CRUD operations. The following table explains the JPA annotations:

@Entity: The javax.persistence.Entity annotation is used to mark a class as an entity bean that can be persisted by Hibernate, as Hibernate provides the JPA implementation.
@Table: The javax.persistence.Table annotation is used to define table mapping and unique constraints for various columns. The @Table annotation provides four attributes, which allow you to override the name of the table, its catalog, and its schema, and to enforce unique constraints on columns in the table. For now, we will just use the table name Employee.
@Id: Each entity bean will have a primary key, which you annotate on the class with the @Id annotation. The javax.persistence.Id annotation is used to define the primary key for the table. By default, the @Id annotation will automatically determine the most appropriate primary key generation strategy to be used.
@GeneratedValue: javax.persistence.GeneratedValue is used to define the field that will be autogenerated. It takes two parameters, strategy and generator. The GenerationType.IDENTITY strategy is used so that the generated id value is mapped to the bean and can be retrieved in the Java program.
@Column: javax.persistence.Column is used to map the field to the table column. We can also specify the length, nullability, and uniqueness of the bean properties.
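To see how these annotations fit together, here is a minimal sketch of an entity mapped to the Employee table; the field names and column length are illustrative assumptions, not code from the book:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

// Illustrative entity mapped to the Employee table
@Entity
@Table(name = "Employee")
public class Employee {

  @Id
  @GeneratedValue(strategy = GenerationType.IDENTITY)
  private int id;

  // Maps the field to a column and constrains its length
  @Column(name = "name", length = 50, nullable = false)
  private String name;

  public int getId() { return id; }
  public String getName() { return name; }
  public void setName(String name) { this.name = name; }
}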
Object-relational mapping (ORM, O/RM, and O/R mapping)

ORM stands for Object-relational Mapping. ORM is the process of persisting objects in a relational database such as an RDBMS. ORM bridges the gap between the object and relational schemas, allowing an object-oriented application to persist objects directly without the need to convert objects to and from a relational format.

Hibernate Query Language (HQL)

Hibernate Query Language (HQL) is an object-oriented query language that works on persistent objects and their properties instead of operating on tables and columns. To use HQL, we need to use a query object. The Query interface is an object-oriented representation of HQL. The Query interface provides many methods; let's take a look at a few of them:

public int executeUpdate(): This is used to execute an update or delete query.
public List list(): This returns the result of the relation as a list.
public Query setFirstResult(int rowno): This specifies the row number from which records will be retrieved.
public Query setMaxResults(int rowno): This specifies the number of records to be retrieved from the relation (table).
public Query setParameter(int position, Object value): This sets the value of a JDBC-style query parameter.
public Query setParameter(String name, Object value): This sets the value of a named query parameter.
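The following sketch shows how a few of these methods might be combined; it assumes an open Hibernate Session supplied by the caller and the Employee entity sketched earlier, and is an illustration rather than code from the book:

import java.util.List;
import org.hibernate.Query;
import org.hibernate.Session;

public class EmployeeQueryExample {
  // Illustrative HQL query using a named parameter and a result limit
  public List findByName(Session session, String name) {
    Query query = session.createQuery(
        "from Employee e where e.name = :name");
    query.setParameter("name", name);   // bind the named parameter
    query.setMaxResults(10);            // retrieve at most 10 records
    return query.list();
  }
}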
The Spring Web MVC Framework

The Spring Framework supports web application development by providing comprehensive and intensive support. The Spring MVC framework is a robust, flexible, and well-designed framework used to develop web applications. It is designed in such a way that the development of a web application is highly configurable across Model, View, and Controller. In the MVC design pattern, the Model represents the data of a web application, the View represents the UI, that is, the user interface components, such as checkboxes, textboxes, and so on, that are used to display web pages, and the Controller processes the user request. The Spring MVC framework supports the integration of other frameworks, such as Struts and WebWork, in a Spring application. It also helps in integrating other view technologies, such as JavaServer Pages (JSP), Velocity, Tiles, and FreeMarker, in a Spring application.

The Spring MVC Framework is designed around a DispatcherServlet. The DispatcherServlet dispatches the HTTP request to a handler, which is based on a very simple controller interface. The Spring MVC Framework provides the following web support features:

Powerful configuration of framework and application classes: The Spring MVC Framework provides a powerful and straightforward configuration of framework and application classes (such as JavaBeans).
Easier testing: Most of the Spring classes are designed as JavaBeans, which enable you to inject test data using the setter methods of these JavaBeans classes. The Spring MVC framework also provides classes to handle Hyper Text Transfer Protocol (HTTP) requests (HttpServletRequest), which makes unit testing of the web application much simpler.
Separation of roles: Each component of the Spring MVC Framework performs a different role during request handling. A request is handled by components such as the controller, validator, model object, view resolver, and the HandlerMapping interface. The whole task is distributed across these components, which provides a clear separation of roles.
No need for duplication of code: In the Spring MVC Framework, we can use the existing business code in any component of the Spring MVC application. Therefore, no duplication of code arises in a Spring MVC application.
Specific validation and binding: Validation errors are displayed when any mismatched data is entered in a form.

DispatcherServlet in Spring MVC

The DispatcherServlet of the Spring MVC Framework is an implementation of the front controller pattern and is a Java Servlet component for Spring MVC applications. DispatcherServlet is a front controller class that receives all incoming HTTP client requests for the Spring MVC application. DispatcherServlet is also responsible for initializing the framework components that will be used to process the request at various stages. The following code snippet declares the DispatcherServlet in the web.xml deployment descriptor:

<servlet>
  <servlet-name>SpringDispatcher</servlet-name>
  <servlet-class>
    org.springframework.web.servlet.DispatcherServlet
  </servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>

<servlet-mapping>
  <servlet-name>SpringDispatcher</servlet-name>
  <url-pattern>/</url-pattern>
</servlet-mapping>

In the preceding code snippet, the user-defined name of the DispatcherServlet is SpringDispatcher, which is enclosed in the <servlet-name> element. When our newly created SpringDispatcher is loaded in a web application, it loads an application context from an XML file. DispatcherServlet will try to load the application context from a file named SpringDispatcher-servlet.xml, which will be located in the application's WEB-INF directory:

<beans xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:context="http://www.springframework.org/schema/context"
  xmlns:mvc="http://www.springframework.org/schema/mvc"
  xsi:schemaLocation="http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
    http://www.springframework.org/schema/context
    http://www.springframework.org/schema/context/spring-context-3.0.xsd
    http://www.springframework.org/schema/mvc
    http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd">

  <mvc:annotation-driven />

  <context:component-scan base-package="org.packt.Spring.chapter7.springmvc" />

  <bean class="org.springframework.web.servlet.view.InternalResourceViewResolver">
    <property name="prefix" value="/WEB-INF/views/" />
    <property name="suffix" value=".jsp" />
  </bean>

</beans>

Spring Security

The Spring Security framework is the de facto standard for securing Spring-based applications. It provides security services for enterprise Java software applications by handling authentication and authorization, both at the web request level and at the method invocation level. The two major operations provided by Spring Security are as follows:

Authentication: Authentication is the process of assuring that a user is the one who he/she claims to be. It's a combination of identification and verification. The identification process can be performed in a number of different ways, for example, with a username and password that can be stored in a database, LDAP, or CAS (a single sign-on protocol), and so on. Spring Security provides a password encoder interface to make sure that the user's password is hashed.
Authorization: Authorization provides access control to an authenticated user. It's the process of assuring that the authenticated user is allowed to access only those resources that he/she is authorized to use. Consider an HR payroll application, where some parts of the application are accessible only to HR and other parts are accessible to all employees. The access rights given to the users of the system determine the access rules. In a web-based application, this is often done with URL-based security, implemented using filters that play a primary role in securing the Spring web application.
Sometimes, URL-based security is not enough in a web application because URLs can be manipulated and can have relative paths. So, Spring Security also provides method-level security; an authorized user will only be able to invoke those methods that he/she has been granted access to.

Securing a web application's URL access

HttpServletRequest is the starting point of a Java web application. To configure web security, you need to set up a filter that provides the various security features. In order to enable Spring Security, add the filter and its mapping in the web.xml file:

<!-- Spring Security -->
<filter>
  <filter-name>springSecurityFilterChain</filter-name>
  <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
</filter>

<filter-mapping>
  <filter-name>springSecurityFilterChain</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

Logging in to a web application

Spring Security supports multiple ways for users to log in to a web application:

HTTP basic authentication: This is supported by Spring Security by processing the basic credentials presented in the header of the HTTP request. It's generally used with stateless clients, which pass their credentials on each request.
Form-based login service: Spring Security supports the form-based login service by providing a default login form page for users to log in to the web application.
Logout service: Spring Security supports logout services that allow users to log out of the application.
Anonymous login: This service, provided by Spring Security, grants authorities to an anonymous user, just as it would to a normal user.
Remember-me support: This is also supported by Spring Security and remembers the identity of a user across multiple browser sessions.

Encrypting passwords

Spring Security supports several hashing algorithms, such as MD5 (Md5PasswordEncoder), SHA (ShaPasswordEncoder), and BCrypt (BCryptPasswordEncoder), for password encryption. To enable a password encoder, use the <password-encoder/> element and set the hash attribute, as shown in the following code snippet:

<authentication-manager>
  <authentication-provider>
    <password-encoder hash="md5" />
    <jdbc-user-service data-source-ref="dataSource"
    . . .
  </authentication-provider>
</authentication-manager>

Mail support in the Spring Framework

The Spring Framework provides a simplified API and plug-in for full e-mail support, which minimizes the impact of the underlying e-mailing system specifications. The Spring e-mail support provides an abstract, easy, and implementation-independent API for sending e-mails. The Spring Framework provides an API that simplifies the use of the JavaMail API; its classes handle initialization, cleanup operations, and exceptions. The packages for the JavaMail API support provided by the Spring Framework are listed as follows:

org.springframework.mail: This defines the basic set of classes and interfaces for sending e-mails.
org.springframework.mail.javamail: This defines JavaMail API-specific classes and interfaces for sending e-mails.
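As a brief illustration of these packages, the sketch below sends a plain-text message with JavaMailSenderImpl and SimpleMailMessage; the SMTP host and addresses are placeholder values, not configuration from the book:

import org.springframework.mail.SimpleMailMessage;
import org.springframework.mail.javamail.JavaMailSenderImpl;

public class MailExample {
  public static void main(String[] args) {
    // Configure the sender with a placeholder SMTP host
    JavaMailSenderImpl mailSender = new JavaMailSenderImpl();
    mailSender.setHost("smtp.example.com");

    // Build and send a simple text message
    SimpleMailMessage message = new SimpleMailMessage();
    message.setTo("recipient@example.com");
    message.setSubject("Greetings from Spring");
    message.setText("This is a test message sent via Spring's mail support.");
    mailSender.send(message);
  }
}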
Spring's Java Messaging Service (JMS)

Java Message Service (JMS) is a Java message-oriented middleware (MOM) API responsible for sending messages between two or more clients. JMS is part of the Java Enterprise Edition. JMS acts as a broker, similar to a postman, sitting as middleware between the message sender and the receiver. A message is nothing but bytes of data or information exchanged between two parties; depending on the specification used, a message can be described in various ways. In essence, however, it is an entity of communication. A message can be used to transfer a piece of information from one application to another, which may or may not run on the same platform.

The JMS application

Let's look at a sample JMS application, as shown in the following diagram: we have a Sender and a Receiver. The Sender is responsible for sending a message, and the Receiver is responsible for receiving a message. We need a broker, or MOM, between the Sender and Receiver, which takes the sender's message and passes it over the network to the receiver. Message-oriented middleware (MOM) is basically an MQ application, such as ActiveMQ or IBM MQ, which are two different message providers. The sender benefits from loose coupling and can be a .NET or mainframe-based application. The receiver can be a Java or Spring-based application, and it can send messages back to the sender as well. This is a two-way, loosely coupled communication.

Summary

This article covered the architecture of the Spring Framework and how to set up the key components of the Spring application development environment.

Resources for Article:

Further resources on this subject:
Creating an Extension in Yii 2 [article]
Serving and processing forms [article]
Time Travelling with Spring [article]


Less with External Applications and Frameworks

Packt
30 Apr 2015
11 min read
In this article by Bass Jobsen, author of the book Less Web Development Essentials - Second Edition, we will cover the following topics:

WordPress and Less
Using Less with the Play framework, AngularJS, Meteor, and Rails

(For more resources related to this topic, see here.)

WordPress and Less

Nowadays, WordPress is not only used for weblogs; it can also be used as a content management system for building a website. The WordPress system, written in PHP, has been split into the core system, plugins, and themes. The plugins add additional functionality to the system, and the themes handle the look and feel of a website built with WordPress. Plugins and themes work independently of each other; a theme does not depend on plugins. WordPress themes define the global CSS for a website, but every plugin can also add its own CSS code. WordPress theme developers can use Less to compile the CSS code of the themes and the plugins.

Using the Sage theme by Roots with Less

Sage is a WordPress starter theme. You can use it to build your own theme. The theme is based on HTML5 Boilerplate (http://html5boilerplate.com/) and Bootstrap. Visit the Sage theme website at https://roots.io/sage/. Sage can also be completely built using Gulp. More information about how to use Gulp and Bower for WordPress development can be found at https://roots.io/sage/docs/theme-development/. After downloading Sage, the Less files can be found at assets/styles/. These files include Bootstrap's Less files. The assets/styles/main.less file imports the main Bootstrap Less file, bootstrap.less. Now, you can edit main.less to customize your theme. You will have to rebuild the Sage theme after making changes. You can use all of Bootstrap's variables to customize your build.

JBST with a built-in Less compiler

JBST is also a WordPress starter theme. JBST is intended to be used with so-called child themes. More information about WordPress child themes can be found at https://codex.wordpress.org/Child_Themes. After installing JBST, you will find a Less compiler under Appearance in your Dashboard pane, as shown in the following screenshot:

JBST's built-in Less compiler in the WordPress Dashboard

The built-in Less compiler can be used to fully customize your website using Less. Bootstrap also forms the skeleton of JBST, and the default settings are gathered from the a11y bootstrap theme mentioned earlier. JBST's Less compiler can be used in the following different ways:

First, the compiler accepts any custom-written Less (and CSS) code. For instance, to change the color of the h1 elements, you simply edit and recompile the code as follows:

h1 {color: red;}

Secondly, you can edit Bootstrap's variables and (re)use Bootstrap's mixins. To set the background color of the navbar component and add a custom button, you can use the following code block in the Less compiler:

@navbar-default-bg: blue;
.btn-colored {
  .button-variant(blue;red;green);
}

Thirdly, you can set JBST's built-in Less variables as follows:

@footer_bg_color: black;

Lastly, JBST has its own set of mixins. To set a custom font, you can use the following mixin:

.include-custom-font(@family: arial, @font-path, @path: @custom-font-dir, @weight: normal, @style: normal);

In the preceding code, the parameters are used to set the font name (@family) and the path to the font files (@path/@font-path). The @weight and @style parameters set the font's properties.
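As a rough, hypothetical illustration (the font name and file name below are invented, and the exact behaviour depends on how JBST implements the mixin), a call with named arguments might look like this:

// Illustrative only: load a custom font named "Roboto" from the custom font directory
.include-custom-font(@family: 'Roboto'; @font-path: 'roboto-regular'; @weight: 300);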
For more information, visit https://github.com/bassjobsen/Boilerplate-JBST-Child-Theme. More Less code blocks can also be added to a special file (wpless2css/wpless2css.less or less/custom.less); these files give you the option to add, for example, a library of prebuilt mixins. After adding the library through this file, the mixins can also be used with the built-in compiler.

The Semantic UI WordPress theme

The Semantic UI, as discussed earlier, offers its own WordPress plugin. The plugin can be downloaded from https://github.com/ProjectCleverWeb/Semantic-UI-WordPress. After installing and activating this theme, you can use your website directly with the Semantic UI. With the default settings, your website will look like the following screenshot:

Website built with the Semantic UI WordPress theme

WordPress plugins and Less

As discussed earlier, WordPress plugins have their own CSS. This CSS will be added to the page like a normal style sheet, as shown here:

<link rel='stylesheet' id='plugin-name'
  href='//domain/wp-content/plugin-name/plugin-name.css?ver=2.1.2'
  type='text/css' media='all' />

Unless a plugin provides the Less files for its CSS code, it will not be easy to manage its styles with Less.

The WP Less to CSS plugin

The WP Less to CSS plugin, which can be found at http://wordpress.org/plugins/wp-less-to-css/, offers the possibility of styling your WordPress website with Less. As with the built-in compiler of JBST seen earlier, you can enter Less code, which will then be compiled into the website's CSS. This plugin compiles Less with the PHP Less compiler, Less.php.

Using Less with the Play framework

The Play framework helps you build lightweight and scalable web applications using Java or Scala. It is interesting to learn how to integrate Less with the workflow of the Play framework. You can install the Play framework from https://www.playframework.com/. To learn more about the Play framework, you can also read Learning Play! Framework 2, Andy Petrella, Packt Publishing. To read Petrella's book, visit https://www.packtpub.com/web-development/learning-play-framework-2.

To run the Play framework, you need JDK 6 or later. The easiest way to install the Play framework is by using the Typesafe Activator tool. After installing the Activator tool, you can run the following command:

> activator new my-first-app play-scala

The preceding command will install a new app in the my-first-app directory. Using the play-java option instead of the play-scala option in the preceding command will lead to the installation of a Java-based app. Later on, you can add Scala code to a Java app or Java code to a Scala app. After installing a new app with the activator command, you can run it by using the following commands:

cd my-first-app
activator run

Now, you can find your app at http://localhost:9000. To enable Less compilation, you simply add the sbt-less plugin to your plugins.sbt file as follows:

addSbtPlugin("com.typesafe.sbt" % "sbt-less" % "1.0.6")

After enabling the plugin, you can edit the build.sbt file to configure Less. You should save the Less files in app/assets/stylesheets/. Note that each file in app/assets/stylesheets/ will compile into a separate CSS file.
The CSS files will be saved in public/stylesheets/ and should be referenced in your templates with the HTML code shown here:

<link rel="stylesheet" href="@routes.Assets.at("stylesheets/main.css")">

In case you are using a library with more files imported into the main file, you can define filters in the build.sbt file. The filters for these so-called partial source files can look like the following code:

includeFilter in (Assets, LessKeys.less) := "*.less"
excludeFilter in (Assets, LessKeys.less) := "_*.less"

The preceding filters ensure that files starting with an underscore are not compiled into CSS.

Using Bootstrap with the Play framework

Bootstrap is a CSS framework. Bootstrap's Less code includes many files. Keeping your code up to date by using partials, as described in the preceding section, will not work well. Alternatively, you can use WebJars with Play for this purpose. To enable the Bootstrap WebJar, you should add the code shown here to your build.sbt file:

libraryDependencies += "org.webjars" % "bootstrap" % "3.3.2"

When using the Bootstrap WebJar, you can import Bootstrap into your project as follows:

@import "lib/bootstrap/less/bootstrap.less";

AngularJS and Less

AngularJS is a structural framework for dynamic web apps. It extends the HTML syntax, and this enables you to create dynamic web views. Of course, you can use AngularJS with Less. You can read more about AngularJS at https://angularjs.org/. The HTML code shown here gives you an example of what repeating HTML elements with AngularJS looks like:

<!doctype html>
<html ng-app>
<head>
  <title>My Angular App</title>
</head>
<body>
  <ul>
    <li ng-repeat="item in [1,2,3]">{{ item }}</li>
  </ul>
  <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.3.12/angular.min.js"></script>
</body>
</html>

This code should make your page look like the following screenshot:

Repeating the HTML elements with AngularJS

The ngBoilerplate system

The ngBoilerplate system is an easy way to start a project with AngularJS. The project comes with a directory structure for your application and a Grunt build process, including a Less task and other useful libraries. To start your project, simply run the following commands on your console:

> git clone git://github.com/ngbp/ngbp
> cd ngbp
> sudo npm -g install grunt-cli karma bower
> npm install
> bower install
> grunt watch

Then, open /path/to/ngbp/build/index.html in your browser. After installing ngBoilerplate, you can write your Less code in src/less/main.less. By default, only src/less/main.less will be compiled into CSS; other libraries and other code should be imported into this file.

Meteor and Less

Meteor is a complete open source platform for building web and mobile apps in pure JavaScript. Meteor focuses on fast development. You can publish your apps for free on Meteor's servers. Meteor is available for Linux and OS X; you can also install it on Windows. Installing Meteor is as simple as running the following command on your console:

> curl https://install.meteor.com | /bin/sh

You should install the Less package to compile the CSS code of the app with Less. You can install the Less package by running the command shown here:

> meteor add less

Note that the Less package compiles every file with the .less extension into CSS. For each file with the .less extension, a separate CSS file is created.
When you use partial Less files that should only be imported (with the @import directive) and not compiled into CSS themselves, you should give these partials the .import.less extension. When using CSS frameworks or libraries with many partials, renaming the files by adding the .import.less extension will hinder you in keeping your code up to date. Also, running postprocess tasks for the CSS code is not always possible. Many packages for Meteor are available at https://atmospherejs.com/. Some of these packages can help you solve the issue with partials mentioned earlier. To use Bootstrap, you can use the meteor-bootstrap package. The meteor-bootstrap package can be found at https://github.com/Nemo64/meteor-bootstrap. The meteor-bootstrap package requires the installation of the Less package. Other packages provide postprocess tasks, such as autoprefixing your code.

Ruby on Rails and Less

Ruby on Rails, or Rails for short, is a web application development framework written in the Ruby language. Those who want to start developing with Ruby on Rails can read the Getting Started with Rails guide, which can be found at http://guides.rubyonrails.org/getting_started.html. In this section, you can read how to integrate Less into a Ruby on Rails app. After installing the tools and components required for starting with Rails, you can launch a new application by running the following command on your console:

> rails new blog

Now, you should integrate Less with Rails. You can use less-rails (https://github.com/metaskills/less-rails) to bring Less to Rails. Open the Gemfile, comment out the sass-rails gem, and add the less-rails gem, as shown here:

#gem 'sass-rails', '~> 5.0'
gem 'less-rails' # Less
gem 'therubyracer' # Ruby

Then, create a controller called welcome with an action called index by running the following command:

> bin/rails generate controller welcome index

The preceding command will generate app/views/welcome/index.html.erb. Open app/views/welcome/index.html.erb and make sure that it contains the HTML code shown here:

<h1>Welcome#index</h1>
<p>Find me in app/views/welcome/index.html.erb</p>

The next step is to create a file, app/assets/stylesheets/welcome.css.less, with the Less code. The Less code in app/assets/stylesheets/welcome.css.less looks as follows:

@color: red;
h1 { color: @color; }

Now, start a web server with the following command:

> bin/rails server

Finally, you can visit the application at http://localhost:3000/. The application should look like the example shown here:

The Rails app

Summary

In this article, you learned how to use Less with WordPress, Play, Meteor, AngularJS, and Ruby on Rails.

Resources for Article:

Further resources on this subject:
Media Queries with Less [article]
Bootstrap 3 and other applications [article]
Getting Started with Bootstrap [article]


Machine Learning

Packt
30 Apr 2015
15 min read
In this article by Alberto Boschetti and Luca Massaron, authors of the book Python Data Science Essentials, you will learn how to deal with big data and get an overview of Stochastic Gradient Descent (SGD).

(For more resources related to this topic, see here.)

Dealing with big data

Big data challenges data science projects from four points of view: volume (data quantity), velocity, variety, and veracity. The Scikit-learn package offers a range of classes and functions that will help you effectively work with data so big that it cannot entirely fit in the memory of your computer, no matter what its specifications may be. Before providing you with an overview of big data solutions, we have to create or import some datasets in order to give you a better idea of the scalability and performance of different algorithms. This will require about 1.5 gigabytes of your hard disk, which will be freed after the experiment.

Creating some big datasets as examples

As a typical example of big data analysis, we will use some textual data from the Internet and take advantage of the available fetch_20newsgroups, which contains data of 11,314 posts, each one averaging about 206 words, that appeared in 20 different newsgroups:

In:
import numpy as np
from sklearn.datasets import fetch_20newsgroups
newsgroups_dataset = fetch_20newsgroups(shuffle=True,
    remove=('headers', 'footers', 'quotes'), random_state=6)
print 'Posts inside the data: %s' % np.shape(newsgroups_dataset.data)
print 'Average number of words for post: %0.0f' % np.mean([len(text.split(' ')) for text in newsgroups_dataset.data])

Out:
Posts inside the data: 11314
Average number of words for post: 206

Instead, to work out a generic classification example, we will create three synthetic datasets that contain from 100,000 up to 10 million cases. You can create and use any of them according to your computer resources. We will always refer to the largest one for our experiments:

In:
from sklearn.datasets import make_classification
X,y = make_classification(n_samples=10**5, n_features=5, n_informative=3, random_state=101)
D = np.c_[y,X]
np.savetxt('huge_dataset_10__5.csv', D, delimiter=",") # the saved file should be around 14.6 MB
del(D, X, y)
X,y = make_classification(n_samples=10**6, n_features=5, n_informative=3, random_state=101)
D = np.c_[y,X]
np.savetxt('huge_dataset_10__6.csv', D, delimiter=",") # the saved file should be around 146 MB
del(D, X, y)
X,y = make_classification(n_samples=10**7, n_features=5, n_informative=3, random_state=101)
D = np.c_[y,X]
np.savetxt('huge_dataset_10__7.csv', D, delimiter=",") # the saved file should be around 1.46 GB
del(D, X, y)

After creating and using any of the datasets, you can remove them with the following commands:

import os
os.remove('huge_dataset_10__5.csv')
os.remove('huge_dataset_10__6.csv')
os.remove('huge_dataset_10__7.csv')

Scalability with volume

The trick to managing high volumes of data without loading too many megabytes or gigabytes into memory is to incrementally update the parameters of your algorithm until all the observations have been elaborated at least once by the machine learner. This is possible in Scikit-learn thanks to the .partial_fit() method, which has been made available to a certain number of supervised and unsupervised algorithms.
Using the .partial_fit() method and providing some basic information (for example, for classification, you should know beforehand the number of classes to be predicted), you can immediately start fitting your model, even if you have only a single observation or a few of them. This approach is called incremental learning. The chunks of data that you incrementally feed into the learning algorithm are called batches. The critical points of incremental learning are as follows:

Batch size
Data preprocessing
Number of passes with the same examples
Validation and parameter fine-tuning

Batch size generally depends on your available memory. The principle is that the larger the data chunks, the better, since the data sample becomes more representative of the data distribution as its size gets larger.

Data preprocessing is also challenging. Incremental learning algorithms work well with data in the range of [-1,+1] or [0,+1] (for instance, Multinomial Bayes won't accept negative values). However, to scale into such a precise range, you need to know beforehand the range of each variable. Alternatively, you have to either pass over all the data once and record the minimum and maximum values, or derive them from the first batch, trimming the following observations that exceed the initial maximum and minimum values.

The number of passes can become a problem. As you pass the same examples multiple times, you help the predictive coefficients converge to an optimum solution. If you pass too many of the same observations, the algorithm will tend to overfit, that is, it will adapt too much to the data repeated too many times. Some algorithms, like the SGD family, are also very sensitive to the order in which the examples are presented. Therefore, you have to either set their shuffle option (shuffle=True) or shuffle the file rows before the learning starts, keeping in mind that, for efficacy, the order of the rows proposed for learning should be random.

Validation is not easy either, especially if you have to validate against unseen chunks, validate in a progressive way, or hold out some observations from every chunk. The latter is also the best way to reserve a sample for grid search or some other optimization.

In our example, we entrust the SGDClassifier with a log loss (basically, a logistic regression) to learn how to predict a binary outcome given 10**7 observations:

In:
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import MinMaxScaler
import pandas as pd
import numpy as np

streaming = pd.read_csv('huge_dataset_10__7.csv', header=None, chunksize=10000)
learner = SGDClassifier(loss='log')
minmax_scaler = MinMaxScaler(feature_range=(0, 1))
cumulative_accuracy = list()
for n,chunk in enumerate(streaming):
    if n == 0:
        minmax_scaler.fit(chunk.ix[:,1:].values)
    X = minmax_scaler.transform(chunk.ix[:,1:].values)
    X[X>1] = 1
    X[X<0] = 0
    y = chunk.ix[:,0]
    if n > 8 :
        cumulative_accuracy.append(learner.score(X,y))
    learner.partial_fit(X,y,classes=np.unique(y))
print 'Progressive validation mean accuracy %0.3f' % np.mean(cumulative_accuracy)

Out: Progressive validation mean accuracy 0.660

First, pandas read_csv allows us to iterate over the file by reading batches of 10,000 observations (the number can be increased or decreased according to your computing resources). We use the MinMaxScaler in order to record the range of each variable on the first batch.
For the following batches, we will use the rule that if a value exceeds one of the limits of [0,+1], it is trimmed to the nearest limit. Eventually, starting from the 10th batch, we will record the accuracy of the learning algorithm on each newly received batch before using it to update the training. In the end, the accumulated accuracy scores are averaged, offering a global performance estimation.

Keeping up with velocity

There are various algorithms that work with incremental learning. For classification, we will recall the following:

sklearn.naive_bayes.MultinomialNB
sklearn.naive_bayes.BernoulliNB
sklearn.linear_model.Perceptron
sklearn.linear_model.SGDClassifier
sklearn.linear_model.PassiveAggressiveClassifier

For regression, we will recall the following:

sklearn.linear_model.SGDRegressor
sklearn.linear_model.PassiveAggressiveRegressor

As for velocity, they are all comparable in speed. You can try the following script for yourself:

In:
from sklearn.naive_bayes import MultinomialNB
from sklearn.naive_bayes import BernoulliNB
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.preprocessing import MinMaxScaler
import pandas as pd
import numpy as np
from datetime import datetime

classifiers = {
    'SGDClassifier hinge loss' : SGDClassifier(loss='hinge', random_state=101),
    'SGDClassifier log loss' : SGDClassifier(loss='log', random_state=101),
    'Perceptron' : Perceptron(random_state=101),
    'BernoulliNB' : BernoulliNB(),
    'PassiveAggressiveClassifier' : PassiveAggressiveClassifier(random_state=101)
}
huge_dataset = 'huge_dataset_10__6.csv'
for algorithm in classifiers:
    start = datetime.now()
    minmax_scaler = MinMaxScaler(feature_range=(0, 1))
    streaming = pd.read_csv(huge_dataset, header=None, chunksize=100)
    learner = classifiers[algorithm]
    cumulative_accuracy = list()
    for n,chunk in enumerate(streaming):
        y = chunk.ix[:,0]
        X = chunk.ix[:,1:]
        if n > 50 :
            cumulative_accuracy.append(learner.score(X,y))
        learner.partial_fit(X,y,classes=np.unique(y))
    elapsed_time = datetime.now() - start
    print algorithm + ' : mean accuracy %0.3f in %s secs' % (np.mean(cumulative_accuracy),elapsed_time.total_seconds())

Out:
BernoulliNB : mean accuracy 0.734 in 41.101 secs
Perceptron : mean accuracy 0.616 in 37.479 secs
SGDClassifier hinge loss : mean accuracy 0.712 in 38.43 secs
SGDClassifier log loss : mean accuracy 0.716 in 39.618 secs
PassiveAggressiveClassifier : mean accuracy 0.625 in 40.622 secs

As a general note, remember that smaller batches are slower, since they imply more disk access from a database or a file, which is always a bottleneck.

Dealing with variety

Variety is a big data characteristic. This is especially true when we are dealing with textual data or very large categorical variables (for example, variables storing website names in programmatic advertising). As you learn from batches of examples, and as you unfold categories or words, each one becomes an appropriate and exclusive variable. You may find it difficult to handle the challenge of variety and the unpredictability of large streams of data. The Scikit-learn package provides you with a simple and fast way to implement the hashing trick and completely forget about the problem of defining a rigid variable structure in advance. The hashing trick uses hash functions and sparse matrices in order to save your time, resources, and hassle. Hash functions map any input they receive in a deterministic way.
It doesn't matter whether you feed them with numbers or strings, they will always provide you with an integer number in a certain range. Sparse matrices are, instead, arrays that record only values that are not zero, since their default value is zero for any combination of row and column. Therefore, the hashing trick bounds every possible input, even one never seen before, to a certain range or position in a corresponding input sparse matrix, where the corresponding cell is loaded with a nonzero value. For instance, if your input is Python, a hashing command like abs(hash('Python')) can transform that into the integer number 539294296 and then assign the value of 1 to the cell at the 539294296 column index. The hash function is a very fast and convenient way to always obtain the same column index given the same input. Using only absolute values ensures that each index corresponds to a single column in our array (negative indexes start from the last column, so in Python each column of an array can be addressed by both a positive and a negative number).

The example that follows uses the HashingVectorizer class, a convenient class that automatically takes documents, separates the words, and transforms them, thanks to the hashing trick, into an input matrix. The script aims at learning why posts are published in 20 distinct newsgroups, on the basis of the words used in the existing posts in those newsgroups:

In:
import pandas as pd
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.feature_extraction.text import HashingVectorizer

def streaming():
    for response, item in zip(newsgroups_dataset.target, newsgroups_dataset.data):
        yield response, item

hashing_trick = HashingVectorizer(stop_words='english', norm='l2', non_negative=True)
learner = SGDClassifier(random_state=101)
texts = list()
targets = list()
for n,(target, text) in enumerate(streaming()):
    texts.append(text)
    targets.append(target)
    if n % 1000 == 0 and n > 0:
        learning_chunk = hashing_trick.transform(texts)
        if n > 1000:
            last_validation_score = learner.score(learning_chunk, targets)
        learner.partial_fit(learning_chunk, targets, classes=[k for k in range(20)])
        texts, targets = list(), list()
print 'Last validation score: %0.3f' % last_validation_score

Out: Last validation score: 0.710

At this point, no matter what text you input, the predictive algorithm will always answer by pointing out a class. In our case, it points out a newsgroup suitable for the post to appear in. Let's try out this algorithm with a text taken from a classified ad:

In:
New_text = ['A 2014 red Toyota Prius v Five with fewer than 14K miles. Powered by a reliable 1.8L four cylinder hybrid engine that averages 44mpg in the city and 40mpg on the highway.']
text_vector = hashing_trick.transform(New_text)
print np.shape(text_vector), type(text_vector)
print 'Predicted newsgroup: %s' % newsgroups_dataset.target_names[learner.predict(text_vector)]

Out:
(1, 1048576) <class 'scipy.sparse.csr.csr_matrix'>
Predicted newsgroup: rec.autos

Naturally, you may change the New_text variable and discover in which newsgroup your text would most likely be posted. Note that the HashingVectorizer class has transformed the text into a csr_matrix (which is quite an efficient sparse matrix) with about 1 million columns.
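To see the core idea in isolation, here is a toy sketch (not from the book) that hashes tokens into a fixed-width sparse row; production code such as HashingVectorizer relies on a dedicated, stable hash function rather than Python's built-in hash:

import scipy.sparse as sp

def toy_hashing_trick(tokens, n_columns=2**20):
    # One row, n_columns possible features, stored sparsely
    row = sp.lil_matrix((1, n_columns))
    for token in tokens:
        # Bound any token, seen or unseen, to a column index in [0, n_columns)
        row[0, abs(hash(token)) % n_columns] = 1
    return row.tocsr()

vector = toy_hashing_trick('a reliable hybrid engine'.split())
print vector.shape, vector.nnz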
A quick overview of Stochastic Gradient Descent (SGD)

We will close this article on big data with a quick overview of the SGD family, comprising SGDClassifier (for classification) and SGDRegressor (for regression). Like other classifiers, they can be fit by using the .fit() method (passing the in-memory dataset row by row to the learning algorithm) or the previously seen .partial_fit() method based on batches. In the latter case, if you are classifying, you have to declare the predicted classes with the classes parameter, which accepts a list containing all the class codes that it should expect to meet during the training phase.

SGDClassifier behaves as a logistic regression when the loss parameter is set to log. It transforms into a linear SVC if the loss is set to hinge. It can also take the form of other loss functions, or even of loss functions working for regression. SGDRegressor mimics a linear regression using the squared_loss loss parameter. The huber loss instead transforms the squared loss into a linear loss over a certain distance epsilon (another parameter to be fixed). It can also act as a linear SVR using the epsilon_insensitive loss function or the slightly different squared_epsilon_insensitive (which penalizes outliers more). As in other situations with machine learning, the performance of the different loss functions on your data science problem cannot be estimated a priori. Anyway, please take into account that if you are doing classification and you need an estimation of class probabilities, you will be limited in your choice to log or modified_huber only.

The key parameters that require tuning for this algorithm to work best with your data are:

n_iter: The number of iterations over the data; the more passes, the better the optimization of the algorithm. However, there is a higher risk of overfitting. Empirically, SGD tends to converge to a stable solution after having seen 10**6 examples. Given your examples, set your number of iterations accordingly.
penalty: You have to choose l1, l2, or elasticnet, which are all different regularization strategies, in order to avoid overfitting because of overparametrization (using too many unnecessary parameters leads to the memorization of observations rather than the learning of patterns). Briefly, l1 tends to reduce unhelpful coefficients to zero, l2 just attenuates them, and elasticnet is a mix of the l1 and l2 strategies.
alpha: This is a multiplier of the regularization term; the higher the alpha, the stronger the regularization. We advise you to find the best alpha value by performing a grid search ranging from 10**-7 to 10**-1.
l1_ratio: The l1 ratio is used for the elastic net penalty.
learning_rate: This sets how much the coefficients are affected by every single example. Usually, it is optimal for classification and invscaling for regression. If you want to use invscaling for classification, you'll have to set eta0 and power_t (invscaling = eta0 / (t**power_t)). With invscaling, you can start with a lower learning rate, which is less than the optimal rate, though it will decrease more slowly.
epsilon: This should be used if your loss is huber, epsilon_insensitive, or squared_epsilon_insensitive.
shuffle: If this is True, the algorithm will shuffle the order of the training data in order to improve the generalization of the learning.
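As a hedged illustration of how these parameters fit together, the following sketch instantiates a classifier with explicit settings; the values are arbitrary starting points rather than recommendations from the book, and the parameter names follow the scikit-learn release used in this article (newer releases rename n_iter to max_iter and 'log' to 'log_loss'):

from sklearn.linear_model import SGDClassifier

# Illustrative settings only: tune alpha and l1_ratio with a grid search
clf = SGDClassifier(loss='log',            # logistic-regression-like behaviour with class probabilities
                    penalty='elasticnet',  # mix of l1 and l2 regularization
                    alpha=1e-4,            # regularization strength
                    l1_ratio=0.15,         # balance between l1 and l2 in elasticnet
                    n_iter=5,              # passes over the data
                    learning_rate='optimal',
                    shuffle=True,
                    random_state=101)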
Summary

In this article, we introduced the essentials of machine learning. We discussed creating some big data examples, scalability with volume, keeping up with velocity, dealing with variety, and a more advanced technique, SGD.

Resources for Article:

Further resources on this subject:
Supervised learning [article]
Installing NumPy, SciPy, matplotlib, and IPython [article]
Driving Visual Analyses with Automobile Data (Python) [article]


Extracting data physically with dd

Packt
30 Apr 2015
10 min read
In this article by Rohit Tamma and Donnie Tindall, authors of the book Learning Android Forensics, we will cover physical data extraction using free and open source tools wherever possible. The majority of the material covered in this article will use ADB methods.

(For more resources related to this topic, see here.)

The dd command should be familiar to any examiner who has done traditional hard drive forensics. The dd command is a Linux command-line utility used, by definition, to convert and copy files, but it is frequently used in forensics to create bit-by-bit images of entire drives. Many variations of the dd command also exist and are commonly used, such as dcfldd, dc3dd, ddrescue, and dd_rescue. As the dd command is built for Linux-based systems, it is frequently included on Android platforms. This means that a method for creating an image of the device often already exists on the device!

The dd command has many options that can be set, of which only the forensically important options are listed here. The format of the dd command is as follows:

dd if=/dev/block/mmcblk0 of=/sdcard/blk0.img bs=4096 conv=notrunc,noerror,sync

if: This option specifies the path of the input file to read from.
of: This option specifies the path of the output file to write to.
bs: This option specifies the block size. Data is read and written in the size of the block specified; it defaults to 512 bytes if not specified.
conv: This option specifies the conversion options as its attributes:
  notrunc: This option does not truncate the output file.
  noerror: This option continues imaging if an error is encountered.
  sync: In conjunction with the noerror option, this option writes 0x00 for blocks with an error. This is important for maintaining file offsets within the image.

Do not mix up the if and of flags; this could result in overwriting the target device! A full list of command options can be found at http://man7.org/linux/man-pages/man1/dd.1.html.

Note that there is an important correlation between the block size and the noerror and sync flags: if an error is encountered, 0x00 will be written for the entire block that was read (as determined by the block size). Thus, smaller block sizes result in less data being missed in the event of an error. The downside is that, typically, smaller block sizes result in a slower transfer rate. An examiner will have to decide whether a timely or a more accurate acquisition is preferred. Booting into recovery mode for the imaging process is the most forensically sound method.

Determining what to image

When imaging a computer, an examiner must first find out what the drive is mounted as; /dev/sda, for example. The same is true when imaging an Android device. The first step is to launch the ADB shell and view the /proc/partitions file using the following command:

cat /proc/partitions

The output will show all partitions on the device. In the output shown in the preceding screenshot, mmcblk0 is the entirety of the flash memory on the device. To image the entire flash memory, we could use /dev/block/mmcblk0 as the input file flag (if) for the dd command. Everything following it, indicated by p1 to p29, is a partition of the flash memory. The size is shown in blocks; in this case, the block size is 1024 bytes, for a total internal storage size of approximately 32 GB. To obtain a full image of the device's internal memory, we would run the dd command with mmcblk0 as the input file.
However, we know that most of these partitions are unlikely to be forensically interesting; we're most likely only interested in a few of them. To view the corresponding names for each partition, we can look in the device's by-name directory. This does not exist on every device, and is sometimes in a different path, but for this device it is found at /dev/block/msm_sdcc.1/by-name. By navigating to that directory and running the ls -al command, we can see to where each block is symbolically linked as shown in the following screenshot: If our investigation was only interested in the userdata partition, we now know that it is mmcblk0p28, and could use that as the input file to the dd command. If the by-name directory does not exist on the device, it may not be possible to identify every partition on the device. However, many of them can still be found by using the mount command within the ADB shell. Note that the following screenshot is from a different device that does not contain a by-name directory, so the data partition is not mmcblk0p28: On this device, the data partition is mmcblk0p34. If the mount command does not work, the same information can be found using the cat /proc/mounts command. Other options to identify partitions depending on the device are the cat /proc/mtd or cat /proc/yaffs commands; these may work on older devices. Newer devices may include an fstab file in the root directory (typically called fstab.<device>) that will list mountable partitions. Writing to an SD card The output file of the dd command can be written to the device's SD card. This should only be done if the suspect SD card can be removed and replaced with a forensically sterile SD to ensure that the dd command's output is not overwriting evidence. Obviously, if writing to an SD card, ensure that the SD card is larger than the partition being imaged. On newer devices, the /sdcard partition is actually a symbolic link to /data/media. In this case, using the dd command to copy the /data partition to the SD card won't work, and could corrupt the device because the input file is essentially being written to itself. To determine where the SD card is symbolically linked to, simply open the ADB shell and run the ls -al command. If the SD card partition is not shown, the SD likely needs to be mounted in recovery mode using the steps shown in this article, Extracting Data Logically from Android Devices. In the following example, /sdcard is symbolically linked to /data/media. This indicates that the dd command's output should not be written to the SD card. In the example that follows, the /sdcard is not a symbolic link to /data, so the dd command's output can be used to write the /data partition image to the SD card: On older devices, the SD card may not even be symbolically linked. After determining which block to read and to where the SD card is symbolically linked, image the /data partition to the /sdcard, using the following command: dd if=/dev/block/mmcblk0p28 of=/sdcard/data.img bs=512 conv=notrunc,noerror,sync Now, an image of the /data partition exists on the SD card. It can be pulled to the examiner's machine with the ADB pull command, or simply read from the SD card. Writing directly to an examiner's computer with netcat If the image cannot be written to the SD card, an examiner can use netcat to write the image directly to their machine. The netcat tool is a Linux-based tool used for transferring data over a network connection. 
We recommend using a Linux or Mac computer for netcat, as it is built in, though Windows versions do exist. The examples below were done on a Mac.

Installing netcat on the device

Very few Android devices, if any, come with netcat installed. To check, simply open the ADB shell and type nc. If it returns saying nc is not found, netcat will have to be installed manually on the device. Netcat compiled for Android can be found in many places online. We have shared the version we used at http://sourceforge.net/projects/androidforensics-netcat/files/.

If we look back at the results from our mount command in the previous section, we can see that the /dev partition is mounted as tmpfs. The Linux term tmpfs means that the partition is meant to appear as an actual filesystem on the device, but is truly only stored in RAM. This means we can push netcat here without making any permanent changes to the device, using the following command on the examiner's computer:

adb push nc /dev/Examiner_Folder/nc

The command should have created Examiner_Folder in /dev, and nc should be in it. This can be verified by running the following command in the ADB shell:

ls /dev/Examiner_Folder

Using netcat

Now that the netcat binary is on the device, we need to give it permission to execute from the ADB shell. This can be done as follows:

chmod +x /dev/Examiner_Folder/nc

We will need two terminal windows open, with the ADB shell open in one of them. The other will be used to listen to the data being sent from the device. Now we need to enable port forwarding over ADB from the examiner's computer:

adb forward tcp:9999 tcp:9999

9999 is the port we chose to use for netcat; it can be any arbitrary port number between 1023 and 65535 on a Linux or Mac system (1023 and below are reserved for system processes, and require root permission to use). Windows will allow any port to be assigned. In the terminal window with the ADB shell, run the following command:

dd if=/dev/block/mmcblk0p34 bs=512 conv=notrunc,noerror,sync | /dev/Examiner_Folder/nc -l -p 9999

mmcblk0p34 is the user data partition on this device; however, the entire flash memory or any other partition could also be imaged with this method. In most cases, it is best practice to image the entirety of the flash memory in order to acquire all possible data from the device. Some commercial forensic tools may also require the entire memory image, and may not properly handle an image of a single partition. In the other terminal window, run:

nc 127.0.0.1 9999 > data_partition.img

The data_partition.img file should now be created in the current directory of the examiner's computer. When the data is finished transferring, netcat in both terminals will terminate and return to the command prompt. The process can take a significant amount of time depending on the size of the image.
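On the examiner's side, the final nc 127.0.0.1 9999 > data_partition.img step can be replaced with a few lines of Python if a local netcat is not available. The sketch below is a stand-in built on stated assumptions rather than the book's procedure: it expects that adb forward tcp:9999 tcp:9999 is already active and that the device-side dd | nc listener from the previous step is running:

# Minimal stand-in for 'nc 127.0.0.1 9999 > data_partition.img'.
# Assumes 'adb forward tcp:9999 tcp:9999' is active and the device-side
# dd | nc pipeline is already listening on port 9999.
import socket

def receive_image(out_path="data_partition.img", host="127.0.0.1", port=9999):
    total = 0
    with socket.create_connection((host, port)) as sock, open(out_path, "wb") as img:
        while True:
            chunk = sock.recv(65536)
            if not chunk:            # device-side netcat closed the connection
                break
            img.write(chunk)
            total += len(chunk)
    print("Received {} bytes into {}".format(total, out_path))

if __name__ == "__main__":
    receive_image()

Whichever listener you use, it is worth comparing the resulting image's size (and, ideally, a hash) against the source partition before relying on it.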
Summary

This article discussed techniques used for physically imaging internal memory or SD cards with dd. Key characteristics and limitations of dd to keep in mind are as follows:

Usually pre-installed on the device
May not work on MTD blocks
Does not obtain the Out-of-Band area

Additionally, each imaging technique can be used either to save the image on the device (typically on the SD card), or with netcat to write the file to the examiner's computer.

Writing to the SD card:
Easy, doesn't require additional binaries to be pushed to the device
Familiar to most examiners
Cannot be used if the SD card is symbolically linked to the partition being imaged
Cannot be used if the entire memory is being imaged

Using netcat:
Usually requires yet another binary to be pushed to the device
Somewhat complicated, must follow the steps exactly
Works no matter what is being imaged
May be more time consuming than writing to the SD card

Resources for Article: Further resources on this subject: Reversing Android Applications [article] Introduction to Mobile Forensics [article] Processing the Case [article]

How to integrate social media with your WordPress website

Packt
29 Apr 2015
6 min read
In this article by Karol Krol, the author of the WordPress 4.x Complete, we will look at how we can integrate our website with social media. We will list some more ways in which you can make your site social media friendly, and also see why you'd want to do that in the first place. Let's start with the why. In this day and age, social media is one of the main drivers of traffic for many sites. Even if you just want to share your content with friends and family, or you have some serious business plans regarding your site, you need to have at least some level of social media integration. Even if you install just simple social media share buttons, you will effectively encourage your visitors to pass on your content to their followers, thus expanding your reach and making your content more popular. (For more resources related to this topic, see here.) Making your blog social media friendly There are a handful of ways to make your site social media friendly. The most common approaches are as follows: Social media share buttons, which allow your visitors to share your content with their friends and followers Social media APIs integration, which make your content look better on social media (design wise) Automatic content distribution to social media Social media metrics tracking Let's discuss these one by one. Setting up social media share buttons There are hundreds of social media plugins available out there that allow you to display a basic set of social media buttons on your site. The one I advise you to use is called Social Share Starter (http://bit.ly/sss-plugin). Its main advantage is that it's optimized to work on new and low-traffic sites, and doesn't show any negative social proof when displaying the buttons and their share numbers. Setting up social media APIs' integration The next step worth taking to make your content appear more attractive on social media is to integrate it with some social media APIs; particularly that of Twitter. What exactly their API is and how it works isn't very relevant for the WordPress discussion we're having here. So instead, let's just focus on what the outcome of integrating your site with this API is. Here's what a standard tweet mentioning a website usually looks like (please notice the overall design, not the text contents): Here's a different tweet, mentioning an article from a site that has Twitter's (Twitter Cards) API enabled: This looks much better. Luckily, having this level of Twitter integration is quite easy. All you need is a plugin called JM Twitter Cards (available at https://wordpress.org/plugins/jm-twitter-cards/). After installing and activating it, you will be guided through the process of setting everything up and approving your site with Twitter (mandatory step). Setting up automatic content distribution to social media The idea behind automatic social media distribution of your content is that you don't have to remember to do so manually whenever you publish a new post. Instead of copying and pasting the URL address of your new post by hand to each individual social media platform, you can have this done automatically. This can be done in many ways, but let's discuss the two most usable ones, the Jetpack and Revive Old Post plugins. The Jetpack plugin The Jetpack plugin is available at https://wordpress.org/plugins/jetpack/. One of Jetpack's modules is called Publicize. You can activate it by navigating to the Jetpack | Settings section of the wp-admin. 
After doing so, you will be able to go to Settings | Sharing and integrate your site with one of the six available social media platforms: After going through the process of authorizing the plugin with each service, your site will be fully capable of posting each of your new posts to social media automatically. The Revive Old Post plugin The Revive Old Post plugin is available at https://revive.social/plugins/revive-old-post. While the Jetpack plugin takes the newest posts on your site and distributes them to your various social media accounts, the Revive Old Post plugin does the same with your archived posts, ultimately giving them a new life. Hence the name Revive Old Post. After downloading and activating this plugin, go to its section in the wp-admin Revive Old Post. Then, switch to the Accounts tab. There, you can enable the plugin to work with your social media accounts by clicking on the authorization buttons: Then, go to the General settings tab and handle the time intervals and other details of how you want the plugin to work with your social media accounts. When you're done, just click on the SAVE button. At this point, the plugin will start operating automatically and distribute your random archived posts to your social media accounts. Note that it's probably a good idea not to share things too often if you don't want to anger your followers and make them unfollow you. For that reason, I wouldn't advise posting more than once a day. Setting up social media metrics tracking The final element in our social media integration puzzle is setting up some kind of tracking mechanism that would tell us how popular our content is on social media (in terms of shares). Granted, you can do this manually by going to each of your posts and checking their share numbers individually (provided you have the Social Share Starter plugin installed). However, there's a quicker method, and it involves another plugin. This one is called Social Metrics Tracker and you can get it at https://wordpress.org/plugins/social-metrics-tracker/. In short, this plugin collects social share data from a number of platforms and then displays them to you in a single readable dashboard view. After you install and activate the plugin, you'll need to give it a couple of minutes for it to crawl through your social media accounts and get the data. Soon after that, you will be able to visit the plugin's dashboard by going to the Social Metrics section in the wp-admin: For some webhosts and setups, this plugin might end up consuming too much of the server's resources. If this happens, consider activating it only occasionally to check your results and then deactivate it again. Doing this even once a week will still give you a great overview of how well your content is performing on social media. This closes our short guide on how to integrate your WordPress site with social media. I'll admit that we're just scratching the surface here and that there's a lot more that can be done. There are new social media plugins being released literally every week. That being said, the methods described here are more than enough to make your WordPress site social media friendly and enable you to share your content effectively with your friends, family, and audience. Summary Here, we talked about social media integration, tools, and plugins that can make your life a lot easier as an online content publisher. 
Resources for Article: Further resources on this subject: FAQs on WordPress 3 [article] Creating Blog Content in WordPress [article] Customizing WordPress Settings for SEO [article]

Auto updating child records in Process Builder

Packt
29 Apr 2015
5 min read
In this article by Rakesh Gupta, the author of the book Learning Salesforce Visual Workflow, we will discuss how to auto update child records using Process Builder of Salesforce. There are several business use cases where a customer wants to update child records based on some criteria, for example, auto-updating all related Opportunity to Closed-Lost if an account is updated to Inactive. To achieve these types of business requirements, you can use the Apex trigger. You can also achieve these types of requirements using the following methods: Process Builder A combination of Flow and Process Builder A combination of Flow and Inline Visualforce page on the account detail page (For more resources related to this topic, see here.) We will use Process Builder to solve these types of business requirements. Let's start with a business requirement. Here is a business scenario: Alice Atwood is working as a system administrator in Universal Container. She has received a requirement that once an account gets activated, the account phone must be synced with the related contact asst. phone field. This means whenever an account phone fields gets updated, the same phone number will be copied to the related contacts asst. phone field. Follow these instructions to achieve the preceding requirement using Process Builder: First of all, navigate to Setup | Build | Customize | Accounts | Fields and make sure that the Active picklist is available in your Salesforce organization. If it's not available, create a custom Picklist field with the name as Active, and enter the Yes and No values. To create a Process, navigate to Setup | Build | Create | Workflow & Approvals | Process Builder, click on New Button, and enter the following details: Name: Enter the name of the Process. Enter Update Contacts Asst Phone in Name. This must be within 255 characters. API Name: This will be autopopulated based on the name. Description: Write some meaningful text so that other developers or administrators can easily understand why this Process is created. The properties window will appear as shown in the following screenshot: Once you are done, click on the Save button. It will redirect you to the Process canvas, which allows you to create or modify the Process. After Define Process Properties, the next task is to select the object on which you want to create a Process and define the evaluation criteria. For this, click on the Add Object node. It will open an additional window on the right side of the Process canvas screen, where you have to enter the following details: Object: Start typing and then select the Account object. Start the process: For Start the process, select when a record is created or edited. This means the Process will fire every time, irrespective of record creation or updating. Allow process to evaluate a record multiple times in a single transaction?: Select this checkbox only when you want the Process to evaluate the same record up to five times in a single transaction. It might re-examine the record because a Process, Workflow Rule, or Flow may have updated the record in the same transaction. In this case, leave this unchecked. This window will appear as shown in the following screenshot: Once you are done with adding the Process criteria, click on the Save button. Similar to the Workflow Rule, once you save the panel, it doesn't allow you to change the selected object. After defining the evaluation criteria, the next step is to add the Process criteria. 
Once the Process criteria are true, only then will the Process execute the associated actions. To define the Process criteria, click on the Add Criteria node. It will open an additional window on the right side of the Process canvas screen, where you have to enter the following details: Criteria Name: Enter a name for the criteria node. Enter Update Contacts in Criteria Name. Criteria for Executing Actions: Select the type of criteria you want to define. You can use either a formula or a filter to define the Process criteria or no criteria. In this case, select Active equals to Yes. This means the Process will fire only when the account is active. This window will appear as shown in the following screenshot: Once you are done with defining the Process criteria, click on the Save button. Once you are done with the Process criteria node, the next step is to add an immediate action to update the related contact's asst. phone field. For this, we will use the Update Records action available under Process. Click on Add Action available under IMMEDIATE ACTIONS. It will open an additional window on the right side of the Process canvas screen, where you have to enter the following details: Action Type: Select the type of action. In this case, select Update Records. Action Name: Enter a name for this action. Enter Update Assts Phone in Action Name. Object: Start typing and then select the [Account].Contacts object. Field: Map the Asst. Phone field with the [Account]. Phone field. To select the fields, you can use field picker. To enter the value, use the text entry field. It will appear as shown in the following screenshot: Once you are done, click on the Save button. Once you are done with the immediate action, the final step is to activate it. To activate a Process, click on the Activate button available on the button bar. From now on, if you try to update an active account, Process will automatically update the related contact's asst. phone with the value available in the account phone field. Summary In this article, we have learned the technique of auto updating records in Process Builder. Resources for Article: Further resources on this subject: Visualforce Development with Apex [Article] Configuration in Salesforce CRM [Article] Introducing Salesforce Chatter [Article]

Patterns for Data Processing

Packt
29 Apr 2015
12 min read
In this article by Marcus Young, the author of the book Implementing Cloud Design Patterns for AWS, we will cover the following patterns: Queuing chain pattern Job observer pattern (For more resources related to this topic, see here.) Queuing chain pattern In the queuing chain pattern, we will use a type of publish-subscribe model (pub-sub) with an instance that generates work asynchronously, for another server to pick it up and work with. This is described in the following diagram: The diagram describes the scenario we will solve, which is solving fibonacci numbers asynchronously. We will spin up a Creator server that will generate random integers, and publish them into an SQS queue myinstance-tosolve. We will then spin up a second instance that continuously attempts to grab a message from the queue myinstance-tosolve, solves the fibonacci sequence of the numbers contained in the message body, and stores that as a new message in the myinstance-solved queue. Information on the fibonacci algorithm can be found at http://en.wikipedia.org/wiki/Fibonacci_number. This scenario is very basic as it is the core of the microservices architectural model. In this scenario, we could add as many worker servers as we see fit with no change to infrastructure, which is the real power of the microservices model. The first thing we will do is create a new SQS queue. From the SQS console select Create New Queue. From the Create New Queue dialog, enter myinstance-tosolve into the Queue Name text box and select Create Queue. This will create the queue and bring you back to the main SQS console where you can view the queues created. Repeat this process, entering myinstance-solved for the second queue name. When complete, the SQS console should list both the queues. In the following code snippets, you will need the URL for the queues. You can retrieve them from the SQS console by selecting the appropriate queue, which will bring up an information box. The queue URL is listed as URL in the following screenshot: Next, we will launch a creator instance, which will create random integers and write them into the myinstance-tosolve queue via its URL noted previously. From the EC2 console, spin up an instance as per your environment from the AWS Linux AMI. Once it is ready, SSH into it (note that acctarn, mykey, and mysecret need to be replaced with your actual credentials): [ec2-user@ip-10-203-10-170 ~]$ [[ -d ~/.aws ]] && rm -rf ~/.aws/config ||mkdir ~/.aws[ec2-user@ip-10-203-10-170 ~]$ echo $'[default]naws_access_key_id=mykeynaws_secret_access_key=mysecretnregion=us-east-1' > .aws/config[ec2-user@ip-10-203-10-170 ~]$ for i in {1..100}; dovalue=$(shuf -i 1-50 -n 1)aws sqs send-message --queue-url https://queue.amazonaws.com/acctarn/myinstance-tosolve --message-body ${value} >/dev/null 2>&1done Once the snippet completes, we should have 100 messages in the myinstance-tosolve queue, ready to be retrieved. Now that those messages are ready to be picked up and solved, we will spin up a new EC2 instance: again as per your environment from the AWS Linux AMI. 
Once it is ready, SSH into it (note that acctarn, mykey, and mysecret need to be valid and set to your credentials): [ec2-user@ip-10-203-10-169 ~]$ [[ -d ~/.aws ]] && rm -rf ~/.aws/config ||mkdir ~/.aws[ec2-user@ip-10-203-10-169 ~]$ echo $'[default]naws_access_key_id=mykeynaws_secret_access_key=mysecretnregion=us-east-1' > .aws/config[ec2-user@ip-10-203-10-169 ~]$ sudo yum install -y ruby-devel gcc >/dev/null 2>&1[ec2-user@ip-10-203-10-169 ~]$ sudo gem install json >/dev/null 2>&1[ec2-user@ip-10-203-10-169 ~]$ cat <<EOF | sudo tee -a /usr/local/bin/fibsqs >/dev/null 2>&1#!/bin/shwhile [ true ]; dofunction fibonacci {a=1b=1i=0while [ $i -lt $1 ]doprintf "%dn" $alet sum=$a+$blet a=$blet b=$sumlet i=$i+1done}message=$(aws sqs receive-message --queue-url https://queue.amazonaws.com/acctarn/myinstance-tosolve)if [[ -n $message ]]; thenbody=$(echo $message | ruby -e "require 'json'; p JSON.parse(gets)['Messages'][0]['Body']" | sed 's/"//g')handle=$(echo $message | ruby -e "require 'json'; p JSON.parse(gets)['Messages'][0]['ReceiptHandle']" | sed 's/"//g')aws sqs delete-message --queue-url https://queue.amazonaws.com/acctarn/myinstance-tosolve --receipt-handle $handleecho "Solving '${body}'."solved=$(fibonacci $body)parsed_solve=$(echo $solved | sed 's/n/ /g')echo "'${body}' solved."aws sqs send-message --queue-url https://queue.amazonaws.com/acctarn/myinstance-solved --message-body "${parsed_solve}"fisleep 1doneEOF[ec2-user@ip-10-203-10-169 ~]$ sudo chown ec2-user:ec2-user /usr/local/bin/fibsqs && chmod +x /usr/local/bin/fibsqs There will be no output from this code snippet yet, so now let's run the fibsqs command we created. This will continuously poll the myinstance-tosolve queue, solve the fibonacci sequence for the integer, and store it into the myinstance-solved queue: [ec2-user@ip-10-203-10-169 ~]$ fibsqsSolving '48'.'48' solved.{"MD5OfMessageBody": "73237e3b7f7f3491de08c69f717f59e6","MessageId": "a249b392-0477-4afa-b28c-910233e7090f"}Solving '6'.'6' solved.{"MD5OfMessageBody": "620b0dd23c3dddbac7cce1a0d1c8165b","MessageId": "9e29f847-d087-42a4-8985-690c420ce998"} While this is running, we can verify the movement of messages from the tosolve queue into the solved queue by viewing the Messages Available column in the SQS console. This means that the worker virtual machine is in fact doing work, but we can prove that it is working correctly by viewing the messages in the myinstance-solved queue. To view messages, right click on the myinstance-solved queue and select View/Delete Messages. If this is your first time viewing messages in SQS, you will receive a warning box that displays the impact of viewing messages in a queue. Select Start polling for Messages. From the View/Delete Messages in myinstance-solved dialog, select Start Polling for Messages. We can now see that we are in fact working from a queue. Job observer pattern The previous two patterns show a very basic understanding of passing messages around a complex system, so that components (machines) can work independently from each other. While they are a good starting place, the system as a whole could improve if it were more autonomous. Given the previous example, we could very easily duplicate the worker instance if either one of the SQS queues grew large, but using the Amazon-provided CloudWatch service we can automate this process. Using CloudWatch, we might end up with a system that resembles the following diagram: For this pattern, we will not start from scratch but directly from the previous priority queuing pattern. 
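Before walking through those changes, a brief aside on the queue plumbing itself: the send/solve/store cycle that the shell snippets above drive through the AWS CLI can just as easily be scripted from Python. The following rough sketch assumes the boto3 library is installed and reuses the placeholder account and queue names from earlier; it illustrates the same idea and is not part of the pattern's build steps:

# boto3 sketch of the send/solve/store cycle shown earlier with the AWS CLI.
# The queue URLs are placeholders; boto3 reads credentials from the usual
# AWS configuration (~/.aws/config or environment variables).
import random
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
TOSOLVE = "https://queue.amazonaws.com/acctarn/myinstance-tosolve"
SOLVED = "https://queue.amazonaws.com/acctarn/myinstance-solved"

def fibonacci(n):
    """Return the first n Fibonacci numbers, mirroring the shell function."""
    a, b, seq = 1, 1, []
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

# Publish 100 random integers, then drain the queue and solve each one.
for _ in range(100):
    sqs.send_message(QueueUrl=TOSOLVE, MessageBody=str(random.randint(1, 50)))

while True:
    resp = sqs.receive_message(QueueUrl=TOSOLVE, MaxNumberOfMessages=1,
                               WaitTimeSeconds=2)
    messages = resp.get("Messages", [])
    if not messages:
        break                        # queue drained
    msg = messages[0]
    sqs.delete_message(QueueUrl=TOSOLVE, ReceiptHandle=msg["ReceiptHandle"])
    solved = " ".join(str(x) for x in fibonacci(int(msg["Body"])))
    sqs.send_message(QueueUrl=SOLVED, MessageBody=solved)

With that noted, let's return to the job observer pattern.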
The major difference between the previous diagram and the diagram displayed in the priority queuing pattern is the addition of a CloudWatch alarm on the myinstance-tosolve-priority queue, and the addition of an auto scaling group for the worker instances. The behavior of this pattern is that we will define a depth for our priority queue that we deem too high, and create an alarm for that threshold. If the number of messages in that queue goes beyond that point, it will notify the auto scaling group to spin up an instance. When the alarm goes back to OK, meaning that the number of messages is below the threshold, it will scale down as much as our auto scaling policy allows. Before we start, make sure any worker instances are terminated. The first thing we should do is create an alarm. From the CloudWatch console in AWS, click Alarms on the side bar and select Create Alarm. From the new Create Alarm dialog, select Queue Metrics under SQS Metrics. This will bring us to a Select Metric section. Type myinstance-tosolve-priority ApproximateNumberOfMessagesVisible into the search box and hit Enter. Select the checkbox for the only row and select Next. From the Define Alarm, make the following changes and then select Create Alarm: In the Name textbox, give the alarm a unique name. In the Description textbox, give the alarm a general description. In the Whenever section, set 0 to 1. In the Actions section, click Delete for the only Notification. In the Period drop-down, select 1 Minute. In the Statistic drop-down, select Sum. Now that we have our alarm in place, we need to create a launch configuration and auto scaling group that refers this alarm. Create a new launch configuration from the AWS Linux AMI with details as per your environment. However, set the user data to (note that acctarn, mykey, and mysecret need to be valid): #!/bin/bash[[ -d /home/ec2-user/.aws ]] && rm -rf /home/ec2-user/.aws/config ||mkdir /home/ec2-user/.awsecho $'[default]naws_access_key_id=mykeynaws_secret_access_key=mysecretnregion=us-east-1' > /home/ec2-user/.aws/configchown ec2-user:ec2-user /home/ec2-user/.aws -Rcat <<EOF >/usr/local/bin/fibsqs#!/bin/shfunction fibonacci {a=1b=1i=0while [ $i -lt $1 ]doprintf "%dn" $alet sum=$a+$blet a=$blet b=$sumlet i=$i+1done}number="$1"solved=$(fibonacci $number)parsed_solve=$(echo $solved | sed 's/n/ /g')aws sqs send-message --queue-url https://queue.amazonaws.com/acctarn/myinstance-solved --message-body "${parsed_solve}"exit 0EOFchown ec2-user:ec2-user /usr/local/bin/fibsqschmod +x /usr/local/bin/fibsqsyum install -y libxml2 libxml2-devel libxslt libxslt-devel gcc ruby-devel>/dev/null 2>&1gem install nokogiri -- --use-system-libraries >/dev/null 2>&1gem install shoryuken >/dev/null 2>&1cat <<EOF >/home/ec2-user/config.ymlaws:access_key_id: mykeysecret_access_key: mysecretregion: us-east-1 # or <%= ENV['AWS_REGION'] %>receive_message:attributes:- receive_count- sent_atconcurrency: 25, # The number of allocated threads to process messages.Default 25delay: 25, # The delay in seconds to pause a queue when it'sempty. 
Default 0queues:- [myinstance-tosolve-priority, 2]- [myinstance-tosolve, 1]EOFcat <<EOF >/home/ec2-user/worker.rbclass MyWorkerinclude Shoryuken::Workershoryuken_options queue: 'myinstance-tosolve', auto_delete: truedef perform(sqs_msg, body)puts "normal: #{body}"%x[/usr/local/bin/fibsqs #{body}]endendclass MyFastWorkerinclude Shoryuken::Workershoryuken_options queue: 'myinstance-tosolve-priority', auto_delete:truedef perform(sqs_msg, body)puts "priority: #{body}"%x[/usr/local/bin/fibsqs #{body}]endendEOFchown ec2-user:ec2-user /home/ec2-user/worker.rb /home/ec2-user/config.ymlscreen -dm su - ec2-user -c 'shoryuken -r /home/ec2-user/worker.rb -C /home/ec2-user/config.yml' Next, create an auto scaling group that uses the launch configuration we just created. The rest of the details for the auto scaling group are as per your environment. However, set it to start with 0 instances and do not set it to receive traffic from a load balancer. Once the auto scaling group has been created, select it from the EC2 console and select Scaling Policies. From here, click Add Policy to create a policy similar to the one shown in the following screenshot and click Create: Next, we get to trigger the alarm. To do this, we will again submit random numbers into both the myinstance-tosolve and myinstance-tosolve-priority queues: [ec2-user@ip-10-203-10-79 ~]$ [[ -d ~/.aws ]] && rm -rf ~/.aws/config ||mkdir ~/.aws[ec2-user@ip-10-203-10-79 ~]$ echo $'[default]naws_access_key_id=mykeynaws_secret_access_key=mysecretnregion=us-east-1' > .aws/config[ec2-user@ip-10-203-10-79 ~]$ for i in {1..100}; dovalue=$(shuf -i 1-50 -n 1)aws sqs send-message --queue-url https://queue.amazonaws.com/acctarn/myinstance-tosolve --message-body ${value} >/dev/null 2>&1done[ec2-user@ip-10-203-10-79 ~]$ for i in {1..100}; dovalue=$(shuf -i 1-50 -n 1)aws sqs send-message --queue-url https://queue.amazonaws.com/acctarn/myinstance-tosolvepriority--message-body ${value} >/dev/null 2>&1done After five minutes, the alarm will go into effect and our auto scaling group will launch an instance to respond to it. This can be viewed from the Scaling History tab for the auto scaling group in the EC2 console. Even though our alarm is set to trigger after one minute, CloudWatch only updates in intervals of five minutes. This is why our wait time was not as short as our alarm. Our auto scaling group has now responded to the alarm by launching an instance. Launching an instance by itself will not resolve this, but using the user data from the Launch Configuration, it should configure itself to clear out the queue, solve the fibonacci of the message, and finally submit it to the myinstance-solved queue. If this is successful, our myinstance-tosolve-priority queue should get emptied out. We can verify from the SQS console as before. And finally, our alarm in CloudWatch is back to an OK status. We are now stuck with the instance because we have not set any decrease policy. I won't cover this in detail, but to set it, we would create a new alarm that triggers when the message count is a lower number such as 0, and set the auto scaling group to decrease the instance count when that alarm is triggered. This would allow us to scale out when we are over the threshold, and scale in when we are under the threshold. This completes the final pattern for data processing. 
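The alarm and scaling policy above were configured through the console; for repeatable environments, the same alarm can also be defined in code. The following is a hedged boto3 sketch of an equivalent alarm definition: the alarm name is arbitrary, the comparison operator reflects the threshold described above, and the scaling policy ARN is a placeholder that must already exist for AlarmActions to do anything:

# boto3 sketch of the queue-depth alarm created in the console above.
# The scaling policy ARN is a placeholder; the policy must already exist.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="myinstance-tosolve-priority-depth",
    AlarmDescription="Scale out workers when the priority queue backs up",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "myinstance-tosolve-priority"}],
    Statistic="Sum",
    Period=60,                     # one-minute periods, as in the console setup
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:EXAMPLE"],
)

A matching scale-in alarm (for example, one that triggers when the visible message count drops to zero) can be defined the same way, which is exactly the decrease policy left as an exercise above.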
Summary In this article, in the queuing chain pattern, we walked through creating independent systems that use the Amazon-provided SQS service that solve fibonacci numbers without interacting with each other directly. Then, we took the topic even deeper in the job observer pattern, and covered how to tie in auto scaling policies and alarms from the CloudWatch service to scale out when the priority queue gets too deep. Resources for Article: Further resources on this subject: In the Cloud [Article] Mobile Administration [Article] Testing Your Recipes and Getting Started with ChefSpec [Article]

Algorithmic Trading

Packt
29 Apr 2015
16 min read
In this article by James Ma Weiming, author of the book Mastering Python for Finance , we will see how algorithmic trading automates the systematic trading process, where orders are executed at the best price possible based on a variety of factors, such as pricing, timing, and volume. Some brokerage firms may offer an application programming interface (API) as part of their service offering to customers who wish to deploy their own trading algorithms. For developing an algorithmic trading system, it must be highly robust and handle any point of failure during the order execution. Network configuration, hardware, memory management and speed, and user experience are some factors to be considered when designing a system in executing orders. Designing larger systems inevitably add complexity to the framework. As soon as a position in a market is opened, it is subjected to various types of risk, such as market risk. To preserve the trading capital as much as possible, it is important to incorporate risk management measures to the trading system. Perhaps the most common risk measure used in the financial industry is the value-at-risk (VaR) technique. We will discuss the beauty and flaws of VaR, and how it can be incorporated into our trading system that we will develop in this article. In this article, we will cover the following topics: An overview of algorithmic trading List of brokers and system vendors with public API Choosing a programming language for a trading system Setting up API access on Interactive Brokers (IB) trading platform Using the IbPy module to interact with IB Trader WorkStation (TWS) Introduction to algorithmic trading In the 1990s, exchanges had already begun to use electronic trading systems. By 1997, 44 exchanges worldwide used automated systems for trading futures and options with more exchanges in the process of developing automated technology. Exchanges such as the Chicago Board of Trade (CBOT) and the London International Financial Futures and Options Exchange (LIFFE) used their electronic trading systems as an after-hours complement to traditional open outcry trading in pits, giving traders 24-hour access to the exchange's risk management tools. With improvements in technology, technology-based trading became less expensive, fueling the growth of trading platforms that are faster and powerful. Higher reliability of order execution and lower rates of message transmission error deepened the reliance of technology by financial institutions. The majority of asset managers, proprietary traders, and market makers have since moved from the trading pits to electronic trading floors. As systematic or computerized trading became more commonplace, speed became the most important factor in determining the outcome of a trade. Quants utilizing sophisticated fundamental models are able to recompute fair values of trading products on the fly and execute trading decisions, enabling them to reap profits at the expense of fundamental traders using traditional tools. This gave way to the term high-frequency trading (HFT) that relies on fast computers to execute the trading decisions before anyone else can. HFT has evolved into a billion-dollar industry. Algorithmic trading refers to the automation of the systematic trading process, where the order execution is heavily optimized to give the best price possible. It is not part of the portfolio allocation process. 
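Since VaR is flagged above as the risk measure we will eventually fold into the trading system, a small illustration may help fix the idea. The following is a minimal historical-simulation VaR sketch, not the implementation developed later in the book: it assumes you already have a series of daily portfolio returns, and the randomly generated returns exist only so that the snippet runs on its own:

# Minimal historical-simulation VaR sketch; illustrative only.
# Assumes 'daily_returns' holds simple daily returns of the portfolio.
import numpy as np

def historical_var(daily_returns, confidence=0.95, portfolio_value=100000.0):
    """Loss level not expected to be exceeded at the given confidence."""
    losses = -np.asarray(daily_returns)
    var_return = np.percentile(losses, confidence * 100)
    return portfolio_value * var_return

if __name__ == "__main__":
    np.random.seed(1)
    returns = np.random.normal(loc=0.0005, scale=0.01, size=750)  # stand-in data
    print("1-day 95%% VaR: $%.2f" % historical_var(returns))

A production risk module would of course work from real position and market data, but the calculation itself is no more complicated than this.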
Banks, hedge funds, brokerage firms, clearing firms, and trading firms typically have their servers placed right next to the electronic exchange to receive the latest market prices and to perform the fastest order execution where possible. They bring enormous trading volumes to the exchange. Anyone who wishes to participate in low-latency, high-volume trading activities, such as complex event processing or capturing fleeting price discrepancies, by acquiring exchange connectivity may do so in the form of co-location, where his or her server hardware can be placed on a rack right next to the exchange for a fee. The Financial Information Exchange (FIX) protocol is the industry standard for electronic communications with the exchange from the private server for direct market access (DMA) to real-time information. C++ is the common choice of programming language for trading over the FIX protocol, though other languages, such as .NET framework common language and Java, can be used. Before creating an algorithmic trading platform, you would need to assess various factors, such as speed and ease of learning, before deciding on a specific language for the purpose.

Brokerage firms would provide a trading platform of some sort to their customers for them to execute orders on selected exchanges in return for the commission fees. Some brokerage firms may offer an API as part of their service offering to technically inclined customers who wish to run their own trading algorithms. In most circumstances, customers may also choose from a number of commercial trading platforms offered by third-party vendors. Some of these trading platforms may also offer API access to route orders electronically to the exchange. It is important to read the API documentation beforehand to understand the technical capabilities offered by your broker and to formulate an approach in developing an algorithmic trading system.

List of trading platforms with public API

The following list shows some brokers and trading platform vendors who have their API documentation publicly available, along with the programming languages supported by each:

Interactive Brokers: https://www.interactivebrokers.com/en/index.php?f=1325 (C++, Posix C++, Java, and Visual Basic for ActiveX)
E*Trade: https://developer.etrade.com (Java, PHP, and C++)
IG: http://labs.ig.com/ (REST, Java, FIX, and Microsoft .NET Framework 4.0)
Tradier: https://developer.tradier.com (Java, Perl, Python, and Ruby)
TradeKing: https://developers.tradeking.com (Java, Node.js, PHP, R, and Ruby)
Cunningham trading systems: http://www.ctsfutures.com/wiki/T4%20API%2040.MainPage.ashx (Microsoft .NET Framework 4.0)
CQG: http://cqg.com/Products/CQG-API.aspx (C#, C++, Excel, MATLAB, and VB.NET)
Trading technologies: https://developer.tradingtechnologies.com (Microsoft .NET Framework 4.0)
OANDA: http://developer.oanda.com (REST, Java, FIX, and MT4)

Which is the best programming language to use?

With many choices of programming languages available to interface with brokers or vendors, the question that comes naturally to anyone starting out in algorithmic trading platform development is: which language should I use? Well, the short answer is that there is really no best programming language. How your product will be developed, the performance metrics to follow, the costs involved, latency threshold, risk measures, and the expected user interface are pieces of the puzzle to be taken into consideration.
The risk manager, execution engine, and portfolio optimizer are some major components that will affect the design of your system. Your existing trading infrastructure, choice of operating system, programming language compiler capability, and available software tools poses further constraints on the system design, development, and deployment. System functionalities It is important to define the outcomes of your trading system. An outcome could be a research-based system that might be more concerned with obtaining high-quality data from data vendors, performing computations or running models, and evaluating a strategy through signal generation. Part of the research component might include a data-cleaning module or a backtesting interface to run a strategy with theoretical parameters over historical data. The CPU speed, memory size, and bandwidth are factors to be considered while designing our system. Another outcome could be an execution-based system that is more concerned with risk management and order handling features to ensure timely execution of multiple orders. The system must be highly robust and handle any point of failure during the order execution. As such, network configuration, hardware, memory management and speed, and user experience are some factors to be considered when designing a system in executing orders. A system may contain one or more of these functionalities. Designing larger systems inevitably add complexity to the framework. It is recommended that you choose one or more programming languages that can address and balance the development speed, ease of development, scalability, and reliability of your trading system. Algorithmic trading with Interactive Brokers and IbPy In this section, we will build a working algorithmic trading platform that will authenticate with Interactive Brokers (IB) and log in, retrieve the market data, and send orders. IB is one of the most popular brokers in the trading community and has a long history of API development. There are plenty of articles on the use of the API available on the Web. IB serves clients ranging from hedge funds to retail traders. Although the API does not support Python directly, Python wrappers such as IbPy are available to make the API calls to the IB interface. The IB API is unique to its own implementation, and every broker has its own API handling methods. Nevertheless, the documents and sample applications provided by your broker would demonstrate the core functionality of every API interface that can be easily integrated into an algorithmic trading system if designed properly. Getting Interactive Brokers' Trader WorkStation The official page for IB is https://www.interactivebrokers.com. Here, you can find a wealth of information regarding trading and investing for retail and institutional traders. In this section, we will take a look at how to get the Trader WorkStation X (TWS) installed and running on your local workstation before setting up an algorithmic trading system using Python. Note that we will perform simulated trading on a demonstration account. If your trading strategy turns out to be profitable, head to the OPEN AN ACCOUNT section of the IB website to open a live trading account. Rules, regulations, market data fees, exchange fees, commissions, and other conditions are subjected to the broker of your choice. In addition, market conditions are vastly different from the simulated environment. 
You are encouraged to perform extensive testing on your algorithmic trading system before running on live markets. The following key steps describe how to install TWS on your local workstation, log in to the demonstration account, and set it up for API use: From IB's official website, navigate to TRADING, and then select Standalone TWS. Choose the installation executable that is suitable for your local workstation. TWS runs on Java; therefore, ensure that Java runtime plugin is already installed on your local workstation. Refer to the following screenshot: When prompted during the installation process, choose Trader_WorkStation_X and IB Gateway options. The Trader WorkStation X (TWS) is the trading platform with full order management functionality. The IB Gateway program accepts and processes the API connections without any order management features of the TWS. We will not cover the use of the IB Gateway, but you may find it useful later. Select the destination directory on your local workstation where TWS will place all the required files, as shown in the following screenshot: When the installation is completed, a TWS shortcut icon will appear together with your list of installed applications. Double-click on the icon to start the TWS program. When TWS starts, you will be prompted to enter your login credentials. To log in to the demonstration account, type edemo in the username field and demouser in the password field, as shown in the following screenshot: Once we have managed to load our demo account on TWS, we can now set up its API functionality. On the toolbar, click on Configure: Under the Configuration tree, open the API node to reveal further options. Select Settings. Note that Socket port is 7496, and we added the IP address of our workstation housing our algorithmic trading system to the list of trusted IP addresses, which in this case is 127.0.0.1. Ensure that the Enable ActiveX and Socket Clients option is selected to allow the socket connections to TWS: Click on OK to save all the changes. TWS is now ready to accept orders and market data requests from our algorithmic trading system. Getting IbPy – the IB API wrapper IbPy is an add-on module for Python that wraps the IB API. It is open source and can be found at https://github.com/blampe/IbPy. Head to this URL and download the source files. Unzip the source folder, and use Terminal to navigate to this directory. Type python setup.py install to install IbPy as part of the Python runtime environment. The use of IbPy is similar to the API calls, as documented on the IB website. The documentation for IbPy is at https://code.google.com/p/ibpy/w/list. A simple order routing mechanism In this section, we will start interacting with TWS using Python by establishing a connection and sending out a market order to the exchange. Once IbPy is installed, import the following necessary modules into our Python script: from ib.ext.Contract import Contractfrom ib.ext.Order import Orderfrom ib.opt import Connection Next, implement the logging functions to handle calls from the server. The error_handler method is invoked whenever the API encounters an error, which is accompanied with a message. The server_handler method is dedicated to handle all the other forms of returned API messages. The msg variable is a type of an ib.opt.message object and references the method calls, as defined by the IB API EWrapper methods. The API documentation can be accessed at https://www.interactivebrokers.com/en/software/api/api.htm. 
The following is the Python code for the server_handler method: def error_handler(msg):print "Server Error:", msgdef server_handler(msg):print "Server Msg:", msg.typeName, "-", msg We will place a sample order of the stock AAPL. The contract specifications of the order are defined by the Contract class object found in the ib.ext.Contract module. We will create a method called create_contract that returns a new instance of this object: def create_contract(symbol, sec_type, exch, prim_exch, curr):contract = Contract()contract.m_symbol = symbolcontract.m_secType = sec_typecontract.m_exchange = exchcontract.m_primaryExch = prim_exchcontract.m_currency = currreturn contract The Order class object is used to place an order with TWS. Let's define a method called create_order that will return a new instance of the object: def create_order(order_type, quantity, action):order = Order()order.m_orderType = order_typeorder.m_totalQuantity = quantityorder.m_action = actionreturn order After the required methods are created, we can then begin to script the main functionality. Let's initialize the required variables: if __name__ == "__main__":client_id = 100order_id = 1port = 7496tws_conn = None Note that the client_id variable is our assigned integer that identifies the instance of the client communicating with TWS. The order_id variable is our assigned integer that identifies the order queue number sent to TWS. Each new order requires this value to be incremented sequentially. The port number has the same value as defined in our API settings of TWS earlier. The tws_conn variable holds the connection value to TWS. Let's initialize this variable with an empty value for now. Let's use a try block that encapsulates the Connection.create method to handle the socket connections to TWS in a graceful manner: try:# Establish connection to TWS.tws_conn = Connection.create(port=port,clientId=client_id)tws_conn.connect()# Assign error handling function.tws_conn.register(error_handler, 'Error')# Assign server messages handling function.tws_conn.registerAll(server_handler)finally:# Disconnect from TWSif tws_conn is not None:tws_conn.disconnect() The port and clientId parameter fields define this connection. After the connection instance is created, the connect method will try to connect to TWS. When the connection to TWS has successfully opened, it is time to register listeners to receive notifications from the server. The register method associates a function handler to a particular event. The registerAll method associates a handler to all the messages generated. This is where the error_handler and server_handler methods declared earlier will be used for this occasion. Before sending our very first order of 100 shares of AAPL to the exchange, we will call the create_contract method to create a new contract object for AAPL. Then, we will call the create_order method to create a new Order object, to go long 100 shares. Finally, we will call the placeOrder method of the Connection class to send out this order to TWS: # Create a contract for AAPL stock using SMART order routing.aapl_contract = create_contract('AAPL','STK','SMART','SMART','USD')# Go long 100 shares of AAPLaapl_order = create_order('MKT', 100, 'BUY')# Place order on IB TWS.tws_conn.placeOrder(order_id, aapl_contract, aapl_order) That's it! Let's run our Python script. 
We should get a similar output as follows: Server Error: <error id=-1, errorCode=2104, errorMsg=Market data farmconnection is OK:ibdemo>Server Response: error, <error id=-1, errorCode=2104, errorMsg=Marketdata farm connection is OK:ibdemo>Server Version: 75TWS Time at connection:20141210 23:14:17 CSTServer Msg: managedAccounts - <managedAccounts accountsList=DU15200>Server Msg: nextValidId - <nextValidId orderId=1>Server Error: <error id=-1, errorCode=2104, errorMsg=Market data farmconnection is OK:ibdemo>Server Msg: error - <error id=-1, errorCode=2104, errorMsg=Market datafarm connection is OK:ibdemo>Server Error: <error id=-1, errorCode=2107, errorMsg=HMDS data farmconnection is inactive but should be available upon demand.demohmds>Server Msg: error - <error id=-1, errorCode=2107, errorMsg=HMDS data farmconnection is inactive but should be available upon demand.demohmds> Basically, what the error messages say is that there are no errors and the connections are OK. Should the simulated order be executed successfully during market trading hours, the trade will be reflected in TWS: The full source code of our implementation is given as follows: """ A Simple Order Routing Mechanism """from ib.ext.Contract import Contractfrom ib.ext.Order import Orderfrom ib.opt import Connectiondef error_handler(msg):print "Server Error:", msgdef server_handler(msg):print "Server Msg:", msg.typeName, "-", msgdef create_contract(symbol, sec_type, exch, prim_exch, curr):contract = Contract()contract.m_symbol = symbolcontract.m_secType = sec_typecontract.m_exchange = exchcontract.m_primaryExch = prim_exchcontract.m_currency = currreturn contractdef create_order(order_type, quantity, action):order = Order()order.m_orderType = order_typeorder.m_totalQuantity = quantityorder.m_action = actionreturn orderif __name__ == "__main__":client_id = 1order_id = 119port = 7496tws_conn = Nonetry:# Establish connection to TWS.tws_conn = Connection.create(port=port,clientId=client_id)tws_conn.connect()# Assign error handling function.tws_conn.register(error_handler, 'Error')# Assign server messages handling function.tws_conn.registerAll(server_handler)# Create AAPL contract and send orderaapl_contract = create_contract('AAPL','STK','SMART','SMART','USD')# Go long 100 shares of AAPLaapl_order = create_order('MKT', 100, 'BUY')# Place order on IB TWS.tws_conn.placeOrder(order_id, aapl_contract, aapl_order)finally:# Disconnect from TWSif tws_conn is not None:tws_conn.disconnect() Summary In this article, we were introduced to the evolution of trading from the pits to the electronic trading platform, and learned how algorithmic trading came about. We looked at some brokers offering API access to their trading service offering. To help us get started on our journey in developing an algorithmic trading system, we used the TWS of IB and the IbPy Python module. In our first trading program, we successfully sent an order to our broker through the TWS API using a demonstration account. Resources for Article: Prototyping Arduino Projects using Python Python functions – Avoid repeating code Pentesting Using Python

Hacking a Raspberry Pi project? Understand electronics first!

Packt
29 Apr 2015
20 min read
In this article, by Rushi Gajjar, author of the book Raspberry Pi Sensors, you will see the basic requirements needed for building the RasPi projects. You can't spend even a day without electronics, can you? Electronics is everywhere, from your toothbrush to cars and in aircrafts and spaceships too. This article will help you understand the concepts of electronics that can be very useful while working with the RasPi. You might have read many electronics-related books, and they might have bored you with concepts when you really wanted to create or build projects. I believe that there must be a reason for explanations being given about electronics and its applications. Once you know about the electronics, we will walk through the communication protocols and their uses with respect to communication among electronic components and different techniques to do it. Useful tips and precautions are listed before starting to work with GPIOs on the RasPi. Then, you will understand the functionalities of GPIO and blink the LED using shell, Python, and C code. Let's cover some of the fundamentals of electronics. (For more resources related to this topic, see here.) Basic terminologies of electronics There are numerous terminologies used in the world of electronics. From the hardware to the software, there are millions of concepts that are used to create astonishing products and projects. You already know that the RasPi is a single-board computer that contains plentiful electronic components built in, which makes us very comfortable to control and interface the different electronic devices connected through its GPIO port. In general, when we talk about electronics, it is just the hardware or a circuit made up of several Integrated Circuits (ICs) with different resistors, capacitors, inductors, and many more components. But that is not always the case; when we build our hardware with programmable ICs, we also need to take care of internal programming (the software). For example, in a microcontroller or microprocessor, or even in the RasPi's case, we can feed the program (technically, permanently burn/dump the programs) into the ICs so that when the IC is powered up, it follows the steps written in the program and behaves the way we want. This is how robots, your washing machines, and other home appliances work. All of these appliances have different design complexities, which depends on their application. There are some functions, which can be performed by both software and hardware. The designer has to analyze the trade-off by experimenting on both; for example, the decoder function can be written in the software and can also be implemented on the hardware by connecting logical ICs. The developer has to analyze the speed, size (in both the hardware and the software), complexity, and many more parameters to design these kinds of functions. The point of discussing these theories is to get an idea on how complex electronics can be. It is very important for you to know these terminologies because you will need them frequently while building the RasPi projects. Voltage Who discovered voltage? Okay, that's not important now, let's understand it first. The basic concept follows the physics behind the flow of water. Water can flow in two ways; one is a waterfall (for example, from a mountain top to the ground) and the second is forceful flow using a water pump. The concept behind understanding voltage is similar. 
Voltage is the potential difference between two points, which means that a voltage difference allows the flow of charges (electrons) from the higher potential to the lower potential. To understand the preceding example, consider lightning, which can be compared to a waterfall, and batteries, which can be compared to a water pump. When batteries are connected to a circuit, chemical reactions within them pump the flow of charges from the positive terminal to the negative terminal. Voltage is always mentioned in volts (V). An AA battery cell usually supplies 1.5V. By the way, the term voltage was named after the great scientist Alessandro Volta, who invented the voltaic cell, which was then known as a battery cell.
Current
Current is the flow of charges (electrons). Whenever a voltage difference is created, it causes current to flow in a fixed direction from the positive (higher) terminal to the negative (lower) terminal (known as conventional current). Current is measured in amperes (A). The electron current flows from the negative terminal of the battery to the positive terminal. To prevent confusion, we will follow the conventional current, which is from the positive terminal to the negative terminal of the battery or the source.
Resistor
The meaning of the word "resist" in the Oxford dictionary is "to try to stop or to prevent." As the definition says, a resistor limits the flow of current. When current flows through a resistor, there is a voltage drop across it. This drop depends directly on the amount of current flowing through the resistor and the value of its resistance. The formula used to calculate the voltage drop across a resistor (or in the circuit) is known as Ohm's law (V = I * R). Resistance is measured in ohms (Ω). Let's see how this voltage drop is calculated with an example: if the resistance is 10Ω and the current flowing through the resistor is 1A, then the voltage drop across the resistor is 10V. Here is another example: when we connect LEDs to a 5V supply, we connect a 330Ω resistor in series with the LEDs to prevent them from blowing due to excessive current. The resistor drops some voltage across itself and safeguards the LEDs. We will use resistors extensively to develop our projects.
Capacitor
A resistor dissipates energy in the form of heat. In contrast, a capacitor stores energy between its two conductive plates. Capacitors are often used to smooth the supplied voltage in filter circuits and to produce a clean signal in amplifier circuits. Explaining the concept of capacitance in depth would be too hefty for this article, so let me come to the main point: when we have batteries to store energy, why do we need capacitors in our circuits? There are several benefits of using a capacitor in a circuit. Many books will tell you that it acts as a filter or a surge suppressor, and they will use terms such as power smoothing, decoupling, DC blocking, and so on. In our applications, when we use capacitors with sensors, they hold the voltage level for some time so that the microprocessor has enough time to read that voltage value. A sensor's output varies a lot, and it needs to be stable for as long as the microprocessor is reading it to avoid erroneous calculations. The holding time of a capacitor depends on the RC time constant, which will be explained when we actually use it.
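Both Ohm's law and the RC time constant are easy to experiment with in a few lines of code before touching any hardware. The following minimal Python sketch applies Ohm's law to the LED example above and computes an RC time constant; the LED forward drop and the R and C values are illustrative assumptions, not values from this article.

# Minimal sketch: Ohm's law and an RC time constant (component values are assumptions)
SUPPLY_V = 5.0         # supply voltage from the LED example above
LED_DROP_V = 2.0       # assumed forward voltage of a typical LED
RESISTOR_OHMS = 330.0  # series resistor from the LED example above

# Ohm's law: I = V / R, where V is the voltage dropped across the resistor
led_current_a = (SUPPLY_V - LED_DROP_V) / RESISTOR_OHMS
print("LED current: %.1f mA" % (led_current_a * 1000))   # roughly 9.1 mA

# RC time constant: tau = R * C, in seconds (assumed 10 kohm and 100 uF)
tau_s = 10e3 * 100e-6
print("RC time constant: %.1f s" % tau_s)                # 1.0 s

Running it shows why a 330Ω resistor keeps the LED current well under typical limits, and how long a 10 kΩ/100 µF combination holds its charge.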
Open circuit and short circuit Now, there is an interesting point to note: when there is voltage available on the terminal but no components are connected across the terminals, there is no current flow, which is often called an open circuit. In contrast, when two terminals are connected, with or without a component, and charge is allowed to flow, it's called a short circuit, connected circuit, or closed circuit. Here's a warning for you: do not short (directly connect) the two terminals of a power supply such as batteries, adaptors, and chargers. This may cause serious damages, which include fire damage and component failure. If we connect a conducting wire with no resistance, let's see what Ohm's law results in: R = 0Ω then I = V/0, so I = ∞A. In theory, this is called infinite (uncountable), and practically, it means a fire or a blast! Series and parallel connections In electrical theory, when the current flowing through a component does not divide into paths, it's a series connection. Also, if the current flowing through each component is the same then those components are said to be in series. If the voltage across all the components is the same, then the connection is said to be in parallel. In a circuit, there can be combination of series and parallel connections. Therefore, a circuit may not be purely a series or a parallel circuit. Let's study the circuits shown in the following diagram: Series and parallel connections At the first glance, this figure looks complex with many notations, but let's look at each component separately. The figure on the left is a series connection of components. The battery supplies voltage (V) and current (I). The direction of the current flow is shown as clockwise. As explained, in a series connection, the current flowing through every component is the same, but the voltage values across all the components are different. Hence, V = V1 + V2 + V3. For example, if the battery supplies 12V, then the voltage across each resistor is 4V. The current flowing through each resistor is 4 mA (because V = IR and R = R1 + R2 + R3 = 3K). The figure on the right represents a parallel connection. Here, each of the components gets the same voltage but the current is divided into different paths. The current flowing from the positive terminal of the battery is I, which is divided into I1 and I2. When I1 flows to the next node, it is again divided into two parts and flown through R5 and R6. Therefore, in a parallel circuit, I = I1 + I2. The voltage remains the same across all the resistors. For example, if the battery supplies 12V, the voltage across all the resistors is 12V but the current through all the resistors will be different. In the parallel connection example, the current flown through each circuit can be calculated by applying the equations of current division. Give it a try to calculate! When there is a combination of series and parallel circuits, it needs more calculations and analysis. Kirchhoff's laws, nodes, and mesh equations can be used to solve such kinds of circuits. All of that is too complex to explain in this article; you can refer any standard circuits-theory-related books and gain expertise in it. Kirchhoff's current law: At any node (junction) in an electrical circuit, the sum of currents flowing into that node is equal to the sum of currents flowing out of that node. Kirchhoff's voltage law: The directed sum of the electrical potential differences (voltage) around any closed network is zero. 
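The series and parallel rules are easy to verify in code as well. Here is a minimal Python sketch, assuming the 12V battery and three 1 kΩ resistors from the series example above; it reproduces the 4 mA result and also shows what the same three resistors give when connected in parallel.

# Minimal sketch: series and parallel resistance (12 V and three 1 kohm resistors assumed)
def series_resistance(resistors):
    # In series, the same current flows through every resistor: R = R1 + R2 + ...
    return sum(resistors)

def parallel_resistance(resistors):
    # In parallel, every resistor sees the same voltage: 1/R = 1/R1 + 1/R2 + ...
    return 1.0 / sum(1.0 / r for r in resistors)

supply_v = 12.0
resistors = [1000.0, 1000.0, 1000.0]

r_series = series_resistance(resistors)
print("Series:   R = %.0f ohm, I = %.0f mA" % (r_series, supply_v / r_series * 1000))      # 3000 ohm, 4 mA
r_parallel = parallel_resistance(resistors)
print("Parallel: R = %.1f ohm, I = %.0f mA" % (r_parallel, supply_v / r_parallel * 1000))  # 333.3 ohm, 36 mA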
Pull-up and pull-down resistors
Pull-up and pull-down resistors are among the important concepts in electronic systems design. As the names say, there are two types of pulling resistors: pull-up and pull-down. Both have the same functionality, but the difference is that a pull-up resistor pulls the terminal to the supply voltage, while a pull-down resistor pulls the terminal to the ground or the common line. The purpose of connecting a pulling resistor to a node or terminal is to bring the logic level back to a default value when no input is present on that particular terminal. The benefit of including a pull-up or pull-down resistor is that it makes the circuitry far less susceptible to noise: the logic level (1 or 0) cannot be changed by a small voltage variation (due to noise) on the terminal. Let's take a look at the example shown in the following figure. It shows a pull-up example with a NOT gate (a NOT gate gives an inverted output on its OUT terminal; therefore, if logic one is the input, the output is logic zero). We will consider the effects with and without the pull-up resistor. The same reasoning applies to a pull-down resistor.
Connection with and without pull-up resistors
In general, logic gates have a high impedance at their input terminal, so when there is no connection on the input terminal, it is termed floating. Now, in the preceding figure, the leftmost connection is not recommended because when the switch is open (OFF state), it leaves the input terminal floating, and any noise can change the input state of the NOT gate. The noise can come from almost anywhere; even the open terminal can act as an antenna and create noise on the pin of the NOT gate. The circuit shown in the middle is a pull-up circuit without a resistor, and it is highly recommended not to use it. This kind of connection can be called a pull-up but should never be used: when the switch is closed (ON state), VCC gets a direct path to ground, which is the same as a short circuit. A large amount of current will flow from VCC to ground, and this can damage your circuit. The rightmost figure shows the best way to pull up, because there is a resistor across which some voltage drop will occur. When the switch is open, the terminal of the NOT gate will be pulled up to VCC, which is the default. When the switch is closed, the input terminal of the NOT gate will be connected to ground and it will see the logic zero state. The current flowing through the resistor will be nominal this time. For example, if VCC = 5V and R7 = 1K, then by I = V/R, I = 5mA, which is in the safe region. For the pull-down circuit example, the switch and the resistor can be interchanged, so that the resistor is connected between the ground and the input terminal of the NOT gate. When using sensors and ICs, keep in mind that if the datasheet or technical manual calls for pull-ups or pull-downs, it is recommended to use them wherever needed.
Communication protocols
That has been a lot of theory so far. A microprocessor can have numerous components, including ICs and digital sensors, as peripherals. These peripheral devices can hold a large amount of data that needs to be sent to the processor. How do they communicate? How does the processor understand that data is coming in and that it is being sent by the sensor? There is a serial, or parallel, data-line connection between the ICs and a microprocessor.
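Before moving on to those protocols, it is worth noting that on the RasPi the pull-up idea does not always need an external resistor: the GPIO pins have software-configurable internal pull resistors. Here is a minimal Python sketch using the common RPi.GPIO package; the pin number and the switch wiring are assumptions for illustration only.

# Minimal sketch: enable the internal pull-up on a GPIO input (pin number is an assumption)
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)   # use physical pin numbering
SWITCH_PIN = 7             # assumed: a switch wired between this pin and ground

# With the pull-up enabled, the pin reads 1 (the default) when the switch is open
# and 0 when the switch is closed, just like the resistor example above.
GPIO.setup(SWITCH_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

print("Switch is " + ("open" if GPIO.input(SWITCH_PIN) else "closed"))
GPIO.cleanup()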
Parallel connections are faster than the serial one but are less preferred because they require more lines, for example, 8, 16, or more than that. A PCI bus can be an example of a parallel communication. Usually in a complex or high-density circuit, the processor is connected to many peripherals, and in that case, we cannot have that many free pins/lines to connect an additional single IC. Serial communication requires up to four lines, depending on the protocol used. Still, it cannot be said that serial communication is better than parallel, but serial is preferred when low pin counts come into the picture. In serial communication, data is sent over frames or packets. Large data is broken into chunks and sent over the lines by a frame or a packet. Now, what is a protocol? A protocol is a set of rules that need to be followed while interfacing the ICs to the microprocessor, and it's not limited to the connection. The protocol also defines the data frame structures, frame lengths, voltage levels, data types, data rates, and so on. There are many standard serial protocols such as UART, FireWire, Ethernet, SPI, I2C, and more. The RasPi 1 models B, A+, B+, and the RasPi 2 model B have one SPI pin, one I2C pin, and one UART pin available on the expansion port. We will see these protocols one by one. UART UART is a very common interface, or protocol, that is found in almost every PC or microprocessor. UART is the abbreviated form of Universal Asynchronous Receiver and Transmitter. This is also known as the RS-232 standard. This protocol is full-duplex and a complete standard, including electrical, mechanical, and physical characteristics for a particular instance of communication. When data is sent over a bus, the data levels need to be changed to suit the RS-232 bus levels. Varying voltages are sent by a transmitter on a bus. A voltage value greater than 3V is logic zero, while a voltage value less than -3V is logic one. Values between -3V to 3V are called as undefined states. The microprocessor sends the data to the transistor-transistor logic (TTL) level; when we send them to the bus, the voltage levels should be increased to the RS-232 standard. This means that to convert voltage from logic levels of a microprocessor (0V and 5V) to these levels and back, we need a level shifter IC such as MAX232. The data is sent through a DB9 connector and an RS-232 cable. Level shifting is useful when we communicate over a long distance. What happens when we need to connect without these additional level shifter ICs? This connection is called a NULL connection, as shown in the following figure. It can be observed that the transmit and receive pins of a transmitter are cross-connected, and the ground pins are shared. This can be useful in short-distance communication. In UART, it is very important that the baud rates (symbols transferred per second) should match between the transmitter and the receiver. Most of the time, we will be using 9600 or 115200 as the baud rates. The typical frame of UART communication consists of a start bit (usually 0, which tells receiver that the data stream is about to start), data (generally 8 bit), and a stop bit (usually 1, which tells receiver that the transmission is over). Null UART connection The following figure represents the UART pins on the GPIO header of the RasPi board. Pin 8 and 10 on the RasPi GPIO pin header are transmit and receive pins respectively. Many sensors do have the UART communication protocol enabled on their output pins. 
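As a taste of what reading such a sensor can look like, here is a minimal Python sketch that uses the pyserial package to read data from the RasPi's UART; the /dev/ttyAMA0 device path and the 9600 baud rate are assumptions that depend on your Pi model, OS configuration, and the sensor at the other end.

# Minimal sketch: read newline-terminated data from the RasPi UART (pyserial assumed installed)
import serial

# /dev/ttyAMA0 is the usual UART device on early Pi models; adjust for your setup
port = serial.Serial("/dev/ttyAMA0", baudrate=9600, timeout=1.0)

try:
    for _ in range(10):
        line = port.readline()   # returns an empty string on timeout
        if line:
            print(line.strip())
finally:
    port.close()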
Sensors such as gas sensors (MQ-2) use UART communication to communicate with the RasPi. Another sensor that works on UART is the nine-axis motion sensor from LP Research (LPMS-UARTL), which allows you to build quadcopters on your own by providing a three-axis gyroscope, three-axis magnetometer, and three-axis accelerometer. The TMP104 sensor from Texas Instruments is a digital temperature sensor that comes with a UART interface. Here, the UART allows a daisy-chain topology (in which you connect one sensor's transmit to the receive of the second, the second's transmit to the third's receive, and so on, up to eight sensors). On the RasPi, an application program written in Python or C uses the UART driver to obtain the data coming from a sensor.
Serial Peripheral Interface
The Serial Peripheral Interface (SPI) is a full-duplex, short-distance, single-master protocol. Unlike UART, it is a synchronous communication protocol. One of the simplest connections is the single master-slave connection shown in the next figure. There are usually four wires in total: clock, Master In Slave Out (MISO), Master Out Slave In (MOSI), and chip select (CS). Have a look at the following image:
Simple master-slave SPI connections
The master always initiates the data frame and clock. The clock frequency can be varied by the master according to the slave's performance and capabilities; it ranges from 1 MHz to 40 MHz, and higher too. Some slave devices trigger on an active-low input, which means that whenever the master drives the CS pin to logic zero, the slave chip is turned ON and then accepts the clock and data from the master. There can be multiple slaves connected to a master device. To connect multiple slaves, we need additional CS lines from the master to the slaves, which becomes one of the disadvantages of the SPI protocol as the number of slaves grows. There is no slave acknowledgement sent to the master, so the master sends data without knowing whether the slave has received it or not. If both the master and the slave are programmable, then at runtime (while executing the program), the master and slave roles can be interchanged. For the RasPi, we can easily write the SPI communication code in either Python or C. The location of the SPI pins on RasPi 1 models A+ and B+ and RasPi 2 model B can be seen in the following diagram. This diagram is still valid for RasPi 1 model B:
Inter-Integrated Circuit
Inter-Integrated Circuit (I2C) is a half-duplex (a type of communication where, whenever the sender sends a command, the receiver just listens and cannot transmit anything, and vice versa), multimaster protocol that requires only two wires, known as data (SDA) and clock (SCL). The I2C protocol is patented by Philips, and whenever an IC manufacturer wants to include I2C in their chip, they need a license. Many of the ICs and peripherals around us are integrated with the I2C communication protocol. The I2C lines (SDA and SCL) are always pulled up via resistors to the supply voltage. The I2C bus works at three speeds: high speed (3.4 Mbps), fast (400 kbps), and standard (up to 100 kbps). I2C communication is said to work up to 45 feet, but it's better to keep it under 10 feet. Each I2C device has a 7-bit or 10-bit address; using this address, the master can always connect to and send data meant for that particular slave.
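To make the addressing concrete, here is a minimal Python sketch using the smbus package on the RasPi to read a single register from an I2C slave; the bus number, the 0x48 slave address, and the register number are illustrative assumptions, so substitute the values from your own device's datasheet.

# Minimal sketch: read one register from an I2C slave (address and register are assumed)
import smbus

I2C_BUS = 1           # /dev/i2c-1 on most RasPi models
SLAVE_ADDRESS = 0x48  # assumed 7-bit address; check your sensor's datasheet
REGISTER = 0x00       # assumed register number

bus = smbus.SMBus(I2C_BUS)

# The master addresses the slave, then reads one byte from the given register
value = bus.read_byte_data(SLAVE_ADDRESS, REGISTER)
print("Register 0x%02X = 0x%02X" % (REGISTER, value))

bus.close()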
The slave device manufacturer provides you with the address to use when you are interfacing the device with the master. Data is received at every slave, but only that slave can take the data for which it is made. Using the address, the master reads the data available in the predefined data registers in the sensors, and processes it on its own. The general setup of an I2C bus configuration can be done as shown in the following diagram: I2C bus interface There are 16 x 2 character LCD modules available with the I2C interface in stores; you can just use them and program the RasPi accordingly. Usually, the LCD requires 8/4 wire parallel data bits, reset, read/write, and enable pins. The I2C pins are represented in the following image, and they can be located in the same place on all the RasPi models: The I2C protocol is the most widely used protocol among all when we talk about sensor interfacing. Silicon Labs' Si1141 is a proximity and brightness sensor that is nowadays used in mobile phones to provide the auto-brightness and near proximity features. You can purchase it and easily interface it with the RasPi. SHT20 from Sensirion also comes with the I2C protocol, and it can be used to measure temperature and humidity data. Stepper motor control can be done using I2C-based controllers, which can be interfaced with the RasPi. The most amazing thing is that if you have all of these sensors, then you can tie them to a single I2C, but with RasPi you can get the data! The modules with the I2C interface are available for low-pin-count devices. This is why serial communication is useful. These protocols are mostly used with the RasPi. The information given here about them is not that detailed, as numerous pages can be written on these protocols, but while programming the RasPi, this much information can help you build the projects. Summary In this article, you understood the electronics fundamentals that are really going to help you go ahead with a bit more complex projects. It was not all about electronics, but about all the essential concepts that are needed to build the RasPi projects. After covering the concepts of electronics, we took a dive into the communication protocols; it was interesting to know how the electronic devices work together. You learned that just as humans talk to each other in a common language, they also talk to each other using a common protocol. Resources for Article: Further resources on this subject: Testing Your Speed [article] Creating a 3D world to roam in [article] Webcam and Video Wizardry [article]

Working with Data in Forms

Packt
28 Apr 2015
13 min read
In this article by Mindaugas Pocius, the author of Microsoft Dynamics AX 2012 R3 Development Cookbook, explains about data organization in the forms. We will cover the following recipes: Using a number sequence handler Creating a custom filter control Creating a custom instant search filter (For more resources related to this topic, see here.) Using a number sequence handler Number sequences are widely used throughout the system as a part of the standard application. Dynamics AX also provides a special number sequence handler class to be used in forms. It is called NumberSeqFormHandler, and its purpose is to simplify the usage of record numbering on the user interface. Some of the standard Dynamics AX forms, such as Customers or Vendors, already have this feature implemented. This recipe shows you how to use the number sequence handler class. Although in this demonstration we will use an existing form, the same approach will be applied when creating brand-new forms. For demonstration purposes, we will use the existing Customer groups form located in Accounts receivable | Setup | Customers and change the Customer group field from manual to automatic numbering. How to do it... Carry out the following steps in order to complete this recipe: In the AOT, open the CustGroup form and add the following code snippet to its class declaration: NumberSeqFormHandler numberSeqFormHandler; Also, create a new method called numberSeqFormHandler() in the same form: NumberSeqFormHandler numberSeqFormHandler() {    if (!numberSeqFormHandler)    {        numberSeqFormHandler = NumberSeqFormHandler::newForm(            CustParameters::numRefCustGroupId().NumberSequenceId,            element,            CustGroup_ds,            fieldNum(CustGroup,CustGroup));    }    return numberSeqFormHandler; } In the same form, override the CustGroup data source's create() method with the following code snippet: void create(boolean _append = false) {    element.numberSeqFormHandler(        ).formMethodDataSourceCreatePre();       super(_append);      element.numberSeqFormHandler(        ).formMethodDataSourceCreate(); } Then, override its delete() method with the following code snippet: void delete() {    ttsBegin;      element.numberSeqFormHandler().formMethodDataSourceDelete();      super();      ttsCommit; } Then, override the data source's write() method with the following code snippet: void write() {    ttsBegin;      super();      element.numberSeqFormHandler().formMethodDataSourceWrite();      ttsCommit; } Similarly, override its validateWrite() method with the following code snippet: boolean validateWrite() {    boolean ret;      ret = super();      ret = element.numberSeqFormHandler(        ).formMethodDataSourceValidateWrite(ret) && ret;      return ret; } In the same data source, override its linkActive() method with the following code snippet: void linkActive() {    element.numberSeqFormHandler(        ).formMethodDataSourceLinkActive();      super(); } Finally, override the form's close() method with the following code snippet: void close() {    if (numberSeqFormHandler)    {        numberSeqFormHandler.formMethodClose();    }      super(); } In order to test the numbering, navigate to Accounts receivable | Setup | Customers | Customer groups and try to create several new records—the Customer group value will be generated automatically: How it works... First, we declare an object of type NumberSeqFormHandler in the form's class declaration. 
Then, we create a new corresponding form method called numberSeqFormHandler(), which instantiates the object if it is not instantiated yet and returns it. This method allows us to hold the handler creation code in one place and reuse it many times within the form. In this method, we use the newForm() constructor of the NumberSeqFormHandler class to create the numberSeqFormHandler object. It accepts the following arguments: The number sequence code ensures a proper format of the customer group numbering. Here, we call the numRefCustGroupId() helper method from the CustParameters table to find which number sequence code will be used when creating a new customer group record. The FormRun object, which represents the form itself. The form data source, where we need to apply the number sequence handler. The field ID into which the number sequence will be populated. Finally, we add the various NumberSeqFormHandler methods to the corresponding methods on the form's data source to ensure proper handling of the numbering when various events are triggered. Creating a custom filter control Filtering in forms in Dynamics AX is implemented in a variety of ways. As a part of the standard application, Dynamics AX provides various filtering options, such as Filter By Selection, Filter By Grid, or Advanced Filter/Sort that allows you to modify the underlying query of the currently displayed form. In addition to the standard filters, the Dynamics AX list pages normally allow quick filtering on most commonly used fields. Besides that, some of the existing forms have even more advanced filtering options, which allow users to quickly define complex search criteria. Although the latter option needs additional programming, it is more user-friendly than standard filtering and is a very common request in most of the Dynamics AX implementations. In this recipe, we will learn how to add custom filters to a form. We will use the Main accounts form as a basis and add a few custom filters, which will allow users to search for accounts based on their name and type. How to do it... 
Carry out the following steps in order to complete this recipe: In the AOT, locate the MainAccountListPage form and change the following property for its Filter group: Property Value Columns 2 In the same group, add a new StringEdit control with the following properties: Property Value Name FilterName AutoDeclaration Yes ExtendedDataType AccountName Add a new ComboBox control to the same group with the following properties: Property Value Name FilterType AutoDeclaration Yes EnumType DimensionLedgerAccountType Selection 10 Override the modified() methods for both the newly created controls with the following code snippet: boolean modified() {    boolean ret;      ret = super();      if (ret)    {        MainAccount_ds.executeQuery();    }      return ret; } After all modifications, in the AOT, the MainAccountListPage form will look similar to the following screenshot: In the same form, update the executeQuery() method of the MainAccount data source as follows: public void executeQuery() {    QueryBuildRange qbrName;    QueryBuildRange qbrType;      MainAccount::updateBalances();      qbrName = SysQuery::findOrCreateRange(        MainAccount_q.dataSourceTable(tableNum(MainAccount)),        fieldNum(MainAccount,Name));      qbrType = SysQuery::findOrCreateRange(        MainAccount_q.dataSourceTable(tableNum(MainAccount)),        fieldNum(MainAccount,Type));      if (FilterName.text())    {        qbrName.value(SysQuery::valueLike(queryValue(            FilterName.text())));    }    else  {        qbrName.value(SysQuery::valueUnlimited());    }      if (FilterType.selection() ==        DimensionLedgerAccountType::Blank)    {        qbrType.value(SysQuery::valueUnlimited());    }    else    {        qbrType.value(queryValue(FilterType.selection()));    }      super(); } In order to test the filters, navigate to General ledger | Common | Main accounts and change the values in the newly created filters—the account list will change reflecting the selected criteria: Click on the Advanced Filter/Sort button in the toolbar to inspect how the criteria was applied in the underlying query (note that although changing the filter values here will affect the search results, the earlier created filter controls will not reflect those changes): How it works... We start by changing the Columns property of the existing empty Filter group control to make sure all our controls are placed from the left to the right in one line. We add two new controls that represent the Account name and Main account type filters and enable them to be automatically declared for later usage in the code. We also override their modified() event methods to ensure that the MainAccount data source's query is re-executed whenever the controls' value change. All the code is placed in the executeQuery() method of the form's data source. The code has to be placed before super() to make sure the query is modified before fetching the data. Here, we declare and create two new QueryBuildRange objects, which represent the ranges on the query. We use the findOrCreateRange() method of the SysQuery application class to get the range object. This method is very useful and important, as it allows you to reuse previously created ranges. Next, we set the ranges' values. If the filter controls are blank, we use the valueUnlimited() method of the SysQuery application class to clear the ranges. If the user types some text into the filter controls, we pass those values to the query ranges. 
The global queryValue() function—which is actually a shortcut to SysQuery::value()—ensures that only safe characters are passed to the range. The SysQuery::valueLike() method adds the * character around the account name value to make sure that the search is done based on partial text. Note that the SysQuery helper class is very useful when working with queries, as it does all kinds of input data conversions to make sure they can be safely used. Here is a brief summary of few other useful methods in the SysQuery class: valueUnlimited(): This method returns a string representing an unlimited query range value, that is, no range at all. value(): This method converts an argument into a safe string. The global queryValue() method is a shortcut for this. valueNot(): This method converts an argument into a safe string and adds an inversion sign in front of it. Creating a custom instant search filter The standard form filters and majority of customized form filters in Dynamics AX are only applied once the user presses some button or key. It is acceptable in most cases, especially if multiple criteria are used. However, when the result retrieval speed and usage simplicity has priority over system performance, it is possible to set up the search so the record list is updated instantly when the user starts typing. In this recipe, to demonstrate the instant search, we will modify the Main accounts form. We will add a custom Account name filter, which will update the account list automatically when the user starts typing. How to do it... Carry out the following steps in order to complete this recipe: In the AOT, open the MainAccountListPage form and add a new StringEdit control with the following properties to the existing Filter group: Property Value Name FilterName AutoDeclaration Yes ExtendedDataType AccountName Override the control's textChange() method with the following code snippet: void textChange() {    super();      MainAccount_ds.executeQuery(); } On the same control, override the control's enter() method with the following code snippet: void enter() {    super();    this.setSelection(        strLen(this.text()),        strLen(this.text())); } Update the executeQuery() method of the MainAccount data source as follows: public void executeQuery() {    QueryBuildRange qbrName;      MainAccount::updateBalances();      qbrName = SysQuery::findOrCreateRange(        this.queryBuildDataSource(),        fieldNum(MainAccount,Name));      qbrName.value(        FilterName.text() ?        SysQuery::valueLike(queryValue(FilterName.text())) :        SysQuery::valueUnlimited());      super(); } In order to test the search, navigate to General ledger | Common | Main accounts and start typing into the Account name filter. Note how the account list is being filtered automatically: How it works... Firstly, we add a new control, which represents the Account name filter. Normally, the user's typing triggers the textChange() event method on the active control every time a character is entered. So, we override this method and add the code to re-execute the form's query whenever a new character is typed in. Next, we have to correct the cursor's behavior. Currently, once the user types in the first character, the search is executed and the system moves the focus out of this control and then moves back into the control selecting all the typed text. If the user continues typing, the existing text will be overwritten with the new character and the loop will continue. 
In order to get around this, we have to override the control's enter() event method. This method is called every time the control receives a focus whether it was done by a user's mouse, key, or by the system. Here, we call the setSelection() method. Normally, the purpose of this method is to mark a control's text or a part of it as selected. Its first argument specifies the beginning of the selection and the second one specifies the end. In this recipe, we are using this method in a slightly different way. We pass the length of the typed text as a first argument, which means the selection starts at the end of the text. We pass the same value as a second argument, which means that selection ends at the end of the text. It does not make any sense from the selection point of view, but it ensures that the cursor always stays at the end of the typed text allowing the user to continue typing. The last thing to do is to add some code to the executeQuery() method to change the query before it is executed. Modifying the query was discussed in detail in the Creating a custom filter control recipe. The only thing to note here is that we use the SysQuery::valueLike() helper method which adds * to the beginning and the end of the search string to make the search by a partial string. Note that the system's performance might be affected as the data search is executed every time the user types in a character. It is not recommended to use this approach for large tables. See also The Creating a custom filter control recipe Summary In this article, we learned how to add custom filters to forms to allow users to filter data and create record lists for quick data manipulation. We also learned how to build filter controls on forms and how to create custom instant search filters. Resources for Article: Further resources on this subject: Reporting Based on SSRS [article] Installing and Setting up Sure Step [article] Exploring Financial Reporting and Analysis [article]

Raspberry Pi and 1-Wire

Packt
28 Apr 2015
13 min read
In this article by Jack Creasey, author of Raspberry Pi Essentials, we will learn about the remote input/output technology and devices that can be used with the Raspberry Pi. We will also specifically learn about 1-wire, and how it can be interfaced with the Raspberry Pi. The concept of remote I/O has its limitations, for example, it requires locating the Pi where the interface work needs to be done—it can work well for many projects. However, it can be a pain to power the Pi in remote locations where you need the I/O to occur. The most obvious power solutions are: Battery-powered systems and, perhaps, solar cells to keep the unit functional over longer periods of time Power over Ethernet (POE), which provides data connection and power over the same Ethernet cable which achieved up to 100 meters, without the use of a repeater. AC/DC power supply where a local supply is available Connecting to Wi-Fi could also be a potential but problematic solution because attenuation through walls impacts reception and distance. Many projects run a headless and remote Pi to allow locating it closer to the data acquisition point. This strategy may require yet another computer system to provide the Human Machine Interface (HMI) to control a remote Raspberry Pi. (For more resources related to this topic, see here.) Remote I/O I’d like to introduce you to a very mature I/O bus as a possibility for some of your Raspberry Pi projects; it’s not fast, but it’s simple to use and can be exceptionally flexible. It is called 1-Wire, and it uses endpoint interface chips that require only two wires (a data/clock line and ground), and they are line powered apart from possessing a few advanced functionality devices. The data rate is usually 16 kbps and the 1-Wire single master driver will handle distances up to approximately 200 meters on simple telephone wire. The system was developed by Dallas Semiconductor back in 1990, and the technology is now owned by Maxim. I have a few 1-wire iButton memory chips from 1994 that still work just fine. While you can get 1-Wire products today that are supplied as surface mount chips, 1-Wire products really started with the practically indestructible iButtons. These consist of a stainless steel coin very similar to the small CR2032 coin batteries in common use today. They come in 3 mm and 6 mm thicknesses and can be attached to a key ring carrier. I’ll cover a Raspberry Pi installation to read these iButtons in this article. The following image shows the dimensions for the iButton, the key ring carriers, and some available reader contacts: The 1-Wire protocol The master provides all the timing and power when addressing and transferring data to and from 1-Wire devices. A 1-Wire bus looks like this: When the master is not driving the bus, it’s pulled high by a resistor, and all the connected devices have an internal capacitor, which allows them to store energy. When the master pulls the bus low to send data bits, the bus devices use their internal energy store just like a battery, which allows them to sense inbound data, and to drive the bus low when they need to return data. The following typical block diagram shows the internal structure of a 1-Wire device and the range of functions it could provide: There are lots of data sheets on the 1-Wire devices produced by Maxim, Microchip, and other processor manufacturers. It’s fun to go back to the 1989 patent (now expired) by Dallas and see how it was originally conceived (http://www.google.com/patents/US5210846). 
Another great resource to learn the protocol details is at http://pdfserv.maximintegrated.com/en/an/AN937.pdf. To look at a range of devices, go to http://www.maximintegrated.com/en/products/comms/one-wire.html. For now, all you need to know is that all the 1-Wire devices have a basic serial number capability that is used to identify and talk to a given device. This silicon serial number is globally unique. The initial transactions with a device involve reading a 64-bit data structure that contains a 1-byte family code (device type identifier), a 6-byte globally unique device serial number, and a 1-byte CRC field, as shown in the following diagram: The bus master reads the family code and serial number of each device on the bus and uses it to talk to individual devices when required. Raspberry Pi interface to 1-Wire There are three primary ways to interface to the 1-Wire protocol devices on the Raspberry Pi: W1-gpio kernel: This module provides bit bashing of a GPIO port to support the 1-Wire protocol. Because this module is not recommended for multidrop 1-Wire Microlans, we will not consider it further. DS9490R USB Busmaster interface: This is used in a commercial 1-Wire reader supplied by Maxim (there are third-party copies too) and will function on most desktop and laptop systems as well as the Raspberry Pi. For further information on this device, go to http://datasheets.maximintegrated.com/en/ds/DS9490-DS9490R.pdf. DS2482 I2C Busmaster interface: This is used in many commercial solutions for 1-Wire. Typically, the boards are somewhat unique since they are built for particular microcomputer versions. For example, there are variants produced for the Raspberry Pi and for Arduino. For further reading on these devices, go to http://www.maximintegrated.com/en/app-notes/index.mvp/id/3684. I chose a unique Raspberry Pi solution from AB Electronics based on the I2C 1-Wire DS2482-100 bridge. The following image shows the 1-Wire board with an RJ11 connector for the 1-Wire bus and the buffered 5V I2C connector pins shown next to it: For the older 26-pin GPIO header, go to https://www.abelectronics.co.uk/products/3/Raspberry-Pi/27/1-Wire-Pi, and for the newer 40-pin header, go to https://www.abelectronics.co.uk/products/17/Raspberry-Pi--Raspberry-Pi-2-Model-B/60/1-Wire-Pi-Plus. This board is a superb implementation (IMHO) with ESD protection for the 1-Wire bus and a built-in level translator for 3.3-5V I2C buffered output available on a separate connector. Address pins are provided, so you could install more boards to support multiple isolated 1-Wire Microlan cables. There is just one thing that is not great in the board—they could have provided one or two iButton holders instead of the prototyping area. The schematic for the interface is shown in the following diagram: 1-Wire software for the Raspberry Pi The OWFS package supports reading and writing to 1-Wire devices over USB, I2C, and serial connection interfaces. It will also support the USB-connected interface bridge, the I2C interface bridge, or both. Before we install the OWFS package, let’s ensure that I2C works correctly so that we can attach the board to the Pi's motherboard. The following are the steps for the 1-Wire software installation on the Raspberry Pi. Start the raspi-config utility from the command line: sudo raspi-config Select Advanced Options, and then I2C: Select Yes to enable I2C and then click on OK. Select Yes to load the kernel module and then click on OK. 
Lastly, select Finish and reboot the Pi for the settings to take effect. If you are using an early raspi-config (you don’t have the aforementioned options) you may have to do the following: Enter the sudo nano /etc/modprobe.d/raspi-blacklist.conf command. Delete or comment out the line: blacklist i2c-bcm2708 Save the file. Edit the modules loaded using the following command: sudo nano /etc/modules Once you have the editor open, perform the following steps: Add the line i2c-dev in its own row. Save the file. Update your system using sudo apt-get update, and sudo apt-get upgrade. Install the i2c-tools using sudo apt-get install –y i2c-tools. Lastly, power down the Pi and attach the 1-Wire board. If you power on the Pi again, you will be ready to test the board functionality and install the OWFS package: Now, let’s check that I2C is working and the 1-Wire board is connected: From the command line, type i2cdetect –l. This command will print out the detected I2C bus; this will usually be i2c-1, but on some early Pis, it may be i2c-0. From the command line, type sudo i2cdetect –y 1.This command will print out the results of an i2C bus scan. You should have a device 0x18 in the listing as shown in the following screenshot; this is the default bridge adapter address. Finally, let's install OWFS: Install OWFS using the following command: sudo apt-get install –y owfs When the install process ends, the OWFS tasks are started, and they will restart automatically each time you reboot the Raspberry Pi. When OWFS starts, to get its startup settings, it reads a configuration file—/etc/owfs.conf. We will edit this file soon to reconfigure the settings. Start Task Manager, and you will see the OWFS processes as shown in the following screenshot; there are three processes, which are owserver, owhttpd, and owftpd: The default configuration file for OWFS uses fake devices, so you don’t need any hardware attached at this stage. We can observe the method to access an owhttpd server by simply using the web browser. By default, the HTTP daemon is set to the localhost:2121 address, as shown in the following screenshot: You will notice that two fake devices are shown on the web page, and the numerical identities use the naming convention xx.yyyyyyyyyyyy. These are hex digits representing x as the device family and y as the serial number. You can also examine the details of the information for each device and see the structure. For example, the xx=10 device is a temperature sensor (DS18S20), and its internal pages show the current temperature ranges. You can find details of the various 1-Wire devices by following the link to the OWFS home page at the top of the web page. Let’s now reconfigure OWFS to address devices on the hardware bridge board we installed: Edit the OWFS configuration file using the following command: sudo nano /etc/owfs.conf Once the editor is open: Comment out the server: FAKE device line. Comment out the ftp: line. Add the line: server: i2c = /dev/i2c-1:0. Save the file. Since we only need bare minimum information in the owfs.conf file, the following minimized file content will work: ######################## SOURCES ######################## # # With this setup, any client (but owserver) uses owserver on the # local machine... # ! 
server: server = localhost:4304 # # I2C device: DS2482-100 or DS2482-800 # server: i2c = /dev/i2c-1:0 # ####################### OWHTTPD #########################   http: port = 2121   ####################### OWSERVER ########################   server: port = localhost:4304 You will find that it’s worth saving the original file from the installation by renaming it and then creating your own minimized file as shown in the preceding code Once you have the owfs.conf file updated, you can reboot the Raspberry Pi and the new settings will be used. You should have only the owserver and owhttpd processes running now, and the localhost:2121 web page should show only the devices on the single 1-Wire net that you have connected to your board. The owhttpd server can of course be addressed locally as localhost:2121 or accessed from remote computers using the IP address of the Raspberry Pi. The following screenshot shows my 1-Wire bus results using only one connected device (DS1992-family 08): At the top level, the device entries are cached. They will remain visible for at least a minute after you remove them. If you look instead at the uncached entries, they reflect instantaneous arrival and removal device events. You can use the web page to reconfigure all the timeouts and cache values, and the OWFS home page provides the details. Program access to the 1-Wire bus You can programmatically query and write to devices on the 1-Wire bus using the following two methods (there are of course other ways of doing this). Both these methods indirectly read and write using the owserver process: You can use command-line scripts (Bash) to read and write to 1-Wire devices. The following steps show you to get program access to the 1-Wire bus: From the command-line, install the shell support using the following command: sudo apt-get install –y ow-shell The command-line utilities are owget, owdir, owread, and owwrite. While in a multi-section 1-Wire Microlan, you need to specify the bus number, in our simple case with only one 1-Wire Microlan, you can type owget or owdir at the command line to read the device IDs, for example, my Microlan returned: Notice that the structure of the 1-Wire devices is identical to that exposed on the web page, so with the shell utilities, you can write Bash scripts to read and write device parameters. You can use Python to read and write to the 1-Wire devices. Install the Python OWFS module with the following command: sudo apt-get install –y python-ow Open the Python 2 IDLE environment from the Menu, and perform the following steps: In Python Shell, open a new editor window by navigating to File | New Window. In the Editor window, enter the following program: #! /usr/bin/python import ow import time ow.init('localhost:4304') while True:    mysensors = ow.Sensor("/uncached").sensorList( )    for sensor in mysensors[:]:        thisID = sensor.address[2:12]        print sensor.type, "ID = ", thisID    time.sleep(0.5) Save the program as testow.py. You can run this program from the IDLE environment, and it will print out the IDs of all the devices on the 1-Wire Microlan every half second. And if you need help on the python-pw package, then type import ow in the Shell window followed by help(ow) to print the help file. Summary We’ve covered just enough here to get you started with 1-Wire devices for the Raspberry Pi. You can read up on the types of devices available and their potential uses at the web links provided in this article. 
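Building on the same python-ow API, the following sketch goes one step further and reads a value from a specific device type; the assumption here is that a temperature-family device (such as a DS18S20, family code 10) is present on the Microlan and that python-ow exposes its temperature entry as an attribute, as it does for the other OWFS entries.

#! /usr/bin/python
# Minimal sketch: list 1-Wire devices and read a temperature (assumes a family-10 device is attached)
import ow

ow.init('localhost:4304')            # talk to owserver, as configured in owfs.conf

for sensor in ow.Sensor('/').sensorList():
    print sensor.type, 'ID =', sensor.address[2:12]
    # OWFS exposes each device entry (family, id, temperature, ...) as an attribute
    if sensor.family == '10':        # DS18S20 temperature sensor
        print 'Temperature:', sensor.temperature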
While the iButton products are obviously great for identity-based projects, such as door openers and access control, there are 1-Wire devices that provide digital I/O and even analog-to-digital conversion. These can be very useful when designing remote acquisition and control interfaces for your Raspberry Pi. Resources for Article: Further resources on this subject: Develop a Digital Clock [article] Raspberry Pi Gaming Operating Systems [article] Penetration Testing [article]


Administration of Configuration Manager through PowerShell

Packt
28 Apr 2015
15 min read
When we are required to automate a few activities in Configuration Manager, we need to use any of the scripting languages, such as VB or PowerShell. PowerShell has its own advantages over other scripting languages. In this article by Guruprasad HP, coauthor of the book Microsoft System Center Powershell Essentials we will cover: The introduction of Configuration Manager through PowerShell Hierarchy details Asset and compliance (For more resources related to this topic, see here.) Introducing Configuration Manager through PowerShell The main intention of this article is to give you a brief idea of how to use PowerShell with Configuration Manager and not to make you an expert with all the cmdlets. With the goal of introducing Configuration Manager admins to PowerShell, this article mainly covers how to use PowerShell cmdlets to get the information about Configuration Manager configurations and how to create our own custom configurations using PowerShell. Just like you cannot get complete information of any person during the first meet, you cannot expect everything in this article. This article starts with an assumption that we have a well-built Configuration Manager environment. To start with, let's first understand how to fetch details from Configuration Manager. After that, we will create our own custom configurations. To stick on to convention, we will first learn how to fetch configuration details from Configuration Manager followed by a demonstration of how to create our own custom configurations using PowerShell. PowerShell provides around 560 different cmdlets to administrate and manage Configuration Manager. You can verify the cmdlets counts for Configuration Manager by using the count operation with the Get-Command cmdlet with ConfigurationManager as the module parameter: (Get-Command –Module ConfigurationManager).Count It is always a good idea to export all the cmdlets to an external file that you can use as a reference at any point of time. You can export the cmdlets by using Out-File with the Get-Command cmdlet: Get-Command –Module ConfigurationManager | Out-File "D:SCCMPowerShellCmdlets.txt" Once we have the Configuration Manager infrastructure ready, we can start validating the configurations through the PowerShell console. Here are the quick cmdlets that help to verify the Configuration Manager configurations followed by cmdlets to create custom configurations. Since PowerShell follows a verb-noun sequence, we can easily identify the cmdlets that help to check configurations as they start with Get. Similarly, cmdlets to create new configurations will typically start with New, Start, or set. We can always refer to the Microsoft TechNet page at http://technet.microsoft.com/en-us/library/jj821831(v=sc.20).aspx for the latest list of all the available cmdlets. Before proceeding further, we have to set the execution location from the current drive to System Center Configuration Manager (SCCM) to avail the benefit of using PowerShell for the administration of SCCM. To connect, we can use the Set-Location cmdlet with the site code as the parameter or we can open PowerShell from the Configuration Manager console. Assuming we have P01 as the site code, we can connect to Configuration Manager using PowerShell by executing the following command: Set-Location P01: Hierarchy details This section will concentrate on how to get the Configuration Manager site details and how to craft our own custom hierarchy configurations using PowerShell cmdlets. 
This involves knowing and configuring the site details, user and device discovery, boundary configurations, and installation of various site roles. Site details First and foremost, get to know the Configuration Manager architecture details. You can use the Get-CMSite cmdlet to know the details of the Configuration Manager site. This cmdlet without any parameters will give the details of the site installed locally. To get the details of the remote site, you are required to give the site name or the site code of the remote site: Get-CMSite Get-CMSIte –SiteName "India Site" Get-CMSite –SiteCode P01 Discovery details It is important to get the discovery details before proceeding, as it decides the computer and the users that Configuration Manager will manage. PowerShell provides the Get-CMDiscoveryMethod cmdlet to get complete details of the discovery information. You can pass the discovery method as a parameter to the cmdlet to get the complete details of that discovery method. Additionally, you can also specify the site code as a parameter to the cmdlet to constrain the output of that particular site. In the following example, we are trying to get the information of HeartBeatDiscovery and we are restricting our search to the P01 site: Get-CMDiscoveryMethod –Name HeartBeatDiscovery –SiteCode P01 We can also pass other discovery methods as parameters to this cmdlet. Instead of HeartBeatDiscovery, you can use any of the following methods: ActiveDirectoryForestDiscovery ActiveDirectoryGroupDiscovery ActiveDirectorySystemDiscovery ActiveDirectoryUserDiscovery NetworkDiscovery Boundary details One of the first and most important and things to be configured in Configuration Manager are the boundary settings. Once the discovery is enabled, we are required to create a boundary and link it with the boundary group to manage clients through Configuration Manager. PowerShell provides inbuilt cmdlets to get information of the configured boundaries and boundary groups. We also have the cmdlets to create and configure new boundaries. You can use Get-CMBoundary to fetch the details of boundaries configured in Configuration Manager. PowerShell will also leverage you to use the Format-List attribute with the * (asterisk) wild character as the parameter value to get the detailed information of each boundary. As default, this cmdlet will return and give you the available boundaries configured in Configuration Manager. This cmdlet will also accept parameters, such as the boundary name, which will give the information of a specified boundary. You can even specify the boundary group name as the parameter, which will return the boundary specified by the associated boundary group. You can also specify the boundary ID as a parameter for this cmdlet: Get-CMBoundary –Name "Test Boundary" Get-CMBoundary –BoundaryGroup "Test Boundary Group" Get-CMBoundary –ID 12587459 Similarly, we can use Get-CMBoundaryGroup to view the details of all the boundary groups created and configured on the console. Using the cmdlet with no parameters will result in the listing of all the boundary groups available in the console. 
You can use the boundary group name or ID as a parameter to get the information of the interested boundary group: Get-CMBoundaryGroup Get-CMBoundaryGroup -Name "Test Boundary Group" Get-CMBoundaryGroup –ID "1259843" You can also get the information of multiple boundary groups by supplying the list as a parameter to the cmdlet: Get-CMBoundaryGroup –Name "TestBG1", "TestBG2", "TestBG3", "TestBG4" Until now, we saw how to read boundary and boundary-related details using PowerShell cmdlets. Now, let's see how to create our custom boundary in Configuration Manager using PowerShell cmdlets. Just like you create boundaries in console, PowerShell provides the New-CMBoundary cmdlet to create boundaries using PowerShell. At the minimum, we are required to provide the boundary name, boundary type, and value as a parameter to the cmdlet. We can create boundaries based on different criteria, such as the Active Directory site, IP subnet, IP range, and IPv6 prefix. PowerShell allows us to specify the criteria based on which we want to create a boundary in the boundary type parameter. The following examples show you all four ways to create boundaries. The boundary type to be used is decided based on the architecture and the requirement: New-CMBoundary –DisplayName "IPRange Boundary" –BoundaryType IPRange –Value "192.168.50.1-192.168.50.99" New-CMBoundary –DisplayName "ADSite Boundary" –BoundaryType ADSite –Value "Default-First-Site-Name" New-CMBoundary –DisplayName "IPSubnet Boundary" –BoundaryType IPSubnet –Value "192.168.50.0/24" New-CMBoundary –DisplayName "IPV6 Boundary" –BoundaryType IPv6Prefix –Value "FE80::/64" With the introduction of the boundary group concept with Configuration Manager 2012, it is expected that every boundary created should be made a part of a boundary group before it starts managing the clients. So, we first need to create a boundary group (if not present) and then add the boundary to the boundary group. We can use the New-CMBoundaryGroup cmdlet to create a new Configuration Manager boundary group. At the minimum, we are required to pass the boundary group name as a parameter, but also it is recommended that you pass the boundary description as the parameter: New-CMBoundaryGroup –Name "Test Boundary Group" –Description "Test boundary group created from PowerShell for testing" Upon successful execution, the command will create a boundary group named Test Boundary Group. We will now add our newly created boundary to this newly created boundary group. PowerShell provides an Add-CMBoundaryToGroup cmdlet to add the existing boundary to the existing boundary group: Add-CMBoundaryToGroup –BoundaryName "IPRange Boundary" –BoundaryGroupName "Test Boundary Group" This will add the IPRange Boundary boundary to the Test Boundary Group boundary group. You can use looping to add multiple boundaries to the boundary group in a real-time scenario. We can remove a boundary from Configuration Manger using the Remove-CMBoundary cmdlet. We can just specify the name or ID of the boundary to be deleted as a parameter to the cmdlet: Remove-CMBoundary –Name "Test Boundary" -force Distribution point details The details of the distribution points are one of the most common requirements, and it is essential that the Configuration Manager admin knows the distribution points configured in the environment to plan and execute any deployments. We can do this either using the traditional way of logging in to the console or by using the PowerShell approach. 
PowerShell provides the Get-CMDistributionPoint cmdlet to get the list of configured distribution points. Distribution points in Configuration Manager are used to store files, such as software packages, update packages, operating system deployment related packages, and so on. If no parameters are specified, this cmdlet will list all the available distribution points. You can pass the site system server name and site code as parameters to restrict the results to the specified site:

Get-CMDistributionPoint -SiteSystemServerName "SCCMP01.Guru.Com" -SiteCode "P01"

Here is a quick look at how to create and manage distribution points in Configuration Manager through PowerShell. We can create and manage the distribution point site system role through PowerShell just as we do using the console. To do this, we first need to create a site system server on the site (if one is not available), which can later be configured as the site distribution point. We can do this using the New-CMSiteSystemServer cmdlet:

New-CMSiteSystemServer -SiteCode "P01" -UseSiteServerAccount -ServerName "dp.guru.com"

This will use the site server account for the creation of the new site system. Next, we will configure this site system as a distribution point. We can do this by using the Add-CMDistributionPoint cmdlet:

Add-CMDistributionPoint -SiteCode "P01" -SiteSystemServerName "dp.guru.com" -MinimumFreeSpaceMB 500 -CertificateExpirationTimeUtc "2020/12/30"

This will create dp.guru.com as a distribution point and also reserve 500 MB of space. We can also enable IIS, add PXE support, and configure the distribution point to respond to incoming PXE requests; it just takes a few more parameters on the same cmdlet:

Add-CMDistributionPoint -SiteCode "P01" -SiteSystemServerName "dp.guru.com" -MinimumFreeSpaceMB 500 -InstallInternetServer -EnablePXESupport -AllowRespondIncomingPXERequest -CertificateExpirationTimeUtc "2020/12/30"

We can create a distribution point group (if one is not already present) for the effective management of the distribution points available in the environment, using the New-CMDistributionPointGroup cmdlet with, at a minimum, the group name as a parameter:

New-CMDistributionPointGroup -Name "Test Distribution Group"

With the distribution point group created, we can add the newly created distribution point to it. You can use the Add-CMDistributionPointToGroup cmdlet with, at a minimum, the distribution point name and distribution point group name as parameters:

Add-CMDistributionPointToGroup -DistributionPointName "dp.guru.com" -DistributionPointGroupName "Test Distribution Group"

We can also add any device collection to the newly created distribution point group, so that whenever we deploy items (such as packages, programs, and so on) to the device collection, the content is automatically distributed to the distribution point group:

Add-CMDeviceCollectionToDistributionPointGroup -DeviceCollectionName "TestCollection1" -DistributionPointGroupName "Test Distribution Group"

Management point details

The management point provides policies and service location information to the clients. It also receives data from clients, processes it, and stores it in the database. PowerShell provides the Get-CMManagementPoint cmdlet to get the details of the management point.
Optionally, you can provide the site system server name and the site code as parameters to the cmdlet. The following example will fetch the management points associated with the SCCMP01.Guru.Com site system that has the site code P01:

Get-CMManagementPoint -SiteSystemServerName "SCCMP01.Guru.Com" -SiteCode "P01"

When you install the CAS or the primary server using the default settings, the distribution point and management point roles are installed automatically. However, if you want to add an additional management point, you can add the role from the server or through the PowerShell console. PowerShell provides the Add-CMManagementPoint cmdlet to add a new management point to the site. At the minimum, we are required to provide the site server name that we designated as the management point, the database name, the site code, the SQL server name, and the SQL instance name. The following example depicts how to create a management point through PowerShell:

Add-CMManagementPoint -SiteSystemServerName "MP1.Guru.Com" -SiteCode "P01" -SQLServerFqdn "SQL.Guru.Com" -SQLServerInstanceName "SCCMP01" -DataBaseName "SCCM" -ClientConnectionType InternetAndIntranet -AllowDevice -GenerateAlert -EnableSsl

We can use the Set-CMManagementPoint cmdlet to change the settings of any management point that has already been created. The following example changes the GenerateAlert property to false:

Set-CMManagementPoint -SiteSystemServerName "MP1.Guru.Com" -SiteCode "P01" -GenerateAlert:$False

Other site role details

Like distribution points and management points, we can get detailed information about all the other site roles (if they are installed and configured in the Configuration Manager environment). The following command snippet lists the different cmdlets available and their usage to get the details of the different roles:

Get-CMApplicationCatalogWebServicePoint -SiteSystemServerName "SCCMP01.guru.com" -SiteCode P01
Get-CMApplicationCatalogWebsitePoint -SiteSystemServerName "SCCMP01.guru.com" -SiteCode P01
Get-CMEnrollmentPoint -SiteSystemServerName "SCCMP01.guru.com" -SiteCode P01
Get-CMEnrollmentProxyPoint -SiteSystemServerName "SCCMP01.guru.com" -SiteCode P01
Get-CMFallbackStatusPoint -SiteSystemServerName "SCCMP01.guru.com" -SiteCode P01
Get-CMSystemHealthValidatorPoint -SiteSystemServerName "SCCMP01.guru.com" -SiteCode P01

Asset and compliance

This section mainly concentrates on gathering information: how to get the details of devices, users, compliance settings, alerts, and so on. It also demonstrates how to create custom collections, add special configurations to collections, create custom client settings, install client agents, approve agents, and so on.

Collection details

Getting the collection details from PowerShell is as easy as using the console. You can use the Get-CMDeviceCollection cmdlet to get the details of the available collections. We can use Format-Table with the AutoSize parameter to get a neat view:

Get-CMDeviceCollection | Format-Table -AutoSize

We can also use the grid view to get the details popped out as a grid.
This will give us a nice grid that we can scroll and sort easily:

Get-CMDeviceCollection | Out-GridView

We can use Name or CollectionID as a parameter to get the information of a particular collection:

Get-CMDeviceCollection -Name "All Windows-7 Devices"
Get-CMDeviceCollection -CollectionId "2225000D"

You can also specify the distribution point group name as a parameter to get the list of collections that are associated with the specified distribution point group:

Get-CMDeviceCollection -DistributionPointGroupName "Test Distribution Point Group"

You can also replace the DistributionPointGroupName parameter with DistributionPointGroupID to pass the distribution point group ID as a parameter to the cmdlet. Similarly, you can use the Get-CMUserCollection cmdlet to get the details of the available user collections in SCCM:

Get-CMUserCollection | Format-Table -AutoSize

It is also possible to read the direct members of any existing collection. PowerShell provides cmdlets to read the direct membership of both device and user collections. We can use Get-CMDeviceCollectionDirectMembershipRule and Get-CMUserCollectionDirectMembershipRule to read the direct members of a device and user collection respectively:

Get-CMDeviceCollectionDirectMembershipRule -CollectionName "Test Device Collection" -ResourceID "45647936"
Get-CMUserCollectionDirectMembershipRule -CollectionName "Test User Collection" -ResourceID 99845361

Similarly, PowerShell also lets us read the query membership rules, using the Get-CMDeviceCollectionQueryMembershipRule and Get-CMUserCollectionQueryMembershipRule cmdlets for device and user collections respectively. The collection name and rule name need to be specified as parameters to the cmdlet. The following example assumes that there is already a device collection named All Windows-7 Machines with a query rule named Windows 7 Machines, and an All Domain Users user collection with a query rule named Domain Users:

Get-CMDeviceCollectionQueryMembershipRule -CollectionName "All Windows-7 Machines" -RuleName "Windows 7 Machines"
Get-CMUserCollectionQueryMembershipRule -CollectionName "All Domain Users" -RuleName "Domain Users"

Reading Configuration Manager status messages

We can get status messages from one or more Configuration Manager site system components. A status message includes success, failure, and warning information from the site system components. We can use the Get-CMSiteStatusMessage cmdlet to get the status messages. At the minimum, we are required to provide the start time from which to display messages:

Get-CMSiteStatusMessage -ViewingPeriod "2015/02/20 10:12:05"

We can also include a few optional parameters that help us filter the output according to our requirement. Most importantly, we can use the computer name, message severity, and site code as additional parameters. For Severity, we can use the All, Error, Information, or Warning values:

Get-CMSiteStatusMessage -ViewingPeriod "2014/08/17 10:12:05" -ComputerName XP1 -Severity All -SiteCode P01

So, now we are clear on how to extract collection information from Configuration Manager using PowerShell. Let's now start creating our own collection using PowerShell.

Summary

In this article, you learned how to use PowerShell to get the basic details of the Configuration Manager environment.

Resources for Article:

Further resources on this subject:

Managing Microsoft Cloud [article]
So, what is PowerShell 3.0 WMI? [article]
Unleashing Your Development Skills with PowerShell [article]

Creating Random Insults

Packt
28 Apr 2015
21 min read
In this article by Daniel Bates, the author of Raspberry Pi for Kids - Second edition, we're going to learn and use the Python programming language to generate random funny phrases such as, Alice has a smelly foot! (For more resources related to this topic, see here.)

Python

In this article, we are going to use the Python programming language. Almost all programming languages are capable of doing the same things, but they are usually designed with different specializations. Some languages are designed to perform one job particularly well, some are designed to run code as fast as possible, and some are designed to be easy to learn. Scratch was designed to develop animations and games, and to be easy to read and learn, but it can be difficult to manage large programs. Python is designed to be a good general-purpose language. It is easy to read and can run code much faster than Scratch.

Python is a text-based language. Using it, we type the code rather than arrange building blocks. This makes it easier to go back and change the pieces of code that we have already written, and it allows us to write complex pieces of code more quickly. It does mean that we need to type our programs accurately, though; there are no limits to what we can type, but not all text will form a valid program. Even a simple spelling mistake can result in errors.

Lots of tutorials and information about the available features are provided online at http://docs.python.org/2/. Learn Python the Hard Way, by Zed A. Shaw, is another good learning resource, which is available at http://learnpythonthehardway.org.

As an example, let's take a look at some Scratch and Python code, respectively, both of which do the same thing. Here's the Scratch code:

The Python code that does the same job looks like:

def count(maximum):
    value = 0
    while value < maximum:
        value = value + 1
        print "value =", value

count(5)

Even if you've never seen any Python code before, you might be able to read it and tell what it does. Both the Scratch and Python code count up to a maximum value, and display the value each time. The biggest difference is in the first line. Instead of waiting for a message, we define (or create) a function, and instead of sending a message, we call the function (more on how to run Python code, shortly). Notice that we include maximum as an argument to the count function. This tells Python the particular value we would like to use as the maximum, so we can use the same code with different maximum values. The other main differences are that we have while instead of forever if, and we have print instead of say. These are just different ways of writing the same thing. Also, instead of having a block of code wrap around other blocks, we simply put an extra four spaces at the beginning of a line to show which code is contained within a particular block.

Python programming

To run a piece of Python code, open Python 2 from the Programming menu on the Raspberry Pi desktop and perform the following steps:

Type the previous code into the window, and you should notice that it can recognize how many spaces to start a line with. When you have finished the function block, press Enter a couple of times, until you see >>>. This shows that Python recognizes that your block of code has been completed, and that it is ready to receive a new command.

Now, you can run your code by typing in count(5) and pressing Enter. You can change 5 to any number you like and press Enter again to count to a different number.
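As a quick sanity check, here is roughly what the Shell session should look like if you call the function with a maximum of 3; the >>> prompt is printed by Python, and only the three value lines come from our code:

>>> count(3)
value = 1
value = 2
value = 3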
We're now ready to create our program! The Raspberry Pi also supports Python 3, which is very similar but incompatible with Python 2. You can check out the differences between Python 2 and Python 3 at http://python-future.org/compatible_idioms.html. The program we're going to use to generate phrases As mentioned earlier, our program is going to generate random, possibly funny, phrases for us. To do this, we're going to give each phrase a common structure, and randomize the word that appears in each position. Each phrase will look like: <name> has a <adjective> <noun> Where <name> is replaced by a person's name, <adjective> is replaced by a descriptive word, and <noun> is replaced by the name of an object. This program is going to be a little larger than our previous code example, so we're going to want to save it and modify it easily. Navigate to File | New Window in Python 2. A second window will appear which starts off completely blank. We will write our code in this window, and when we run it, the results will appear in the first window. For the rest of the article, I will call the first window the Shell, and the new window the Code Editor. Remember to save your code regularly! Lists We're going to use a few different lists in our program. Lists are an important part of Python, and allow us to group together similar things. In our program, we want to have separate lists for all the possible names, adjectives, and nouns that can be used in our sentences. We can create a list in this manner: names = ["Alice", "Bob", "Carol"] Here, we have created a variable called names, which is a list. The list holds three items or elements: Alice, Bob, and Carol. We know that it is a list because the elements are surrounded by square brackets, and are separated by commas. The names need to be in quote marks to show that they are text, and not the names of variables elsewhere in the program. To access the elements in a list, we use the number which matches its position, but curiously, we start counting from zero. This is because if we know where the start of the list is stored, we know that its first element is stored at position start + 0, the second element is at position start + 1, and so on. So, Alice is at position 0 in the list, Bob is at position 1, and Carol is at position 2. We use the following code to display the first element (Alice) on the screen: print names[0] We've seen print before: it displays text on the screen. The rest of the code is the name of our list (names), and the position of the element in the list that we want surrounded by square brackets. Type these two lines of code into the Code Editor, and then navigate to Run | Run Module (or press F5). You should see Alice appear in the Shell. Feel free to play around with the names in the list or the position that is being accessed until you are comfortable with how lists work. You will need to rerun the code after each change. What happens if you choose a position that doesn't match any element in the list, such as 10? Adding randomness So far, we have complete control over which name is displayed. Let's now work on displaying a random name each time we run the program. Update your code in the Code Editor so it looks like: import random names = ["Alice", "Bob", "Carol"] position = random.randrange(3) print names[position] In the first line of the code, we import the random module. Python comes with a huge amount of code that other people have written for us, separated into different modules. 
Some of this code is simple, but makes life more convenient for us, and some of it is complex, allowing us to reuse other people's solutions for the challenges we face and concentrate on exactly what we want to do. In this case, we are making use of a collection of functions that deal with random behavior. We must import a module before we are able to access its contents. Information on the available modules available can be found online at www.python.org/doc/. After we've created the list of names, we then compute a random position in the list to access. The name random.randrange tells us that we are using a function called randrange, which can be found inside the random module that we imported earlier. The randrange function gives us a random whole number less than the number we provide. In this case, we provide 3 because the list has three elements and we store the random position in a new variable called position. Finally, instead of accessing a fixed element in the names list, we access the element that position refers to. If you run this code a few times, you should notice that different names are chosen randomly. Now, what happens if we want to add a fourth name, Dave, to our list? We need to update the list itself, but we also need to update the value we provide to randrange to let it know that it can give us larger numbers. Making multiple changes just to add one name can cause problems—if the program is much larger, we may forget which parts of the code need to be updated. Luckily, Python has a nice feature which allows us to make this simpler. Instead of a fixed number (such as 3), we can ask Python for the length of a list, and provide that to the randrange function. Then, whenever we update the list, Python knows exactly how long it is, and can generate suitable random numbers. Here is the code, which is updated to make it easier to change the length of the list: import random names = ["Alice", "Bob", "Carol"] length = len(names) position = random.randrange(length) print names[position] Here, we've created a new variable called length to hold the length of the list. We then use the len function (which is short for length) to compute the length of our list, and we give length to the randrange function. If you run this code, you should see that it works exactly as it did before, and it easily copes if you add or remove elements from the list. It turns out that this is such a common thing to do, that the writers of the random module have provided a function which does the same job. We can use this to simplify our code: import random names = ["Alice", "Bob", "Carol", "Dave"] print random.choice(names) As you can see, we no longer need to compute the length of the list or a random position in it: random.choice does all of this for us, and simply gives us a random element of any list we provide it with. As we will see in the next section, this is useful since we can reuse random.choice for all the different lists we want to include in our program. If you run this program, you will see that it works the same as it did before, despite being much shorter. Creating phrases Now that we can get a random element from a list, we've crossed the halfway mark to generating random sentences! Create two more lists in your program, one called adjectives, and the other called nouns. Put as many descriptive words as you like into the first one, and a selection of objects into the second. 
Here are the three lists I now have in my program: names = ["Alice", "Bob", "Carol", "Dave"] adjectives = ["fast", "slow", "pretty", "smelly"] nouns = ["dog", "car", "face", "foot"] Also, instead of printing our random elements immediately, let's store them in variables so that we can put them all together at the end. Remove the existing line of code with print in it, and add the following three lines after the lists have been created: name = random.choice(names) adjective = random.choice(adjectives) noun = random.choice(nouns) Now, we just need to put everything together to create a sentence. Add this line of code right at the end of the program: print name, "has a", adjective, noun Here, we've used commas to separate all of the things we want to display. The name, adjective, and noun are our variables holding the random elements of each of the lists, and "has a" is some extra text that completes the sentence. print will automatically put a space between each thing it displays (and start a new line at the end). If you ever want to prevent Python from adding a space between two items, separate them with + rather than a comma. That's it! If you run the program, you should see random phrases being displayed each time, such as Alice has a smelly foot or Carol has a fast car. Making mischief So, we have random phrases being displayed, but what if we now want to make them less random? What if you want to show your program to a friend, but make sure that it only ever says nice things about you, or bad things about them? In this section, we'll extend the program to do just that. Dictionaries The first thing we're going to do is replace one of our lists with a dictionary. A dictionary in Python uses one piece of information (a number, some text, or almost anything else) to search for another. This is a lot like the dictionaries you might be used to, where you use a word to search for its meaning. In Python, we say that we use a key to look for a value. We're going to turn our adjectives list into a dictionary. The keys will be the existing descriptive words, and the values will be tags that tell us what sort of descriptive words they are. Each adjective will be "good" or "bad". My adjectives list becomes the following dictionary. Make similar changes to yours. adjectives = {"fast":"good", "slow":"bad", "pretty":"good", "smelly":"bad"} As you can see, the square brackets from the list become curly braces when you create a dictionary. The elements are still separated by commas, but now each element is a key-value pair with the adjective first, then a colon, and then the type of adjective it is. To access a value in a dictionary, we no longer use the number which matches its position. Instead, we use the key with which it is paired. So, as an example, the following code will display "good" because "pretty" is paired with "good" in the adjectives dictionary: print adjectives["pretty"] If you try to run your program now, you'll get an error which mentions random.choice(adjectives). This is because random.choice expects to be given a list, but is now being given a dictionary. To get the code working as it was before, replace that line of code with this: adjective = random.choice(adjectives.keys()) The addition of .keys() means that we only look at the keys in the dictionary—these are the adjectives we were using before, so the code should work as it did previously. Test it out now to make sure. Loops You may remember the forever and repeat code blocks in Scratch. 
In this section, we're going to use Python's versions of these to repeatedly choose random items from our dictionary until we find one which is tagged as "good". A loop is the general programming term for this repetition—if you walk around a loop, you will repeat the same path over and over again, and it is the same with loops in programming languages. Here is some code, which finds an adjective and is tagged as "good". Replace your existing adjective = line of code with these lines: while True:    adjective = random.choice(adjectives.keys())    if adjectives[adjective] == "good":        break The first line creates our loop. It contains the while key word, and a test to see whether the code should be executed inside the loop. In this case, we make the test True, so it always passes, and we always execute the code inside. We end the line with a colon to show that this is the beginning of a block of code. While in Scratch we could drag code blocks inside of the forever or repeat blocks, in Python we need to show which code is inside the block in a different way. First, we put a colon at the end of the line, and then we indent any code which we want to repeat by four spaces. The second line is the code we had before: we choose a random adjective from our dictionary. The third line uses adjectives[adjective] to look into the (adjectives) dictionary for the tag of our chosen adjective. We compare the tag with "good" using the double = sign (a double = is needed to make the comparison different from the single = case, which stores a value in a variable). Finally, if the tag matches "good",we enter another block of code: we put a colon at the end of the line, and the following code is indented by another four spaces. This behaves the same way as the Scratch if block. The fourth line contains a single word: break. This is used to escape from loops, which is what we want to do now that we have found a "good" adjective. If you run your code a few times now, you should see that none of the bad adjectives ever appear. Conditionals In the preceding section, we saw a simple use of the if statement to control when some code was executed. Now, we're going to do something a little more complex. Let's say we want to give Alice a good adjective, but give Bob a bad adjective. For everyone else, we don't mind if their adjective is good or bad. The code we already have to choose an adjective is perfect for Alice: we always want a good adjective. We just need to make sure that it only runs if our random phrase generator has chosen Alice as its random person. To do this, we need to put all the code for choosing an adjective within another if statement, as shown here: if name == "Alice":    while True:        adjective = random.choice(adjectives.keys())        if adjectives[adjective] == "good":            break Remember to indent everything inside the if statement by an extra four spaces. Next, we want a very similar piece of code for Bob, but also want to make sure that the adjective is bad: elif name == "Bob":    while True:        adjective = random.choice(adjectives.keys())        if adjectives[adjective] == "bad":           break The only differences between this and Alice's code is that the name has changed to "Bob", the target tag has changed to "bad", and if has changed to elif. The word elif in the code is short for else if. We use this version because we only want to do this test if the first test (with Alice) fails. 
This makes a lot of sense if we look at the code as a whole: if our random person is Alice, do something, else if our random person is Bob, do something else. Finally, we want some code that can deal with everyone else. This time, we don't want to perform another test, so we don't need an if statement: we can just use else: else:    adjective = random.choice(adjectives.keys()) With this, our program does everything we wanted it to do. It generates random phrases, and we can even customize what sort of phrase each person gets. You can add as many extra elif blocks to your program as you like, so as to customize it for different people. Functions In this section, we're not going to change the behavior of our program at all; we're just going to tidy it up a bit. You may have noticed that when customizing the types of adjectives for different people, you created multiple sections of code, which were almost identical. This isn't a very good way of programming because if we ever want to change the way we choose adjectives, we will have to do it multiple times, and this makes it much easier to make mistakes or forget to make a change somewhere. What we want is a single piece of code, which does the job we want it to do, and then be able to use it multiple times. We call this piece of code a function. We saw an example of a function being created in the comparison with Scratch at the beginning of this article, and we've used a few functions from the random module already. A function can take some inputs (called arguments) and does some computation with them to produce a result, which it returns. Here is a function which chooses an adjective for us with a given tag: def chooseAdjective(tag):    while True:        item = random.choice(adjectives.keys())        if adjectives[item] == tag:            break    return item In the first line, we use def to say that we are defining a new function. We also give the function's name and the names of its arguments in brackets. We separate the arguments by commas if there is more than one of them. At the end of the line, we have a colon to show that we are entering a new code block, and the rest of the code in the function is indented by four spaces. The next four lines should look very familiar to you—they are almost identical to the code we had before. The only difference is that instead of comparing with "good" or "bad", we compare with the tag argument. When we use this function, we will set tag to an appropriate value. The final line returns the suitable adjective we've found. Pay attention to its indentation. The line of code is inside the function, but not inside the while loop (we don't want to return every item we check), so it is only indented by four spaces in total. Type the code for this function anywhere above the existing code, which chooses the adjective; the function needs to exist in the code prior to the place where we use it. In particular, in Python, we tend to place our code in the following order: Imports Functions Variables Rest of the code This allows us to use our functions when creating the variables. So, place your function just after the import statement, but before the lists. We can now use this function instead of the several lines of code that we were using before. The code I'm going to use to choose the adjective now becomes: if name == "Alice":    adjective = chooseAdjective("good") elif name == "Bob":    adjective = chooseAdjective("bad") else:    adjective = random.choice(adjectives.keys()) This looks much neater! 
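As an aside, the same filtering job can be done without the break-based loop by first building a list of matching adjectives and then picking from it. This is just an alternative sketch, not the version used in the rest of the article, and it assumes the adjectives dictionary and the random import are already in place:

def chooseAdjective(tag):
    # Keep only the adjectives whose value matches the requested tag
    matching = [word for word in adjectives.keys() if adjectives[word] == tag]
    # Pick one of the matching adjectives at random
    return random.choice(matching)

Both versions behave the same way; the list comprehension simply trades the while loop and break for a single filtering step.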
Now, if we ever want to change how an adjective is chosen, we just need to change the chooseAdjective function, and the change will be seen in every part of the code where the function is used.

Complete code listing

Here is the final code you should have when you have completed this article. You can use this code listing to check that you have everything in the right order, or look for other problems in your code. Of course, you are free to change the contents of the lists and dictionaries to whatever you like; this is only an example:

import random

def chooseAdjective(tag):
    while True:
        item = random.choice(adjectives.keys())
        if adjectives[item] == tag:
            break
    return item

names = ["Alice", "Bob", "Carol", "Dave"]
adjectives = {"fast":"good", "slow":"bad", "pretty":"good", "smelly":"bad"}
nouns = ["dog", "car", "face", "foot"]

name = random.choice(names)
#adjective = random.choice(adjectives)
noun = random.choice(nouns)

if name == "Alice":
    adjective = chooseAdjective("good")
elif name == "Bob":
    adjective = chooseAdjective("bad")
else:
    adjective = random.choice(adjectives.keys())

print name, "has a", adjective, noun

Summary

In this article, we learned about the Python programming language and how it can be used to create random phrases. We saw that it shared lots of features with Scratch, but is simply presented differently.

Resources for Article:

Further resources on this subject:

Develop a Digital Clock [article]
GPS-enabled Time-lapse Recorder [article]
The Raspberry Pi and Raspbian [article]

Writing Simple Behaviors

Packt
27 Apr 2015
18 min read
In this article by Richard Sneyd, the author of Stencyl Essentials, we will learn about Stencyl's signature visual programming interface to create logic and interaction in our game. We create this logic using a WYSIWYG (What You See Is What You Get) block snapping interface. By the end of this article, you will have the Player Character whizzing down the screen, in pursuit of a zigzagging air balloon! Some of the things we will learn to do in this article are as follows: Create Actor Behaviors, and attach them to Actor Types. Add Events to our Behaviors. Use If blocks to create branching, conditional logic to handle various states within our game. Accept and react to input from the player. Apply physical forces to Actors in real-time. One of the great things about this visual approach to programming is that it largely removes the unpleasantness of dealing with syntax (the rules of the programming language), and the inevitable errors that come with it, when we're creating logic for our game. That frees us to focus on the things that matter most in our games: smooth, well wrought game mechanics and enjoyable, well crafted game-play. (For more resources related to this topic, see here.) The Player Handler The first behavior we are going to create is the Player Handler. This behavior will be attached to the Player Character (PC), which exists in the form of the Cowboy Actor Type. This behavior will be used to handle much of the game logic, and will process the lion's share of player input. Creating a new Actor Behavior It's time to create our very first behavior! Go to the Dashboard, under the LOGIC heading, select Actor Behaviors: Click on This game contains no Logic. Click here to create one. to add your first behavior. You should see the Create New... window appear: Enter the Name Player Handler, as shown in the previous screenshot, then click Create. You will be taken to the Behavior Designer: Let's take a moment to examine the various areas within the Behavior Designer. From left to right, as demonstrated in the previous screenshot, we have: The Events Pane: Here we can add, remove, and move between events in our Behavior. The Canvas: To the center of the screen, the Canvas is where we drag blocks around to click our game logic together. The blocks Palette: This is where we can find any and all of the various logic blocks that Stencyl has on offer. Simply browse to your category of choice, then click and drag the block onto the Canvas to configure it. Follow these steps: Click the Add Event button, which can be found at the very top of the Events Pane. In the menu that ensues, browse to Basics and click When Updating: You will notice that we now have an Event in our Events Pane, called Updated, along with a block called always on our Canvas. In Stencyl events lingo, always is synonymous with When Updating: Since this is the only event in our Behavior at this time, it will be selected by default. The always block (yellow with a red flag) is where we put the game logic that needs to be checked on a constant basis, for every update of the game loop (this will be commensurate with the framerate at runtime, around 60fps, depending on the game performance and system specs). Before we proceed with the creation of our conditional logic, we must first create a few attributes. If you have a programming background, it is easiest to understand attributes as being synonymous to local variables. 
Just like variables, they have a set data type, and you can retrieve or change the value of an attribute in real-time. Creating Attributes Switch to the Attributes context in the blocks palette: There are currently no attributes associated with this behavior. Let's add some, as we'll need them to store important information of various types which we'll be using later on to craft the game mechanics. Click on the Create an Attribute button: In the Create an Attribute… window that appears, enter the Name Target Actor, set Type to Actor, check Hidden?, and press OK: Congratulations! If you look at the lower half of the blocks palette, you will see that you have added your first attribute, Target Actor, of type Actors, and it is now available for use in our code. Next, let's add five Boolean attributes. A Boolean is a special kind of attribute that can be set to either true, or false. Those are the only two values it can accept. First, let's create the Can Be Hurt Boolean: Click Create an Attribute…. Enter the Name Can Be Hurt. Change the Type to Boolean. Check Hidden?. Press OK to commit and add the attribute to the behavior. Repeat steps 1 through 5 for the remaining four Boolean attributes to be added, each time substituting the appropriate name:     Can Switch Anim     Draw Lasso     Lasso Back     Mouse Over If you have done this correctly, you should now see six attributes in your attributes list - one under Actor, and five under Boolean - as shown in the following screenshot: Now let's follow the same process to further create seven attributes; only this time, we'll set the Type for all of them to Number. The Name for each one will be: Health (Set to Hidden?). Impact Force (Set to Hidden?). Lasso Distance (Set to Hidden?). Max Health (Don't set to Hidden?). Turn Force (Don't set to Hidden?). X Point (Set to Hidden?). Y Point (Set to Hidden?). If all goes well, you should see your list of attributes update accordingly: We will add just one additional attribute. Click the Create an Attribute… button again: Name it Mouse State. Change its Type to Text. Do not hide this attribute. Click OK to commit and add the attribute to your behavior. Excellent work, at this point, you have created all of the attributes you will need for the Player Handler behavior! Custom events We need to create a few custom events in order to complete the code for this game prototype. For programmers, custom events are like functions that don't accept parameters. You simply trigger them at will to execute a reusable bunch of code. To accept parameters, you must create a custom block: Click Add Event. Browse to Advanced.. Select Custom Event: You will see that a second event, simply called Custom Event, has been added to our list of events: Now, double-click on the Custom Event in the events stack to change its label to Obj Click Check (For readability purposes, this does not affect the event's name in code, and is completely ignored by Stencyl): Now, let's set the name as it will be used in code. Click between When and happens, and insert the name ObjectClickCheck: From now on, whenever we want to call this custom event in our code, we will refer to it as ObjectClickCheck. Go back to the When Updating event by selecting it from the events stack on the left. 
We are going to add a special block to this event, which calls the custom event we created just a moment ago: In the blocks palette, go to Behaviour | Triggers | For Actor, then click and drag the highlighted block onto the canvas: Drop the selected block inside of the Always block, and fill in the fields as shown (please note that I have deliberately excluded the space between Player and Handler in the behavior name, so as to demonstrate the debugging workflow. This will be corrected in a later part of the article): Now, ObjectClickCheck will be executed for every iteration of the game loop! It is usually a good practice to split up your code like this, rather than having it all in one really long event. That would be confusing, and terribly hard to sift through when behaviors become more complex! Here is a chance to assess what you have learnt from this article thus far. We will create a second custom event; see if you can achieve this goal using only the skeleton guide mentioned next. If you struggle, simply refer back to the detailed steps we followed for the ObjectClickCheck event: Click Add Event | Advanced | Custom Event. A new event will appear at the bottom of the events pane. Double Click on the event in the events pane to change its label to Handle Dir Clicks for readability purposes. Between When and happens, enter the name HandleDirectionClicks. This is the handle we will use to refer to this event in code. Go back to the Updated event, right click on the Trigger event in behavior for self block that is already in the always block, and select copy from the menu. Right-click anywhere on the canvas and select paste from the menu to create an exact duplicate of the block. Change the event being triggered from ObjectClickCheck to HandleDirectionClicks. Keep the value PlayerHandler for the behavior field. Drag and drop the new block so that it sits immediately under the original. Holding Alt on the keyboard, and clicking and dragging on a block, creates a duplicate of that block. Were you successful? If so, you should see these changes and additions in your behavior (note that the order of the events in the events pane does not affect the game logic, or the order in which code is executed). Learning to create and utilize custom events in Stencyl is a huge step towards mastering the tool, so congratulations on having come this far! Testing and debugging As with all fields of programming and software development, it is important to periodically and iteratively test your code. It's much easier to catch and repair mistakes this way. On that note, let's test the code we've written so far, using print blocks. Browse to and select Flow | Debug | print from the blocks palette: Now, drag a copy of this block into both of your custom events, snapping it neatly into the when happens blocks as you do so. For the ObjectClickCheck event, type Object Click Detected into the print block For the HandleDirectionClicks event, type Directional Click Detected into the print block. We are almost ready to test our code. Since this is an Actor Behavior, however, and we have not yet attached it to our Cowboy actor, nothing would happen yet if we ran the code. We must also add an instance of the Cowboy actor to our scene: Click the Attach to Actor Type button to the top right of the blocks palette: Choose the Cowboy Actor from the ensuing list, and click OK to commit. Go back to the Dashboard, and open up the Level 1 scene. 
In the Palette to the right, switch from Tiles to Actors, and select the Cowboy actor: Ensure Layer 0 is selected (as actors cannot be placed on background layers). Click on the canvas to place an instance of the actor in the scene, then click on the Inspector, and change the x and y Scale of the actor to 0.8: Well done! You've just added your first behavior to an Actor Type, and added your first Actor Instance to a scene! We are now ready to test our code. First, Click the Log Viewer button on the toolbar: This will launch the Log Viewer. The Log Viewer will open up, at which point we need only set Platform to Flash (Player), and click the Test Game Button to compile and execute our code: After a few moments, if you have followed all of the steps correctly, you will see that the game windows opens on the screen and a number of events appear on the Log Viewer. However, none of these events have anything to do with the print blocks we added to our custom events. Hence, something has gone wrong, and must be debugged. What could it be? Well, since the blocks simply are not executing, it's likely a typo of some kind. Let's look at the Player Handler again, and you'll see that within the Updated event, we've referred to the behavior name as PlayerHandler in both trigger event blocks, with no space inserted between the words Player and Handler: Update both of these fields to Player Handler, and be sure to include the space this time, so that it looks like the following (To avoid a recurrence of this error, you may wish to use the dropdown menu by clicking the downwards pointing grey arrow, then selecting Behavior Names to choose your behavior from a comprehensive list): Great work! You have successfully completed your first bit of debugging in Stencyl. Click the Test Game button again. After the game window has opened, if you scroll down to the bottom of the Log Viewer, you should see the following events piling up: These INFO events are being triggered by the print blocks we inserted into our custom events, and prove that our code is now working. Excellent job! Let's move on to a new Actor; prepare to meet Dastardly Dan! Adding the Balloon Let's add the balloon actor to our game, and insert it into Level 1: Go to the Dashboard, and select Actor Types from the RESOURCES menu. Press Click here to create a new Actor Type. Name it Balloon, and click Create. Click on This Actor Type contains no animations. Click here to add an animation. Change the text in the Name field to Default. Un-check looping?. Press the Click here to add a frame. button. The Import Frame from Image Strip window appears. Change the Scale to 4x. Click Choose Image... then browse to Game AssetsGraphicsActor Animations and select Balloon.png. Keep Columns and Rows set to 1, and click Add to commit this frame to the animation. All animations are created with a box collision shape by default. In actuality, the Balloon actor requires no collisions at all, so let's remove it. Go to the Collision context, select the Default box, and press Delete on the keyboard: The Balloon Actor Type is now free of collision shapes, and hence will not interact physically with other elements of our game levels. Next, switch to the Physics context: Set the following attributes: Set What Kind of Actor Type? to Normal. Set Can Rotate? To No. This will disable all rotational physical forces and interactions. We can still rotate the actor by setting its rotation directly in the code, however. Set Affected by Gravity? to No. 
We will be handling the downward trajectory of this actor ourselves, without using the gravity implemented by the physics engine. Just before we add this new actor to Level 1, let's add a behavior or two. Switch to the Behaviors context: Then, follow these steps: This Actor Type currently has no attached behaviors. Click Add Behavior, at the bottom left hand corner of the screen: Under FROM YOUR LIBRARY, go to the Motion category, and select Always Simulate. The Always Simulate behavior will make this actor operational, even if it is not on the screen, which is a desirable result in this case. It also prevents Stencyl from deleting the actor when it leaves the scene, which it would automatically do in an effort to conserve memory, if we did not explicitly dictate otherwise. Click Choose to add it to the behaviors list for this Actor Type. You should see it appear in the list: Click Add Behavior again. This time, under FROM YOUR LIBRARY, go the Motion category once more, and this time select Wave Motion (you'll have to scroll down the list to see it). Click Choose to add it to the behavior stack. You should see it sitting under the Always Simulate behavior: Configuring prefab behaviors Prefab behaviors (also called shipped behaviors) enable us to implement some common functionality, without reinventing the wheel, so to speak. The great thing about these prefab behaviors, which can be found in the behavior library, is that they can be used as templates, and modified at will. Let's learn how to add and modify a couple of these prefab behaviors now. Some prefab behaviors have exposed attributes which can be configured to suit the needs of the project. The Wave Motion behavior is one such example. Select it from the stack, and configure the attributes as follows: Set Direction to Horizontal from the dropdown menu. Set Start Speed to 5. Set Amplitude to 64. Set Wavelength to 128. Fantastic! Now let's add an Instance of the Balloon actor to Level 1: Click the Add to Scene button at the top right corner of your view. Select the Level 1 scene. Select the Balloon. Click on the canvas, below the Cowboy actor, to place an instance of the Balloon in the scene: Modifying prefab behaviors Before we test the game one last time, we must quickly add a prefab behavior to the Cowboy Actor Type, modifying it slightly to suit the needs of this game (for instance, we will need to create an offset value for the y axis, so the PC is not always at the centre of the screen): Go to the Dashboard, and double click on the Cowboy from the Actor Types list. Switch to the Behavior Context. Click Add Behavior, as you did previously when adding prefab behaviors to the Balloon Actor Type. This time, under FROM YOUR LIBRARY, go to the Game category, and select Camera Follow. As the name suggests, this is a simple behavior that makes the camera follow the actor it is attached to. Click Choose to commit this behavior to the stack, and you should see this: Click the Edit Behavior button, and it will open up in the Behavior Designer: In the Behavior Designer, towards the bottom right corner of the screen, click on the Attributes tab: Once clicked, you will see a list of all the attributes in this behavior appear in the previous window. Click the Add Attribute button: Perform the following steps: Set the Name to Y Offset. Change the Type to Number. Leave the attribute unhidden. 
Click OK to commit new attribute to the attribute stack: We must modify the set IntendedCameraY block in both the Created and the Updated events: Holding Shift, click and drag the set IntendedCameraY block out onto the canvas by itself: Drag the y-center of Self block out like the following: Click the little downward pointing grey arrow at the right of the empty field in the set intendedCameraY block , and browse to Math | Arithmetic | Addition block: Drag the y-center of Self block into the left hand field of the Add block: Next, click the small downward pointing grey arrow to the right of the right hand field of the Addition block to bring up the same menu as before. This time, browse to Attributes, and select Y Offset: Now, right click on the whole block, and select Copy (this will copy it to the clipboard), then simply drag it back into its original position, just underneath set intendedCameraX: Switch to the Updated Event from the events pane on the left, hold Shift, then click and drag set intendedCameraY out of the Always block and drop it in the trash can, as you won't need it anymore. Right-click and select Paste to place a copy of the new block configuration you copied to the clipboard earlier: Click and drag the pasted block so that it appears just underneath the set intendedCameraX block, and save your changes: Testing the changes Go back to the Cowboy Actor Type, and open the Behavior context; click File | Reload Document (Ctrl-R or Cmd-R) to update all the changes. You should see a new configurable attribute for the Camera Follow Behavior, called Y Offset. Set its value to 70: Excellent! Now go back to the Dashboard and perform the following: Open up Level 1 again. Under Physics, set Gravity (Vertical) to 8.0. Click Test Game, and after a few moments, a new game window should appear. At this stage, what you should see is the Cowboy shooting down the hill with the camera following him, and the Balloon floating around above him. Summary In this article, we learned the basics of creating behaviors, adding and setting Attributes of various types, adding and modifying prefab behaviors, and even some rudimentary testing and debugging. Give yourself a pat on the back; you've learned a lot so far! Resources for Article: Further resources on this subject: Form Handling [article] Managing and Displaying Information [article] Background Animation [article]

Custom Coding with Apex

Packt
27 Apr 2015
18 min read
In this article by Chamil Madusanka, author of the book Learning Force.com Application Development, you will learn about the custom coding in Apex and also about triggers. We have used many declarative methods such as creating the object's structure, relationships, workflow rules, and approval process to develop the Force.com application. The declarative development method doesn't require any coding skill and specific Integrated Development Environment (IDE). This article will show you how to extend the declarative capabilities using custom coding of the Force.com platform. Apex controllers and Apex triggers will be explained with examples of the sample application. The Force.com platform query language and data manipulation language will be described with syntaxes and examples. At the end of the article, there will be a section to describe bulk data handling methods in Apex. This article covers the following topics: Introducing Apex Working with Apex (For more resources related to this topic, see here.) Introducing Apex Apex is the world's first on-demand programming language that allows developers to implement and execute business flows, business logic, and transactions on the Force.com platform. There are two types of Force.com application development methods: declarative developments and programmatic developments. Apex is categorized under the programmatic development method. Since Apex is a strongly-typed, object-based language, it is connected with data in the Force.com platform and data manipulation using the query language and the search language. The Apex language has the following features: Apex provides a lot of built-in support for the Force.com platform features such as: Data Manipulation Language (DML) with the built-in exception handling (DmlException) to manipulate the data during the execution of the business logic. Salesforce Object Query Language (SOQL) and Salesforce Object Search Language (SOSL) to query and retrieve the list of sObjects records. Bulk data processing on multiple records at a time. Apex allows handling errors and warning using an in-built error-handling mechanism. Apex has its own record-locking mechanism to prevent conflicts of record updates. Apex allows building custom public Force.com APIs from stored Apex methods. Apex runs in a multitenant environment. The Force.com platform has multitenant architecture. Therefore, the Apex runtime engine obeys the multitenant environment. It prevents monopolizing of shared resources using the guard with limits. If any particular Apex code violates the limits, error messages will be displayed. Apex is hosted in the Force.com platform. Therefore, the Force.com platform interprets, executes, and controls Apex. Automatically upgradable and versioned: Apex codes are stored as metadata in the platform. Therefore, they are automatically upgraded with the platform. You don't need to rewrite your code when the platform gets updated. Each code is saved with the current upgrade version. You can manually change the version. It is easy to maintain the Apex code with the versioned mechanism. Apex can be used easily. Apex is similar to Java syntax and variables. The syntaxes and semantics of Apex are easy to understand and write codes. Apex is a data-focused programming language. Apex is designed for multithreaded query and DML statements in a single execution context on the Force.com servers. Many developers can use database stored procedures to run multiple transaction statements on the database server. 
Apex is different from other databases when it comes to stored procedures; it doesn't attempt to provide general support for rendering elements in the user interface. The execution context is one of the key concepts in Apex programming. It influences every aspect of software development on the Force.com platform. Apex is a strongly-typed language that directly refers to schema objects and object fields. If there is any error, it fails the compilation. All the objects, fields, classes, and pages are stored in metadata after successful compilation. Easy to perform unit testing. Apex provides a built-in feature for unit testing and test execution with the code coverage. Apex allows developers to write the logic in two ways: As an Apex class: The developer can write classes in the Force.com platform using Apex code. An Apex class includes action methods which related to the logic implementation. An Apex class can be called from a trigger. A class can be associated with a Visualforce page (Visualforce Controllers/Extensions) or can act as a supporting class (WebService, Email-to-Apex service/Helper classes, Batch Apex, and Schedules). Therefore, Apex classes are explicitly called from different places on the Force.com platform. As a database trigger: A trigger is executed related to a particular database interaction of a Force.com object. For example, you can create a trigger on the Leave Type object that fires whenever the Leave Type record is inserted. Therefore, triggers are implicitly called from a database action. Apex is included in the Unlimited Edition, Developer Edition, Enterprise Edition, Database.com, and Performance Edition. The developer can write Apex classes or Apex triggers in a developer organization or a sandbox of a production organization. After you finish the development of the Apex code, you can deploy the particular Apex code to the production organization. Before you deploy the Apex code, you have to write test methods to cover the implemented Apex code. Apex code in the runtime environment You already know that Apex code is stored and executed on the Force.com platform. Apex code also has a compile time and a runtime. When you attempt to save an Apex code, it checks for errors, and if there are no errors, it saves with the compilation. The code is compiled into a set of instructions that are about to execute at runtime. Apex always adheres to built-in governor limits of the Force.com platform. These governor limits protect the multitenant environment from runaway processes. Apex code and unit testing Unit testing is important because it checks the code and executes the particular method or trigger for failures and exceptions during test execution. It provides a structured development environment. We gain two good requirements for this unit testing, namely, best practice for development and best practice for maintaining the Apex code. The Force.com platform forces you to cover the Apex code you implemented. Therefore, the Force.com platform ensures that you follow the best practices on the platform. Apex governors and limits Apex codes are executed on the Force.com multitenant infrastructure and the shared resources are used across all customers, partners, and developers. When we are writing custom code using Apex, it is important that the Apex code uses the shared resources efficiently. Apex governors are responsible for enforcing runtime limits set by Salesforce. It discontinues the misbehaviors of the particular Apex code. 
Apex code and security

The Force.com platform has component-based security, record-based security, and a rich security framework that includes profiles, record ownership, and sharing. Normally, Apex code executes in system mode (not in user mode), which means the code has access to all data and components. However, you can make an Apex class run in user mode by declaring it with a sharing keyword. The with sharing and without sharing keywords designate whether the sharing rules of the running user are considered for that particular Apex class. Use the with sharing keyword when declaring a class to enforce the sharing rules that apply to the current user. Use the without sharing keyword when declaring a class to ensure that the sharing rules for the current user are not enforced; for example, you may want to explicitly turn off sharing rule enforcement when a class would otherwise acquire sharing rules after being called from another class that is declared with sharing. The profile can also maintain permissions for developing Apex code and accessing Apex classes: the Author Apex permission is required to develop Apex code, and access to Apex classes can be limited through the profile by adding or removing the granted Apex classes. Although triggers are built using Apex code, their execution cannot be controlled by the user; they depend on the particular operation, and if the user has permission for that operation, the trigger will fire.

Apex code and web services

Like other programming languages, Apex supports communication with the outside world through web services. Apex methods can be exposed as a web service so that an external system can invoke them to execute particular logic. When you write a web service method, you must use the webservice keyword at the beginning of the method declaration; variables can also be exposed with the webservice keyword. After you create the webservice method, you can generate the Web Services Description Language (WSDL) file, which can be consumed by an external application. Apex supports both Simple Object Access Protocol (SOAP) and Representational State Transfer (REST) web services. A small sketch that combines the sharing and webservice keywords appears at the end of this section.

Apex and metadata

Because Apex is a proprietary language, it is strongly typed to Salesforce metadata. The same sObjects and fields that are created through the declarative setup menu can be referred to in Apex. As with other Force.com features, the system raises an error if you try to delete an object or field that is used within Apex. Apex is not technically auto-upgraded with each new Salesforce release, because it is saved with a specific version of the API; nevertheless, like other Force.com features, it will continue to work with future versions of Salesforce applications. Force.com application development tools use this metadata.
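The following is a minimal sketch, not the book's sample code, of a class declared with sharing that exposes one SOAP web service method; the class name, the method, and the Employee__c lookup field on Leave__c are assumptions made purely for illustration.

global with sharing class LeaveWebService {
    // Exposed to external systems; a WSDL can be generated for this class
    webservice static Integer countLeavesForEmployee(Id employeeId) {
        // Because the class is declared with sharing, only records visible
        // to the running user are counted (Employee__c is a hypothetical field)
        return [SELECT COUNT() FROM Leave__c WHERE Employee__c = :employeeId];
    }
}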
Working with Apex

Before you start coding with Apex, you need to learn a few basic things.

Apex basics

Apex comes with its own syntactical framework. Like Java, Apex is strongly typed and is an object-based language, so if you have some experience with Java, Apex will be easy to understand. The following lists summarize the similarities and differences between Apex and Java:

Similarities:
Both languages have classes, inheritance, polymorphism, and other common object-oriented programming features.
Both languages have extremely similar syntax and notations.
Both languages are compiled, strongly typed, and transactional.

Differences:
Apex runs in a multitenant environment and is very controlled in its invocations and governor limits.
Apex is not case sensitive.
Apex is on-demand and is compiled and executed in the cloud.
Apex is not a general-purpose programming language; it is a proprietary language used for specific business logic functions.
Apex requires unit testing for deployment into a production environment.

This section will not discuss everything that is included in the Apex documentation from Salesforce, but it will cover the topics that are essential for understanding the concepts discussed in this article. With this basic knowledge of Apex, you can create Apex code on the Force.com platform.

Apex data types

In Apex classes and triggers, we use variables that contain data values. Every variable must be bound to a data type, and that variable can hold only values of the same data type. All variables and expressions have one of the following data types:

Primitives
Enums
sObjects
Collections
An object created from a user-defined or system-defined class
Null (for the null constant)

Primitive data types

Apex uses the same primitive data types as the web services API, most of which are similar to their Java counterparts. It may seem that Apex primitive variables are passed by value, but they actually use immutable references, similar to Java string behavior. The following are the primitive data types of Apex (a short declaration sketch follows this list):

Boolean: A value that can only be assigned true, false, or null.
Date, Datetime, and Time: A Date value indicates a particular day and does not contain any information about time. A Datetime value indicates a particular day and time. A Time value indicates a particular time. Date, Datetime, and Time values must always be created with a system static method.
ID: Any valid Force.com record identifier, in its 18-character or 15-character version.
Integer, Long, Double, and Decimal: Integer is a 32-bit number that does not include a decimal point; Integers have a minimum value of -2,147,483,648 and a maximum value of 2,147,483,647. Long is a 64-bit number that does not include a decimal point; use this data type when you need a range of values wider than the one provided by Integer. Double is a 64-bit number that includes a decimal point. Both Long and Double have a minimum value of -2^63 and a maximum value of 2^63-1. Decimal is a number that includes a decimal point and has arbitrary precision.
String: A String is any set of characters surrounded by single quotes. Strings have no limit on the number of characters they can include, but the heap size limit is used to ensure that an Apex program does not grow too large.
Blob: A Blob is a collection of binary data stored as a single object. A Blob can be accepted as a web service argument, stored in a document, or sent as an attachment.
Object: This can be used as the base type for any other data type. Objects are supported for casting.
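The following is a minimal declaration sketch for these primitive types; the variable names and values are arbitrary and only illustrate the syntax.

Boolean isApproved = true;
Date startDate = Date.today();
Datetime createdAt = Datetime.now();
Time lunchTime = Time.newInstance(12, 30, 0, 0);   // hour, minute, second, millisecond
Id currentUserId = UserInfo.getUserId();           // an 18-character record ID
Integer daysCount = 14;
Long bigNumber = 2147483648L;
Double ratio = 3.14159;
Decimal salary = 1250.75;
String employeeName = 'John';
Blob payload = Blob.valueOf(employeeName);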
Enum data types

Enum (or enumerated list) is an abstract data type that stores one value out of a finite set of specified identifiers. To define an Enum, use the enum keyword in the declaration and then define the list of values. You can define and use an enum in the following way:

public enum Status {NEW, APPROVED, REJECTED, CANCELLED}

The preceding enum has four values: NEW, APPROVED, REJECTED, and CANCELLED. By creating this enum, you have created a new data type called Status that can be used like any other data type for variables, return types, and method arguments.

Status leaveStatus = Status.NEW;

Apex provides enums for built-in concepts such as API errors (System.StatusCode). System-defined enums cannot be used in web service methods.

sObject data types

sObjects (short for Salesforce Objects) are standard or custom objects that store record data in the Force.com database. There is also an sObject data type in Apex that is the programmatic representation of these sObjects and their data in code. Developers refer to sObjects and their fields by their API names, which can be found in the schema browser. sObject and field references within Apex are validated against actual object and field names when the code is written. Force.com tracks the objects and fields used within Apex to prevent users from making the following changes:

Changing a field or object name
Converting from one data type to another
Deleting a field or object
Organization-wide changes such as record sharing

It is possible to declare variables of the generic sObject data type. The new operator still requires a concrete sObject type, so the instances are all specific sObjects. The following is a code example:

sObject s = new Employee__c();

Casting works as expected because each row knows its runtime type and can be cast back to that type. The following cast works fine:

Employee__c e = (Employee__c)s;

However, the following cast will generate a runtime exception because of the data type collision:

Leave__c leave = (Leave__c)s;

The sObject superclass only has the ID variable, so only the ID can be accessed through a generic sObject reference. Generic sObjects can also be used with collections and DML operations, although only concrete types can be instantiated. Collections are described in the upcoming section, and DML operations are discussed in the data manipulation section for the Force.com platform. Let's have a look at the following code:

sObject[] sList = new Employee__c[0];
List<Employee__c> eList = (List<Employee__c>)sList;
Database.insert(sList);

Collection data types

Collection data types store groups of elements of other primitive, composite, or collection data types. There are three different types of collections in Apex:

List: A list is an ordered collection of primitives or composite data types distinguished by its index. Each element in a list contains two pieces of information: an index (an integer) and a value (the data). The index of the first element is zero. You can define an Apex list in the following way:

List<DataType> listName = new List<DataType>();
List<String> sList = new List<String>();

There are built-in methods that can be used with lists, such as adding/removing elements at the end of the list, getting/setting values at a particular index, and sizing the list by obtaining the number of elements. A full set of list methods is listed at http://www.salesforce.com/us/developer/docs/dbcom_apex250/Content/apex_methods_system_list.htm.
The Apex list is used in the following way:

List<String> sList = new List<String>();
sList.add('string1');
sList.add('string2');
sList.add('string3');
sList.add('string4');
Integer sListSize = sList.size(); // this returns the value 4
sList.get(3); // this returns the value 'string4'

Apex allows developers familiar with the standard array syntax to use it interchangeably with the list syntax. The main difference is the use of square brackets, as shown in the following code:

String[] sList = new String[4];
sList[0] = 'string1';
sList[1] = 'string2';
sList[2] = 'string3';
sList[3] = 'string4';
Integer sListSize = sList.size(); // this returns the value 4

Lists, as well as maps, can be nested up to five levels deep. Therefore, you can create a list of lists in the following way:

List<List<String>> nestedList = new List<List<String>>();

Set: A set is an unordered collection of elements of one primitive data type or of sObjects that must have unique values. The Set methods are listed at http://www.salesforce.com/us/developer/docs/dbcom_apex230/Content/apex_methods_system_set.htm. Similar to the declaration of a List, you can define a Set in the following way:

Set<DataType> setName = new Set<DataType>();
Set<String> setName = new Set<String>();

There are built-in methods for sets, including adding/removing elements to/from the set, checking whether the set contains certain elements, and obtaining the size of the set (a short usage sketch follows the Map description below).

Map: A map is an unordered collection of unique keys of one primitive data type and their corresponding values. The Map methods are listed at http://www.salesforce.com/us/developer/docs/dbcom_apex250/Content/apex_methods_system_map.htm. You can define a Map in the following way:

Map<PrimitiveKeyDataType, DataType> mapName = new Map<PrimitiveKeyDataType, DataType>();
Map<Integer, String> mapName = new Map<Integer, String>();
Map<Integer, List<String>> sMap = new Map<Integer, List<String>>();

Maps are often used to map IDs to sObjects. There are built-in methods that you can use with maps, including adding/removing elements, getting the value for a particular key, and checking whether the map contains certain keys. You can use these methods as follows:

Map<Integer, String> sMap = new Map<Integer, String>();
sMap.put(1, 'string1'); // put a key-value pair
sMap.put(2, 'string2');
sMap.put(3, 'string3');
sMap.put(4, 'string4');
sMap.get(2); // retrieve the value for key 2
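To complement the List and Map snippets above, the following is a minimal sketch of the Set methods mentioned in the Set description; the values are purely illustrative.

Set<String> leaveTypes = new Set<String>();
leaveTypes.add('Annual');
leaveTypes.add('Casual');
leaveTypes.add('Annual');                            // duplicates are ignored; values stay unique
Boolean hasCasual = leaveTypes.contains('Casual');   // true
Integer typeCount = leaveTypes.size();               // 2
leaveTypes.remove('Casual');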
Apex logic and loops

Like all programming languages, Apex has syntax for implementing conditional logic (IF-THEN-ELSE) and loops (for, do-while, and while). The following explains the conditional logic and loops in Apex:

IF: Conditional IF statements in Apex are similar to Java. The IF-THEN statement is the most basic of all the control flow statements; it tells your program to execute a certain section of code only if a particular test evaluates to true. The IF-THEN-ELSE statement provides a secondary path of execution when the IF clause evaluates to false:

if (Boolean_expression) {
    statement;
    statement;
    statement;
    statement;
} else {
    statement;
    statement;
}

For: There are three variations of the FOR loop in Apex, which are as follows:

FOR(initialization; Boolean_exit_condition; increment) {
    statement;
}

FOR(variable : list_or_set) {
    statement;
}

FOR(variable : [inline_soql_query]) {
    statement;
}

All loops allow the following commands:

break: This is used to exit the loop
continue: This is used to skip to the next iteration of the loop

While: The while loop is similar, but the condition is checked before the first iteration, as shown in the following code:

while (Boolean_condition) {
    code_block;
}

Do-While: The do-while loop executes repeatedly as long as a particular Boolean condition remains true. The condition is not checked until after the first pass has executed, as shown in the following code:

do {
    // code_block;
} while (Boolean_condition);

Summary

In this article, you learned how to develop custom code on the Force.com platform, including Apex classes and triggers, and you were introduced to the platform's query and search languages.

Resources for Article:

Further resources on this subject:
Force.com: Data Management [article]
Configuration in Salesforce CRM [article]
Learning to Fly with Force.com [article]