Apache Solr and Big Data – integration with MongoDB

Packt
27 Apr 2015
9 min read
In this article by Hrishikesh Vijay Karambelkar, author of the book Scaling Big Data with Hadoop and Solr - Second Edition, we will look at Apache Solr and MongoDB together. In an enterprise, data is generated by all the software that participates in day-to-day operations. This data comes in different formats, and bringing it in for big-data processing requires a storage system that is flexible enough to accommodate data with varying data models. A NoSQL database, by its design, is best suited for this kind of storage requirement. One of the primary objectives of NoSQL is horizontal scaling, that is, the P (partition tolerance) in the CAP theorem, but this comes at the cost of sacrificing consistency or availability. Visit http://en.wikipedia.org/wiki/CAP_theorem to learn more about the CAP theorem. (For more resources related to this topic, see here.)

What is NoSQL and how is it related to Big Data?

As we have seen, data models for NoSQL differ completely from those of a relational database. With a flexible data model, it becomes very easy for developers to integrate quickly with a NoSQL database and to bring in large volumes of data from different data sources. This makes NoSQL databases ideal for Big Data storage, since Big Data demands that different data types be brought together under one umbrella. NoSQL also offers several data models, such as the key-value store, the document store, and Big Table-style storage.

In addition to a flexible schema, NoSQL offers scalability and high performance, which are again among the most important factors to consider when working with big data. NoSQL was designed to be a distributed database. Where traditional relational stores rely on the high computing power and large memory of a centralized system, NoSQL can run on low-cost, commodity hardware. These servers can be added to or removed from the running cluster dynamically, making a NoSQL database easier to scale. NoSQL enables most advanced database features, such as data partitioning, index sharding, distributed queries, caching, and so on.

Although NoSQL offers optimized storage for big data, it may not be able to replace the relational database. Relational databases offer ACID transactions, high CRUD throughput, data integrity, and a structured database design approach, all of which are required in many applications and which NoSQL may not support. Hence, NoSQL is best suited for Big Data workloads where there is little need for the data to be transactional.

MongoDB at a glance

MongoDB is one of the most popular NoSQL databases (just like Cassandra). MongoDB can store documents with arbitrary schemas in its own document-oriented storage, and it uses JSON-based documents for all communication with the server. The database is designed to work with heavy data volumes, and today many organizations use MongoDB for various enterprise applications.

MongoDB provides high availability and load balancing. Each data unit is replicated, and the combination of the data with its copies is called a replica set. Replicas in MongoDB are either primary or secondary. The primary is the active replica, which is used for direct read-write operations, while a secondary replica works as a backup for the primary. MongoDB supports searches by field, range queries, and regular expression searches. Queries can return specific fields of documents and can also include user-defined JavaScript functions. Any field in a MongoDB document can be indexed. More information about MongoDB is available at https://www.mongodb.org/.
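To make these query capabilities concrete, the following is a minimal sketch using the legacy MongoDB Java driver (the 2.x API current at the time of writing). It is illustrative only: the database name (solr-test), the collection (zips), and the city value are taken from the sample data set imported later in this article, while the population threshold is an arbitrary choice.

    import com.mongodb.BasicDBObject;
    import com.mongodb.DB;
    import com.mongodb.DBCollection;
    import com.mongodb.DBCursor;
    import com.mongodb.MongoClient;

    public class ZipQueryExample {
        public static void main(String[] args) throws Exception {
            // Connect to a local MongoDB instance on the default port
            MongoClient client = new MongoClient("localhost", 27017);
            DB db = client.getDB("solr-test");            // database used by the sample data
            DBCollection zips = db.getCollection("zips"); // collection loaded via mongoimport

            // Search by field: all documents whose city is ACMAR
            DBCursor byCity = zips.find(new BasicDBObject("city", "ACMAR"));
            while (byCity.hasNext()) {
                System.out.println(byCity.next());
            }

            // Range query: documents with a population greater than 50,000
            DBCursor byPopulation = zips.find(
                new BasicDBObject("pop", new BasicDBObject("$gt", 50000)));
            System.out.println("Large zip codes: " + byPopulation.count());

            client.close();
        }
    }

The same find() criteria shown here map directly onto the shell queries used later in this article, so the sketch can double as a quick sanity check after the import.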
The data in MongoDB is eventually consistent. Apache Solr can be used together with MongoDB to enable search capabilities on a MongoDB-based data store. Unlike Cassandra, where the Solr indexes are stored directly in Cassandra through Solandra, integrating MongoDB with Solr keeps the indexes in Solr's own optimized storage.

There are various ways in which the data residing in MongoDB can be analyzed and searched. MongoDB's replication works by recording all operations made on a database in a log, called the oplog (operation log). The oplog keeps a rolling record of all operations that modify the data stored in your databases. Many implementers suggest reading this log with a standard file I/O program and pushing the data directly to Apache Solr using cURL or SolrJ. Since the oplog is a capped collection with an upper limit on its storage, it is feasible to keep such querying in sync with Apache Solr. The oplog also provides tailable cursors on the database, which return documents in their natural order, thereby preserving the order in which they were loaded into MongoDB.

However, we are going to look at a different approach. Let's look at the following schematic diagram: in this case, MongoDB is exposed as a database to Apache Solr through a custom database driver. Apache Solr reads MongoDB data through the DataImportHandler, which in turn calls the JDBC-based MongoDB driver to connect to MongoDB and run the data import utilities. Since MongoDB supports replica sets, it manages the distribution of data across nodes. It also supports sharding, just like Apache Solr.

Installing MongoDB

To install MongoDB in your development environment, follow these steps:

Download the latest version of MongoDB from https://www.mongodb.org/downloads for your operating system, and unzip the downloaded archive. MongoDB ships with a default set of command-line components and utilities:

    bin/mongod: The database process.
    bin/mongos: The sharding controller.
    bin/mongo: The database shell (uses interactive JavaScript).

Now, create a directory that MongoDB will use for data storage, and run the following command to start a single-node server:

    $ bin/mongod --dbpath <path to your data directory> --rest

Here, the --rest parameter enables support for a simple REST API that can be used to get the server status. Once the server is started, access http://localhost:28017 from your favorite browser, and you should see the administration status page.

Now that you have successfully installed MongoDB, try loading the sample data set provided with the book by opening a new command-line interface. Change the directory to $MONGODB_HOME and run the following command:

    $ bin/mongoimport --db solr-test --collection zips --file "<file-dir>/samples/zips.json"

Please note that the database name is solr-test. You can inspect the stored data using the MongoDB CLI by running the following set of commands from your shell:

    $ bin/mongo
    MongoDB shell version: 2.4.9
    connecting to: test
    Welcome to the MongoDB shell.
    For interactive help, type "help".
    For more comprehensive documentation, see
        http://docs.mongodb.org/
    Questions?
    Try the support group
        http://groups.google.com/group/mongodb-user
    > use test
    switched to db test
    > show dbs
    exampledb  0.203125GB
    local      0.078125GB
    test       0.203125GB
    > db.zips.find({city:"ACMAR"})
    { "city" : "ACMAR", "loc" : [ -86.51557, 33.584132 ], "pop" : 6055, "state" : "AL", "_id" : "35004" }

Congratulations! MongoDB is installed successfully.

Creating Solr indexes from MongoDB

To use MongoDB as a database from Solr, you will need a JDBC driver built for MongoDB. However, the Mongo-JDBC driver has certain limitations, and it does not work with the Apache Solr DataImportHandler. So, I have extended Mongo-JDBC to work under the Solr DataImportHandler. The project repository is available at https://github.com/hrishik/solr-mongodb-dih. Let's look at the setup procedure for enabling MongoDB-based Solr integration:

You may not require the complete package from the solr-mongodb-dih repository, but just the jar file, which can be downloaded from https://github.com/hrishik/solr-mongodb-dih/tree/master/sample-jar. You will also need the following additional jar files:

    jsqlparser.jar
    mongo.jar

These jars are available for download with the book Scaling Big Data with Hadoop and Solr, Second Edition. In your Solr setup, copy these jar files into the library path, that is, the $SOLR_WAR_LOCATION/WEB-INF/lib folder. Alternatively, point your container classpath variable to them.

Using the simple Java source code DataLoad.java (https://github.com/hrishik/solr-mongodb-dih/blob/master/examples/DataLoad.java), populate the database with a sample schema and tables that you will later load into Apache Solr. Now create a data source file (data-source-config.xml) as follows:

    <dataConfig>
      <dataSource name="mongod" type="JdbcDataSource"
                  driver="com.mongodb.jdbc.MongoDriver"
                  url="mongodb://localhost/solr-test"/>
      <document>
        <entity name="nameage" dataSource="mongod" query="select name, price from grocery">
          <field column="name" name="name"/>
          <field column="name" name="id"/>
          <!-- other fields -->
        </entity>
      </document>
    </dataConfig>

Copy solr-dataimporthandler-*.jar from your contrib directory to a container/application library path. Then modify $SOLR_COLLECTION_ROOT/conf/solr-config.xml with the DIH entry:

    <!-- DIH starts -->
    <requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
      <lst name="defaults">
        <str name="config"><path to config>/data-source-config.xml</str>
      </lst>
    </requestHandler>
    <!-- DIH ends -->

Once this configuration is done, you are ready to test it out. Access http://localhost:8983/solr/dataimport?command=full-import from your browser to run a full import on Apache Solr, where you will see that your import handler has run successfully and has loaded the data into the Solr store, as shown in the following screenshot. You can validate the content created by your new MongoDB DIH by accessing the Solr Admin page and running a query; a small SolrJ sketch for doing the same check from code follows after this article's summary.

Using this connector, you can perform full-import operations on various data elements. Since MongoDB is not a relational database, it does not support join queries; however, it supports selects, order by, and so on.

Summary

In this article, we have looked at the distributed aspects of enterprise search by working with Apache Solr and MongoDB together.

Resources for Article:

Further resources on this subject: Evolution of Hadoop [article] In the Cloud [article] Learning Data Analytics with R and Hadoop [article]
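As referenced in the validation step above, here is a minimal SolrJ sketch for checking the imported documents from code rather than from the Admin UI. It is a sketch under assumptions: the Solr URL, the core name (collection1), and the queried field (name) may differ in your setup.

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.SolrDocument;

    public class ValidateImport {
        public static void main(String[] args) throws Exception {
            // Core name "collection1" is an assumption; adjust the URL to your setup
            HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");

            // Match everything that the DataImportHandler indexed
            SolrQuery query = new SolrQuery("*:*");
            query.setRows(10);

            QueryResponse response = solr.query(query);
            System.out.println("Documents indexed: " + response.getResults().getNumFound());
            for (SolrDocument doc : response.getResults()) {
                System.out.println(doc.getFieldValue("name"));
            }
            solr.shutdown();
        }
    }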

Custom Coding with Apex

Packt
27 Apr 2015
18 min read
In this article by Chamil Madusanka, author of the book Learning Force.com Application Development, you will learn about the custom coding in Apex and also about triggers. We have used many declarative methods such as creating the object's structure, relationships, workflow rules, and approval process to develop the Force.com application. The declarative development method doesn't require any coding skill and specific Integrated Development Environment (IDE). This article will show you how to extend the declarative capabilities using custom coding of the Force.com platform. Apex controllers and Apex triggers will be explained with examples of the sample application. The Force.com platform query language and data manipulation language will be described with syntaxes and examples. At the end of the article, there will be a section to describe bulk data handling methods in Apex. This article covers the following topics: Introducing Apex Working with Apex (For more resources related to this topic, see here.) Introducing Apex Apex is the world's first on-demand programming language that allows developers to implement and execute business flows, business logic, and transactions on the Force.com platform. There are two types of Force.com application development methods: declarative developments and programmatic developments. Apex is categorized under the programmatic development method. Since Apex is a strongly-typed, object-based language, it is connected with data in the Force.com platform and data manipulation using the query language and the search language. The Apex language has the following features: Apex provides a lot of built-in support for the Force.com platform features such as: Data Manipulation Language (DML) with the built-in exception handling (DmlException) to manipulate the data during the execution of the business logic. Salesforce Object Query Language (SOQL) and Salesforce Object Search Language (SOSL) to query and retrieve the list of sObjects records. Bulk data processing on multiple records at a time. Apex allows handling errors and warning using an in-built error-handling mechanism. Apex has its own record-locking mechanism to prevent conflicts of record updates. Apex allows building custom public Force.com APIs from stored Apex methods. Apex runs in a multitenant environment. The Force.com platform has multitenant architecture. Therefore, the Apex runtime engine obeys the multitenant environment. It prevents monopolizing of shared resources using the guard with limits. If any particular Apex code violates the limits, error messages will be displayed. Apex is hosted in the Force.com platform. Therefore, the Force.com platform interprets, executes, and controls Apex. Automatically upgradable and versioned: Apex codes are stored as metadata in the platform. Therefore, they are automatically upgraded with the platform. You don't need to rewrite your code when the platform gets updated. Each code is saved with the current upgrade version. You can manually change the version. It is easy to maintain the Apex code with the versioned mechanism. Apex can be used easily. Apex is similar to Java syntax and variables. The syntaxes and semantics of Apex are easy to understand and write codes. Apex is a data-focused programming language. Apex is designed for multithreaded query and DML statements in a single execution context on the Force.com servers. Many developers can use database stored procedures to run multiple transaction statements on the database server. 
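Before moving on, here is a minimal Apex sketch tying together the data-focused features just listed: SOQL, a DML statement, and the built-in DmlException. It uses the standard Account object; the class name and the renaming logic are arbitrary illustrations, not part of the sample application.

    public class AccountRenamer {
        // Query records with SOQL, modify them in memory, and persist the changes with DML
        public static void appendReviewSuffix() {
            List<Account> accounts = [SELECT Id, Name FROM Account LIMIT 100];
            for (Account acc : accounts) {
                acc.Name = acc.Name + ' - Reviewed';
            }
            try {
                update accounts;               // bulk DML on up to 100 records at once
            } catch (DmlException e) {
                System.debug('Update failed: ' + e.getMessage());
            }
        }
    }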
Apex is different from other databases when it comes to stored procedures; it doesn't attempt to provide general support for rendering elements in the user interface. The execution context is one of the key concepts in Apex programming. It influences every aspect of software development on the Force.com platform. Apex is a strongly-typed language that directly refers to schema objects and object fields. If there is any error, it fails the compilation. All the objects, fields, classes, and pages are stored in metadata after successful compilation. Easy to perform unit testing. Apex provides a built-in feature for unit testing and test execution with the code coverage. Apex allows developers to write the logic in two ways: As an Apex class: The developer can write classes in the Force.com platform using Apex code. An Apex class includes action methods which related to the logic implementation. An Apex class can be called from a trigger. A class can be associated with a Visualforce page (Visualforce Controllers/Extensions) or can act as a supporting class (WebService, Email-to-Apex service/Helper classes, Batch Apex, and Schedules). Therefore, Apex classes are explicitly called from different places on the Force.com platform. As a database trigger: A trigger is executed related to a particular database interaction of a Force.com object. For example, you can create a trigger on the Leave Type object that fires whenever the Leave Type record is inserted. Therefore, triggers are implicitly called from a database action. Apex is included in the Unlimited Edition, Developer Edition, Enterprise Edition, Database.com, and Performance Edition. The developer can write Apex classes or Apex triggers in a developer organization or a sandbox of a production organization. After you finish the development of the Apex code, you can deploy the particular Apex code to the production organization. Before you deploy the Apex code, you have to write test methods to cover the implemented Apex code. Apex code in the runtime environment You already know that Apex code is stored and executed on the Force.com platform. Apex code also has a compile time and a runtime. When you attempt to save an Apex code, it checks for errors, and if there are no errors, it saves with the compilation. The code is compiled into a set of instructions that are about to execute at runtime. Apex always adheres to built-in governor limits of the Force.com platform. These governor limits protect the multitenant environment from runaway processes. Apex code and unit testing Unit testing is important because it checks the code and executes the particular method or trigger for failures and exceptions during test execution. It provides a structured development environment. We gain two good requirements for this unit testing, namely, best practice for development and best practice for maintaining the Apex code. The Force.com platform forces you to cover the Apex code you implemented. Therefore, the Force.com platform ensures that you follow the best practices on the platform. Apex governors and limits Apex codes are executed on the Force.com multitenant infrastructure and the shared resources are used across all customers, partners, and developers. When we are writing custom code using Apex, it is important that the Apex code uses the shared resources efficiently. Apex governors are responsible for enforcing runtime limits set by Salesforce. It discontinues the misbehaviors of the particular Apex code. 
If the code exceeds a limit, a runtime exception is thrown that cannot be handled. This error will be seen by the end user. Limit warnings can be sent via e-mail, but they also appear in the logs. Governor limits are specific to a namespace, so AppExchange certified managed applications have their own set of limits, independent of the other applications running in the same organization. Therefore, the governor limits have their own scope. The limit scope will start from the beginning of the code execution. It will be run through the subsequent blocks of code until the particular code terminates. Apex code and security The Force.com platform has a component-based security, record-based security and rich security framework, including profiles, record ownership, and sharing. Normally, Apex codes are executed as a system mode (not as a user mode), which means the Apex code has access to all data and components. However, you can make the Apex class run in user mode by defining the Apex class with the sharing keyword. The with sharing/without sharing keywords are employed to designate that the sharing rules for the running user are considered for the particular Apex class. Use the with sharing keyword when declaring a class to enforce the sharing rules that apply to the current user. Use the without sharing keyword when declaring a class to ensure that the sharing rules for the current user are not enforced. For example, you may want to explicitly turn off sharing rule enforcement when a class acquires sharing rules after it is called from another class that is declared using with sharing. The profile also can maintain the permission for developing Apex code and accessing Apex classes. The author's Apex permission is required to develop Apex codes and we can limit the access of Apex classes through the profile by adding or removing the granted Apex classes. Although triggers are built using Apex code, the execution of triggers cannot be controlled by the user. They depend on the particular operation, and if the user has permission for the particular operation, then the trigger will be fired. Apex code and web services Like other programming languages, Apex supports communication with the outside world through web services. Apex methods can be exposed as a web service. Therefore, an external system can invoke the Apex web service to execute the particular logic. When you write a web service method, you must use the webservice keyword at the beginning of the method declaration. The variables can also be exposed with the webservice keyword. After you create the webservice method, you can generate the Web Service Definition Language (WSDL), which can be consumed by an external application. Apex supports both Simple Object Access Protocol (SOAP) and Representational State Transfer (REST) web services. Apex and metadata Because Apex is a proprietary language, it is strongly typed to Salesforce metadata. The same sObject and fields that are created through the declarative setup menu can be referred to through Apex. Like other Force.com features, the system will provide an error if you try to delete an object or field that is used within Apex. Apex is not technically autoupgraded with each new Salesforce release, as it is saved with a specific version of the API. Therefore, Apex, like other Force.com features, will automatically work with future versions of Salesforce applications. Force.com application development tools use the metadata. 
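As a minimal sketch of how the with sharing and webservice keywords fit together, consider the following. The Leave__c object appears later in this article, but the Employee__c field and the class name are assumptions made only for illustration; note that a webservice method must be static and must be declared in a global class.

    global with sharing class LeaveWebService {
        // Exposed as a SOAP web service; a WSDL can be generated for external consumers
        webservice static Integer countMyLeaves(Id employeeId) {
            // Runs with the calling user's sharing rules because the class is "with sharing"
            return [SELECT COUNT() FROM Leave__c WHERE Employee__c = :employeeId];
        }
    }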
Working with Apex

Before you start coding with Apex, you need to learn a few basic things.

Apex basics

Apex comes with its own syntactical framework. Similar to Java, Apex is strongly typed and is an object-based language. If you have some experience with Java, it will be easy to understand Apex. The following summarizes the similarities and differences between Apex and Java:

Similarities:
    Both languages have classes, inheritance, polymorphism, and other common object-oriented programming features.
    Both languages have extremely similar syntax and notations.
    Both languages are compiled, strongly typed, and transactional.

Differences:
    Apex runs in a multitenant environment and is very controlled in its invocations and governor limits.
    Apex is case insensitive.
    Apex is on-demand and is compiled and executed in the cloud.
    Apex is not a general-purpose programming language, but is instead a proprietary language used for specific business logic functions.
    Apex requires unit testing for deployment into a production environment.

This section will not discuss everything that is included in the Apex documentation from Salesforce, but it will cover topics that are essential for understanding the concepts discussed in this article. With this basic knowledge of Apex, you can create Apex code on the Force.com platform.

Apex data types

In Apex classes and triggers, we use variables that contain data values. A variable must be bound to a data type, and that variable can only hold values of the same data type. All variables and expressions have one of the following data types:

    Primitives
    Enums
    sObjects
    Collections
    An object created from a user- or system-defined class
    Null (for the null constant)

Primitive data types

Apex uses the same primitive data types as the web services API, most of which are similar to their Java counterparts. It may seem that Apex primitive variables are passed by value, but they actually use immutable references, similar to Java string behavior. The following are the primitive data types of Apex:

    Boolean: A value that can only be assigned true, false, or null.
    Date, Datetime, and Time: A Date value indicates a particular day and does not contain any information about time. A Datetime value indicates a particular day and time. A Time value indicates a particular time. Date, Datetime, and Time values must always be created with a system static method.
    ID: A record identifier in its 18- or 15-character version.
    Integer, Long, Double, and Decimal: Integer is a 32-bit number that does not include a decimal point. Integers have a minimum value of -2,147,483,648 and a maximum value of 2,147,483,647. Long is a 64-bit number that does not include a decimal point; use this data type when you need a range of values wider than that provided by Integer. Double is a 64-bit number that includes a decimal point. Both Long and Double have a minimum value of -2^63 and a maximum value of 2^63-1. Decimal is a number that includes a decimal point; Decimal is an arbitrary-precision number.
    String: A String is any set of characters surrounded by single quotes. Strings have no limit on the number of characters they can include, but the heap size limit is used to ensure that a particular Apex program does not grow too large.
    Blob: A Blob is a collection of binary data stored as a single object. Blobs can be accepted as web service arguments, stored in a document, or sent as attachments.
    Object: This can be used as the base type for any other data type. Objects are supported for casting.
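The primitive types described above can be declared as in the following short sketch; the values assigned are arbitrary examples.

    Boolean isActive = true;
    Date hireDate = Date.today();
    Datetime lastLogin = Datetime.now();
    Time lunchTime = Time.newInstance(12, 30, 0, 0);
    Integer itemCount = 42;
    Long wideCount = 2147483648L;        // beyond the Integer range
    Double ratio = 3.14159;
    Decimal price = 19.99;
    String bookTitle = 'Learning Force.com Application Development';
    Blob payload = Blob.valueOf(bookTitle);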
Enum data types Enum (or enumerated list) is an abstract data type that stores one value of a finite set of specified identifiers. To define an Enum, use the enum keyword in the variable declaration and then define the list of values. You can define and use enum in the following way: Public enum Status {NEW, APPROVED, REJECTED, CANCELLED} The preceding enum has four values: NEW, APPROVED, REJECTED, CANCELLED. By creating this enum, you have created a new data type called Status that can be used as any other data type for variables, return types, and method arguments. Status leaveStatus = Status. NEW; Apex provides Enums for built-in concepts such as API error (System.StatusCode). System-defined enums cannot be used in web service methods. sObject data types sObjects (short for Salesforce Object) are standard or custom objects that store record data in the Force.com database. There is also an sObject data type in Apex that is the programmatic representation of these sObjects and their data in code. Developers refer to sObjects and their fields by their API names, which can be found in the schema browser. sObject and field references within Apex are validated against actual object and field names when code is written. Force.com tracks the objects and fields used within Apex to prevent users from making the following changes: Changing a field or object name Converting from one data type to another Deleting a field or object Organization-wide changes such as record sharing It is possible to declare variables of the generic sObject data type. The new operator still requires a concrete sObject type, so the instances are all specific sObjects. The following is a code example: sObject s = new Employee__c(); Casting will be applied as expected as each row knows its runtime type and can be cast back to that type. The following casting works fine: Employee__c e = (Employee__c)s; However, the following casting will generate a runtime exception for data type collision: Leave__c leave = (Leave__c)s; sObject super class only has the ID variable. So we can only access the ID via the sObject class. This method can also be used with collections and DML operations, although only concrete types can be instantiated. Collection will be described in the upcoming section and DML operations will be discussed in the Data manipulation section on the Force.com platform. Let's have a look at the following code: sObject[] sList = new Employee__c[0]; List<Employee__c> = (List<Employee__c>)sList; Database.insert(sList); Collection data types Collection data types store groups of elements of other primitive, composite, or collection data types. There are three different types of collections in Apex: List: A list is an ordered collection of primitives or composite data types distinguished by its index. Each element in a list contains two pieces of information; an index (this is an integer) and a value (the data). The index of the first element is zero. You can define an Apex list in the following way: List<DataType> listName = new List<DataType>(); List<String> sList = new List< String >(); There are built-in methods that can be used with lists adding/removing elements from the end of the list, getting/setting values at a particular index, and sizing the list by obtaining the number of elements. A full set of list methods are listed at http://www.salesforce.com/us/developer/docs/dbcom_apex250/Content/apex_methods_system_list.htm. 
The Apex list is defined in the following way: List<String> sList = new List< String >(); sList.add('string1'); sList.add('string2'); sList.add('string3'); sList.add('string4'); Integer sListSize = sList.size(); // this will return the   value as 4 sList.get(3); //This method will return the value as   "string4" Apex allows developers familiar with the standard array syntax to use that interchangeably with the list syntax. The main difference is the use of square brackets, which is shown in the following code: String[] sList = new String[4]; sList [0] = 'string1'; sList [1] = 'string2'; sList [2] = 'string3'; sList [3] = 'string4'; Integer sListSize = sList.size(); // this will return the   value as 4 Lists, as well as maps, can be nested up to five levels deep. Therefore, you can create a list of lists in the following way: List<List<String>> nestedList = new List<List<String>> (); Set: A set is an unordered collection of data of one primitive data type or sObjects that must have unique values. The Set methods are listed at http://www.salesforce.com/us/developer/docs/dbcom_apex230/Content/apex_methods_system_set.htm. Similar to the declaration of List, you can define a Set in the following way: Set<DataType> setName = new Set<DataType>(); Set<String> setName = new Set<String>(); There are built-in methods for sets, including add/remove elements to/from the set, check whether the set contains certain elements, and the size of the set. Map: A map is an unordered collection of unique keys of one primitive data type and their corresponding values. The Map methods are listed in the following link at http://www.salesforce.com/us/developer/docs/dbcom_apex250/Content/apex_methods_system_map.htm. You can define a Map in the following way: Map<PrimitiveKeyDataType, DataType> = mapName = new   Map<PrimitiveKeyDataType, DataType>(); Map<Integer, String> mapName = new Map<Integer, String>(); Map<Integer, List<String>> sMap = new Map<Integer,   List<String>>(); Maps are often used to map IDs to sObjects. There are built-in methods that you can use with maps, including adding/removing elements on the map, getting values for a particular key, and checking whether the map contains certain keys. You can use these methods as follows: Map<Integer, String> sMap = new Map<Integer, String>(); sMap.put(1, 'string1'); // put key and values pair sMap.put(2, 'string2'); sMap.put(3, 'string3'); sMap.put(4, 'string4'); sMap.get(2); // retrieve the value of key 2 Apex logics and loops Like all programming languages, Apex language has the syntax to implement conditional logics (IF-THEN-ELSE) and loops (for, Do-while, while). The following table will explain the conditional logic and loops in Apex: IF Conditional IF statements in Apex are similar to Java. The IF-THEN statement is the most basic of all the control flow statements. It tells your program to execute a certain section of code only if a particular test evaluates to true. The IF-THEN-ELSE statement provides a secondary path of execution when an IF clause evaluates to false. 
if (Boolean_expression){ statement; statement; statement; statement;} else { statement; statement;} For There are three variations of the FOR loop in Apex, which are as follows: FOR(initialization;Boolean_exit_condition;increment) {     statement; }   FOR(variable : list_or_set) {     statement; }   FOR(variable : [inline_soql_query]) {     statement; } All loops allow for the following commands: break: This is used to exit the loop continue: This is used to skip to the next iteration of the loop While The while loop is similar, but the condition is checked before the first loop, as shown in the following code: while (Boolean_condition) { code_block; }; Do-While The do-while loop repeatedly executes as long as a particular Boolean condition remains true. The condition is not checked until after the first pass is executed, as shown in the following code: do { //code_block; } while (Boolean_condition); Summary In this article, you have learned to develop custom coding in the Force.com platform, including the Apex classes and triggers. And you learned two query languages in the Force.com platform. Resources for Article: Further resources on this subject: Force.com: Data Management [article] Configuration in Salesforce CRM [article] Learning to Fly with Force.com [article]

Writing Simple Behaviors

Packt
27 Apr 2015
18 min read
In this article by Richard Sneyd, the author of Stencyl Essentials, we will learn about Stencyl's signature visual programming interface to create logic and interaction in our game. We create this logic using a WYSIWYG (What You See Is What You Get) block snapping interface. By the end of this article, you will have the Player Character whizzing down the screen, in pursuit of a zigzagging air balloon! Some of the things we will learn to do in this article are as follows: Create Actor Behaviors, and attach them to Actor Types. Add Events to our Behaviors. Use If blocks to create branching, conditional logic to handle various states within our game. Accept and react to input from the player. Apply physical forces to Actors in real-time. One of the great things about this visual approach to programming is that it largely removes the unpleasantness of dealing with syntax (the rules of the programming language), and the inevitable errors that come with it, when we're creating logic for our game. That frees us to focus on the things that matter most in our games: smooth, well wrought game mechanics and enjoyable, well crafted game-play. (For more resources related to this topic, see here.) The Player Handler The first behavior we are going to create is the Player Handler. This behavior will be attached to the Player Character (PC), which exists in the form of the Cowboy Actor Type. This behavior will be used to handle much of the game logic, and will process the lion's share of player input. Creating a new Actor Behavior It's time to create our very first behavior! Go to the Dashboard, under the LOGIC heading, select Actor Behaviors: Click on This game contains no Logic. Click here to create one. to add your first behavior. You should see the Create New... window appear: Enter the Name Player Handler, as shown in the previous screenshot, then click Create. You will be taken to the Behavior Designer: Let's take a moment to examine the various areas within the Behavior Designer. From left to right, as demonstrated in the previous screenshot, we have: The Events Pane: Here we can add, remove, and move between events in our Behavior. The Canvas: To the center of the screen, the Canvas is where we drag blocks around to click our game logic together. The blocks Palette: This is where we can find any and all of the various logic blocks that Stencyl has on offer. Simply browse to your category of choice, then click and drag the block onto the Canvas to configure it. Follow these steps: Click the Add Event button, which can be found at the very top of the Events Pane. In the menu that ensues, browse to Basics and click When Updating: You will notice that we now have an Event in our Events Pane, called Updated, along with a block called always on our Canvas. In Stencyl events lingo, always is synonymous with When Updating: Since this is the only event in our Behavior at this time, it will be selected by default. The always block (yellow with a red flag) is where we put the game logic that needs to be checked on a constant basis, for every update of the game loop (this will be commensurate with the framerate at runtime, around 60fps, depending on the game performance and system specs). Before we proceed with the creation of our conditional logic, we must first create a few attributes. If you have a programming background, it is easiest to understand attributes as being synonymous to local variables. 
Just like variables, they have a set data type, and you can retrieve or change the value of an attribute in real-time. Creating Attributes Switch to the Attributes context in the blocks palette: There are currently no attributes associated with this behavior. Let's add some, as we'll need them to store important information of various types which we'll be using later on to craft the game mechanics. Click on the Create an Attribute button: In the Create an Attribute… window that appears, enter the Name Target Actor, set Type to Actor, check Hidden?, and press OK: Congratulations! If you look at the lower half of the blocks palette, you will see that you have added your first attribute, Target Actor, of type Actors, and it is now available for use in our code. Next, let's add five Boolean attributes. A Boolean is a special kind of attribute that can be set to either true, or false. Those are the only two values it can accept. First, let's create the Can Be Hurt Boolean: Click Create an Attribute…. Enter the Name Can Be Hurt. Change the Type to Boolean. Check Hidden?. Press OK to commit and add the attribute to the behavior. Repeat steps 1 through 5 for the remaining four Boolean attributes to be added, each time substituting the appropriate name:     Can Switch Anim     Draw Lasso     Lasso Back     Mouse Over If you have done this correctly, you should now see six attributes in your attributes list - one under Actor, and five under Boolean - as shown in the following screenshot: Now let's follow the same process to further create seven attributes; only this time, we'll set the Type for all of them to Number. The Name for each one will be: Health (Set to Hidden?). Impact Force (Set to Hidden?). Lasso Distance (Set to Hidden?). Max Health (Don't set to Hidden?). Turn Force (Don't set to Hidden?). X Point (Set to Hidden?). Y Point (Set to Hidden?). If all goes well, you should see your list of attributes update accordingly: We will add just one additional attribute. Click the Create an Attribute… button again: Name it Mouse State. Change its Type to Text. Do not hide this attribute. Click OK to commit and add the attribute to your behavior. Excellent work, at this point, you have created all of the attributes you will need for the Player Handler behavior! Custom events We need to create a few custom events in order to complete the code for this game prototype. For programmers, custom events are like functions that don't accept parameters. You simply trigger them at will to execute a reusable bunch of code. To accept parameters, you must create a custom block: Click Add Event. Browse to Advanced.. Select Custom Event: You will see that a second event, simply called Custom Event, has been added to our list of events: Now, double-click on the Custom Event in the events stack to change its label to Obj Click Check (For readability purposes, this does not affect the event's name in code, and is completely ignored by Stencyl): Now, let's set the name as it will be used in code. Click between When and happens, and insert the name ObjectClickCheck: From now on, whenever we want to call this custom event in our code, we will refer to it as ObjectClickCheck. Go back to the When Updating event by selecting it from the events stack on the left. 
We are going to add a special block to this event, which calls the custom event we created just a moment ago: In the blocks palette, go to Behaviour | Triggers | For Actor, then click and drag the highlighted block onto the canvas: Drop the selected block inside of the Always block, and fill in the fields as shown (please note that I have deliberately excluded the space between Player and Handler in the behavior name, so as to demonstrate the debugging workflow. This will be corrected in a later part of the article): Now, ObjectClickCheck will be executed for every iteration of the game loop! It is usually a good practice to split up your code like this, rather than having it all in one really long event. That would be confusing, and terribly hard to sift through when behaviors become more complex! Here is a chance to assess what you have learnt from this article thus far. We will create a second custom event; see if you can achieve this goal using only the skeleton guide mentioned next. If you struggle, simply refer back to the detailed steps we followed for the ObjectClickCheck event: Click Add Event | Advanced | Custom Event. A new event will appear at the bottom of the events pane. Double Click on the event in the events pane to change its label to Handle Dir Clicks for readability purposes. Between When and happens, enter the name HandleDirectionClicks. This is the handle we will use to refer to this event in code. Go back to the Updated event, right click on the Trigger event in behavior for self block that is already in the always block, and select copy from the menu. Right-click anywhere on the canvas and select paste from the menu to create an exact duplicate of the block. Change the event being triggered from ObjectClickCheck to HandleDirectionClicks. Keep the value PlayerHandler for the behavior field. Drag and drop the new block so that it sits immediately under the original. Holding Alt on the keyboard, and clicking and dragging on a block, creates a duplicate of that block. Were you successful? If so, you should see these changes and additions in your behavior (note that the order of the events in the events pane does not affect the game logic, or the order in which code is executed). Learning to create and utilize custom events in Stencyl is a huge step towards mastering the tool, so congratulations on having come this far! Testing and debugging As with all fields of programming and software development, it is important to periodically and iteratively test your code. It's much easier to catch and repair mistakes this way. On that note, let's test the code we've written so far, using print blocks. Browse to and select Flow | Debug | print from the blocks palette: Now, drag a copy of this block into both of your custom events, snapping it neatly into the when happens blocks as you do so. For the ObjectClickCheck event, type Object Click Detected into the print block For the HandleDirectionClicks event, type Directional Click Detected into the print block. We are almost ready to test our code. Since this is an Actor Behavior, however, and we have not yet attached it to our Cowboy actor, nothing would happen yet if we ran the code. We must also add an instance of the Cowboy actor to our scene: Click the Attach to Actor Type button to the top right of the blocks palette: Choose the Cowboy Actor from the ensuing list, and click OK to commit. Go back to the Dashboard, and open up the Level 1 scene. 
In the Palette to the right, switch from Tiles to Actors, and select the Cowboy actor: Ensure Layer 0 is selected (as actors cannot be placed on background layers). Click on the canvas to place an instance of the actor in the scene, then click on the Inspector, and change the x and y Scale of the actor to 0.8: Well done! You've just added your first behavior to an Actor Type, and added your first Actor Instance to a scene! We are now ready to test our code. First, Click the Log Viewer button on the toolbar: This will launch the Log Viewer. The Log Viewer will open up, at which point we need only set Platform to Flash (Player), and click the Test Game Button to compile and execute our code: After a few moments, if you have followed all of the steps correctly, you will see that the game windows opens on the screen and a number of events appear on the Log Viewer. However, none of these events have anything to do with the print blocks we added to our custom events. Hence, something has gone wrong, and must be debugged. What could it be? Well, since the blocks simply are not executing, it's likely a typo of some kind. Let's look at the Player Handler again, and you'll see that within the Updated event, we've referred to the behavior name as PlayerHandler in both trigger event blocks, with no space inserted between the words Player and Handler: Update both of these fields to Player Handler, and be sure to include the space this time, so that it looks like the following (To avoid a recurrence of this error, you may wish to use the dropdown menu by clicking the downwards pointing grey arrow, then selecting Behavior Names to choose your behavior from a comprehensive list): Great work! You have successfully completed your first bit of debugging in Stencyl. Click the Test Game button again. After the game window has opened, if you scroll down to the bottom of the Log Viewer, you should see the following events piling up: These INFO events are being triggered by the print blocks we inserted into our custom events, and prove that our code is now working. Excellent job! Let's move on to a new Actor; prepare to meet Dastardly Dan! Adding the Balloon Let's add the balloon actor to our game, and insert it into Level 1: Go to the Dashboard, and select Actor Types from the RESOURCES menu. Press Click here to create a new Actor Type. Name it Balloon, and click Create. Click on This Actor Type contains no animations. Click here to add an animation. Change the text in the Name field to Default. Un-check looping?. Press the Click here to add a frame. button. The Import Frame from Image Strip window appears. Change the Scale to 4x. Click Choose Image... then browse to Game AssetsGraphicsActor Animations and select Balloon.png. Keep Columns and Rows set to 1, and click Add to commit this frame to the animation. All animations are created with a box collision shape by default. In actuality, the Balloon actor requires no collisions at all, so let's remove it. Go to the Collision context, select the Default box, and press Delete on the keyboard: The Balloon Actor Type is now free of collision shapes, and hence will not interact physically with other elements of our game levels. Next, switch to the Physics context: Set the following attributes: Set What Kind of Actor Type? to Normal. Set Can Rotate? To No. This will disable all rotational physical forces and interactions. We can still rotate the actor by setting its rotation directly in the code, however. Set Affected by Gravity? to No. 
We will be handling the downward trajectory of this actor ourselves, without using the gravity implemented by the physics engine. Just before we add this new actor to Level 1, let's add a behavior or two. Switch to the Behaviors context: Then, follow these steps: This Actor Type currently has no attached behaviors. Click Add Behavior, at the bottom left hand corner of the screen: Under FROM YOUR LIBRARY, go to the Motion category, and select Always Simulate. The Always Simulate behavior will make this actor operational, even if it is not on the screen, which is a desirable result in this case. It also prevents Stencyl from deleting the actor when it leaves the scene, which it would automatically do in an effort to conserve memory, if we did not explicitly dictate otherwise. Click Choose to add it to the behaviors list for this Actor Type. You should see it appear in the list: Click Add Behavior again. This time, under FROM YOUR LIBRARY, go the Motion category once more, and this time select Wave Motion (you'll have to scroll down the list to see it). Click Choose to add it to the behavior stack. You should see it sitting under the Always Simulate behavior: Configuring prefab behaviors Prefab behaviors (also called shipped behaviors) enable us to implement some common functionality, without reinventing the wheel, so to speak. The great thing about these prefab behaviors, which can be found in the behavior library, is that they can be used as templates, and modified at will. Let's learn how to add and modify a couple of these prefab behaviors now. Some prefab behaviors have exposed attributes which can be configured to suit the needs of the project. The Wave Motion behavior is one such example. Select it from the stack, and configure the attributes as follows: Set Direction to Horizontal from the dropdown menu. Set Start Speed to 5. Set Amplitude to 64. Set Wavelength to 128. Fantastic! Now let's add an Instance of the Balloon actor to Level 1: Click the Add to Scene button at the top right corner of your view. Select the Level 1 scene. Select the Balloon. Click on the canvas, below the Cowboy actor, to place an instance of the Balloon in the scene: Modifying prefab behaviors Before we test the game one last time, we must quickly add a prefab behavior to the Cowboy Actor Type, modifying it slightly to suit the needs of this game (for instance, we will need to create an offset value for the y axis, so the PC is not always at the centre of the screen): Go to the Dashboard, and double click on the Cowboy from the Actor Types list. Switch to the Behavior Context. Click Add Behavior, as you did previously when adding prefab behaviors to the Balloon Actor Type. This time, under FROM YOUR LIBRARY, go to the Game category, and select Camera Follow. As the name suggests, this is a simple behavior that makes the camera follow the actor it is attached to. Click Choose to commit this behavior to the stack, and you should see this: Click the Edit Behavior button, and it will open up in the Behavior Designer: In the Behavior Designer, towards the bottom right corner of the screen, click on the Attributes tab: Once clicked, you will see a list of all the attributes in this behavior appear in the previous window. Click the Add Attribute button: Perform the following steps: Set the Name to Y Offset. Change the Type to Number. Leave the attribute unhidden. 
Click OK to commit new attribute to the attribute stack: We must modify the set IntendedCameraY block in both the Created and the Updated events: Holding Shift, click and drag the set IntendedCameraY block out onto the canvas by itself: Drag the y-center of Self block out like the following: Click the little downward pointing grey arrow at the right of the empty field in the set intendedCameraY block , and browse to Math | Arithmetic | Addition block: Drag the y-center of Self block into the left hand field of the Add block: Next, click the small downward pointing grey arrow to the right of the right hand field of the Addition block to bring up the same menu as before. This time, browse to Attributes, and select Y Offset: Now, right click on the whole block, and select Copy (this will copy it to the clipboard), then simply drag it back into its original position, just underneath set intendedCameraX: Switch to the Updated Event from the events pane on the left, hold Shift, then click and drag set intendedCameraY out of the Always block and drop it in the trash can, as you won't need it anymore. Right-click and select Paste to place a copy of the new block configuration you copied to the clipboard earlier: Click and drag the pasted block so that it appears just underneath the set intendedCameraX block, and save your changes: Testing the changes Go back to the Cowboy Actor Type, and open the Behavior context; click File | Reload Document (Ctrl-R or Cmd-R) to update all the changes. You should see a new configurable attribute for the Camera Follow Behavior, called Y Offset. Set its value to 70: Excellent! Now go back to the Dashboard and perform the following: Open up Level 1 again. Under Physics, set Gravity (Vertical) to 8.0. Click Test Game, and after a few moments, a new game window should appear. At this stage, what you should see is the Cowboy shooting down the hill with the camera following him, and the Balloon floating around above him. Summary In this article, we learned the basics of creating behaviors, adding and setting Attributes of various types, adding and modifying prefab behaviors, and even some rudimentary testing and debugging. Give yourself a pat on the back; you've learned a lot so far! Resources for Article: Further resources on this subject: Form Handling [article] Managing and Displaying Information [article] Background Animation [article]

Using Basic Projectiles

Packt
27 Apr 2015
22 min read
"Flying is learning how to throw yourself at the ground and miss."                                                                                              – Douglas Adams In this article by Michael Haungs, author of the book Creative Greenfoot, we will create a simple game using basic movements in Greenfoot. Actors in creative Greenfoot applications, such as games and animations, often have movement that can best be described as being launched. For example, a soccer ball, bullet, laser, light ray, baseball, and firework are examples of this type of object. One common method of implementing this type of movement is to create a set of classes that model real-world physical properties (mass, velocity, acceleration, friction, and so on) and have game or simulation actors inherit from these classes. Some refer to this as creating a physics engine for your game or simulation. However, this course of action is complex and often overkill. There are often simple heuristics we can use to approximate realistic motion. This is the approach we will take here. In this article, you will learn about the basics of projectiles, how to make an object bounce, and a little about particle effects. We will apply what you learn to a small platform game that we will build up over the course of this article. Creating realistic flying objects is not simple, but we will cover this topic in a methodical, step-by-step approach, and when we are done, you will be able to populate your creative scenarios with a wide variety of flying, jumping, and launched objects. It's not as simple as Douglas Adams makes it sound in his quote, but nothing worth learning ever is. (For more resources related to this topic, see here.) Cupcake Counter It is beneficial to the learning process to discuss topics in the context of complete scenarios. Doing this forces us to handle issues that might be elided in smaller, one-off examples. In this article, we will build a simple platform game called Cupcake Counter (shown in Figure 1). We will first look at a majority of the code for the World and Actor classes in this game without showing the code implementing the topic of this article, that is, the different forms of projectile-based movement. Figure 1: This is a screenshot of Cupcake Counter How to play The goal of Cupcake Counter is to collect as many cupcakes as you can before being hit by either a ball or a fountain. The left and right arrow keys move your character left and right and the up arrow key makes your character jump. You can also use the space bar key to jump. After touching a cupcake, it will disappear and reappear randomly on another platform. Balls will be fired from the turret at the top of the screen and fountains will appear periodically. The game will increase in difficulty as your cupcake count goes up. The game requires good jumping and avoiding skills. Implementing Cupcake Counter Create a scenario called Cupcake Counter and add each class to it as they are discussed. The CupcakeWorld class This subclass of World sets up all the actors associated with the scenario, including a score. It is also responsible for generating periodic enemies, generating rewards, and increasing the difficulty of the game over time. 
The following is the code for this class: import greenfoot.*; import java.util.List;   public class CupcakeWorld extends World { private Counter score; private Turret turret; public int BCOUNT = 200; private int ballCounter = BCOUNT; public int FCOUNT = 400; private int fountainCounter = FCOUNT; private int level = 0; public CupcakeWorld() {    super(600, 400, 1, false);    setPaintOrder(Counter.class, Turret.class, Fountain.class,    Jumper.class, Enemy.class, Reward.class, Platform.class);    prepare(); } public void act() {    checkLevel(); } private void checkLevel() {    if( level > 1 ) generateBalls();    if( level > 4 ) generateFountains();    if( level % 3 == 0 ) {      FCOUNT--;      BCOUNT--;      level++;    } } private void generateFountains() {    fountainCounter--;    if( fountainCounter < 0 ) {      List<Brick> bricks = getObjects(Brick.class);      int idx = Greenfoot.getRandomNumber(bricks.size());      Fountain f = new Fountain();      int top = f.getImage().getHeight()/2 + bricks.get(idx).getImage().getHeight()/2;      addObject(f, bricks.get(idx).getX(),      bricks.get(idx).getY()-top);      fountainCounter = FCOUNT;    } } private void generateBalls() {    ballCounter--;    if( ballCounter < 0 ) {      Ball b = new Ball();      turret.setRotation(15 * -b.getXVelocity());      addObject(b, getWidth()/2, 0);      ballCounter = BCOUNT;    } } public void addCupcakeCount(int num) {    score.setValue(score.getValue() + num);    generateNewCupcake(); } private void generateNewCupcake() {    List<Brick> bricks = getObjects(Brick.class);    int idx = Greenfoot.getRandomNumber(bricks.size());    Cupcake cake = new Cupcake();    int top = cake.getImage().getHeight()/2 +    bricks.get(idx).getImage().getHeight()/2;    addObject(cake, bricks.get(idx).getX(),    bricks.get(idx).getY()-top); } public void addObjectNudge(Actor a, int x, int y) {    int nudge = Greenfoot.getRandomNumber(8) - 4;    super.addObject(a, x + nudge, y + nudge); } private void prepare(){    // Add Bob    Bob bob = new Bob();    addObject(bob, 43, 340);    // Add floor    BrickWall brickwall = new BrickWall();    addObject(brickwall, 184, 400);    BrickWall brickwall2 = new BrickWall();    addObject(brickwall2, 567, 400);    // Add Score    score = new Counter();    addObject(score, 62, 27);    // Add turret    turret = new Turret();    addObject(turret, getWidth()/2, 0);    // Add cupcake    Cupcake cupcake = new Cupcake();    addObject(cupcake, 450, 30);    // Add platforms    for(int i=0; i<5; i++) {      for(int j=0; j<6; j++) {        int stagger = (i % 2 == 0 ) ? 24 : -24;        Brick brick = new Brick();        addObjectNudge(brick, stagger + (j+1)*85, (i+1)*62);      }    } } } Let's discuss the methods in this class in order. First, we have the class constructor CupcakeWorld(). After calling the constructor of the superclass, it calls setPaintOrder() to set the actors that will appear in front of other actors when displayed on the screen. The main reason why we use it here, is so that no actor will cover up the Counter class, which is used to display the score. Next, the constructor method calls prepare() to add and place the initial actors into the scenario. Inside the act() method, we will only call the function checkLevel(). As the player scores points in the game, the level variable of the game will also increase. The checkLevel() function will change the game a bit according to its level variable. 
When our game first starts, no enemies are generated and the player can easily get the cupcake (the reward). This gives the player a chance to get accustomed to jumping on platforms. As the cupcake count goes up, balls and fountains will be added. As the level continues to rise, checkLevel() reduces the delay between creating balls (BCOUNT) and fountains (FCOUNT). The level variable of the game is increased in the addCupcakeCount() method. The generateFountains() method adds a Fountain actor to the scenario. The rate at which we create fountains is controlled by the delay variable fountainCounter. After the delay, we create a fountain on a randomly chosen Brick (the platforms in our game). The getObjects() method returns all of the actors of a given class presently in the scenario. We then use getRandomNumber() to randomly choose an index between zero and one less than the number of Brick actors. Next, we use addObject() to place the new Fountain object on the randomly chosen Brick object. Generating balls using the generateBalls() method is a little easier than generating fountains. All balls are created in the same location as the turret at the top of the screen and sent from there with a randomly chosen trajectory. The rate at which we generate new Ball actors is defined by the delay variable ballCounter. Once we create a Ball actor, we rotate the turret based on its x velocity. By doing this, we create the illusion that the turret is aiming and then firing the Ball actor. Last, we place the newly created Ball actor into the scenario using the addObject() method. The addCupcakeCount() method is called by the actor representing the player (Bob) every time the player collides with a Cupcake. In this method, we increase the score and then call generateNewCupcake() to add a new Cupcake actor to the scenario. The generateNewCupcake() method is very similar to generateFountains(), except for the lack of a delay variable, and it randomly places a Cupcake actor on one of the bricks instead of a Fountain actor. In all of our previous scenarios, we used a prepare() method to add actors to the scenario. The major difference between this prepare() method and the previous ones is that we use the addObjectNudge() method instead of addObject() to place our platforms. The addObjectNudge() method simply adds a little randomness to the placement of the platforms, so that every new game is a little different. The random variation in the platforms will cause the Ball actors to have different bounce patterns and require the player to jump and move a bit more carefully. In the call to addObjectNudge(), you will notice that we used the numbers 85 and 62. These are simply numbers that spread the platforms out appropriately, and they were discovered through trial and error. I created a blue gradient background to use for the image of CupcakeWorld.

Enemies

In Cupcake Counter, all of the actors that can end the game if collided with are subclasses of the Enemy class. Using inheritance is a great way to share code and reduce redundancy for a group of similar actors. However, we will often create class hierarchies in Greenfoot solely for polymorphism. Polymorphism refers to the ability of a class in an object-oriented language to take on many forms. We are going to use it so that our player actor only has to check for collision with the Enemy class and not every specific type of Enemy, such as Ball or RedBall.
Also, by coding this way, we are making it very easy to add code for additional enemies, and if we find that our enemies have redundant code, we can easily move that code into our Enemy class. In other words, we are making our code extensible and maintainable. Here is the code for our Enemy class: import greenfoot.*;   public abstract class Enemy extends Actor { } The Ball class extends the Enemy class. Since Enemy is solely used for polymorphism, the Ball class contains all of the code necessary to implement bouncing and an initial trajectory. Here is the code for this class: import greenfoot.*;   public class Ball extends Enemy { protected int actorHeight; private int speedX = 0; public Ball() {    actorHeight = getImage().getHeight();    speedX = Greenfoot.getRandomNumber(8) - 4;    if( speedX == 0 ) {      speedX = Greenfoot.getRandomNumber(100) < 50 ? -1 : 1;    } } public void act() {    checkOffScreen(); } public int getXVelocity() {    return speedX; } private void checkOffScreen() {    if( getX() < -20 || getX() > getWorld().getWidth() + 20 ) {      getWorld().removeObject(this);    } else if( getY() > getWorld().getHeight() + 20 ) {      getWorld().removeObject(this);    } } } The implementation of Ball is missing the code to handle moving and bouncing. As we stated earlier, we will go over all the projectile-based code after providing the code we are using as the starting point for this game. In the Ball constructor, we randomly choose a speed in the x direction and save it in the speedX instance variable. We have included one accessory method to return the value of speedX (getXVelocity()). Last, we include checkOffScreen() to remove Ball once it goes off screen. If we do not do this, we would have a form of memory leak in our application because Greenfoot will continue to allocate resources and manage any actor until it is removed from the scenario. For the Ball class, I choose to use the ball.png image, which comes with the standard installation of Greenfoot. In this article, we will learn how to create a simple particle effect. Creating an effect is more about the use of a particle as opposed to its implementation. In the following code, we create a generic particle class, Particles, that we will extend to create a RedBall particle. We have organized the code in this way to easily accommodate adding particles in the future. Here is the code: import greenfoot.*;   public class Particles extends Enemy { private int turnRate = 2; private int speed = 5; private int lifeSpan = 50; public Particles(int tr, int s, int l) {    turnRate = tr;    speed = s;    lifeSpan = l;    setRotation(-90); } public void act() {    move();    remove(); } private void move() {    move(speed);    turn(turnRate); } private void remove() {    lifeSpan--;    if( lifeSpan < 0 ) {      getWorld().removeObject(this);    } } } Our particles are implemented to move up and slightly turn each call of the act() method. A particle will move lifeSpan times and then remove itself. As you might have guessed, lifeSpan is another use of a delay variable. The turnRate property can be either positive (to turn slightly right) or negative (to turn slightly left). We only have one subclass of Particles, RedBall. This class supplies the correct image for RedBall, supplies the required input for the Particles constructor, and then scales the image according to the parameters scaleX and scaleY. 
Here's the implementation: import greenfoot.*;   public class RedBall extends Particles { public RedBall(int tr, int s, int l, int scaleX, int scaleY) {    super(tr, s, l);    getImage().scale(scaleX, scaleY); } } For RedBall, I used the Greenfoot-supplied image red-draught.png. Fountains In this game, fountains add a unique challenge. After reaching level five (see the World class CupcakeWorld), Fountain objects will be generated and randomly placed in the game. Figure 2 shows a fountain in action. A Fountain object continually spurts RedBall objects into the air like water from a fountain. Figure 2: This is a close-up of a Fountain object in the game Cupcake Counter Let's take a look at the code that implements the Fountain class: import greenfoot.*; import java.awt.Color;   public class Fountain extends Actor { private int lifespan = 75; private int startDelay = 100; private GreenfootImage img; public Fountain() {    img = new GreenfootImage(20,20);    img.setColor(Color.blue);    img.setTransparency(100);    img.fill();    setImage(img); } public void act() {    if( --startDelay == 0 ) wipeView();    if( startDelay < 0 ) createRedBallShower(); } private void wipeView() {    img.clear(); } private void createRedBallShower() { } } The constructor for Fountain creates a new blue, semitransparent square and sets that to be its image. We start with a blue square to give the player of the game a warning that a fountain is about to erupt. Since fountains are randomly placed at any location, it would be unfair to just drop one on our player and instantly end the game. This is also why RedBall is a subclass of Enemy and Fountain is not. It is safe for the player to touch the blue square. The startDelay delay variable is used to pause for a short amount of time, then remove the blue square (using the function wipeView()), and then start the RedBall shower (using the createRedBallShower() function). We can see this in the act() method. Turrets In the game, there is a turret in the top-middle of the screen that shoots purple bouncy balls at the player. It is shown in Figure 1. Why do we use a bouncy-ball shooting turret? Because this is our game and we can! The implementation of the Turret class is very simple. Most of the functionality of rotating the turret and creating Ball to shoot is handled by CupcakeWorld in the generateBalls() method already discussed. The main purpose of this class is to just draw the initial image of the turret, which consists of a black circle for the base of the turret and a black rectangle to serve as the cannon. Here is the code: import greenfoot.*; import java.awt.Color;   public class Turret extends Actor { private GreenfootImage turret; private GreenfootImage gun; private GreenfootImage img; public Turret() {    turret = new GreenfootImage(30,30);    turret.setColor(Color.black);    turret.fillOval(0,0,30,30);       gun = new GreenfootImage(40,40);    gun.setColor(Color.black);    gun.fillRect(0,0,10,35);       img = new GreenfootImage(60,60);    img.drawImage(turret, 15, 15);    img.drawImage(gun, 25, 30);    img.rotate(0);       setImage(img); } } We previously talked about the GreenfootImage class and how to use some of its methods to do custom drawing. One new function we introduced is drawImage(). This method allows you to draw one GreenfootImage into another. This is how you compose images, and we used it to create our turret from a rectangle image and a circle image. Rewards We create a Reward class for the same reason we created an Enemy class. 
We are setting ourselves up to easily add new rewards in the future. Here is the code: import greenfoot.*;   public abstract class Reward extends Actor { } The Cupcake class is a subclass of the Reward class and represents the object on the screen the player is constantly trying to collect. However, cupcakes have no actions to perform or state to keep track of; therefore, its implementation is simple: import greenfoot.*;   public class Cupcake extends Reward { } When creating this class, I set its image to be muffin.png. This is an image that comes with Greenfoot. Even though the name of the image is a muffin, it still looks like a cupcake to me. Jumpers The Jumper class is a class that will allow all subclasses of it to jump when pressing either the up arrow key or the spacebar. At this point, we just provide a placeholder implementation: import greenfoot.*;   public abstract class Jumper extends Actor { protected int actorHeight; public Jumper() {    actorHeight = getImage().getHeight(); } public void act() {    handleKeyPresses(); } protected void handleKeyPresses() { } } The next class we are going to present is the Bob class. The Bob class extends the Jumper class and then adds functionality to let the player move it left and right. Here is the code: import greenfoot.*;   public class Bob extends Jumper { private int speed = 3; private int animationDelay = 0; private int frame = 0; private GreenfootImage[] leftImages; private GreenfootImage[] rightImages; private int actorWidth; private static final int DELAY = 3; public Bob() {    super();       rightImages = new GreenfootImage[5];    leftImages = new GreenfootImage[5];       for( int i=0; i<5; i++ ) {      rightImages[i] = new GreenfootImage("images/Dawson_Sprite_Sheet_0" + Integer.toString(3+i) + ".png");      leftImages[i] = new GreenfootImage(rightImages[i]);      leftImages[i].mirrorHorizontally();    }       actorWidth = getImage().getWidth(); } public void act() {    super.act();    checkDead();    eatReward(); } private void checkDead() {    Actor enemy = getOneIntersectingObject(Enemy.class);    if( enemy != null ) {      endGame();    } } private void endGame() {    Greenfoot.stop(); } private void eatReward() {    Cupcake c = (Cupcake) getOneIntersectingObject(Cupcake.class);    if( c != null ) {      CupcakeWorld rw = (CupcakeWorld) getWorld();      rw.removeObject(c);      rw.addCupcakeCount(1);    } } // Called by superclass protected void handleKeyPresses() {    super.handleKeyPresses();       if( Greenfoot.isKeyDown("left") ) {      if( canMoveLeft() ) {moveLeft();}    }    if( Greenfoot.isKeyDown("right") ) {      if( canMoveRight() ) {moveRight();}    } } private boolean canMoveLeft() {    if( getX() < 5 ) return false;    return true; } private void moveLeft() {    setLocation(getX() - speed, getY());    if( animationDelay % DELAY == 0 ) {      animateLeft();      animationDelay = 0;    }    animationDelay++; } private void animateLeft() {    setImage( leftImages[frame++]);    frame = frame % 5;    actorWidth = getImage().getWidth(); } private boolean canMoveRight() {    if( getX() > getWorld().getWidth() - 5) return false;    return true; } private void moveRight() {    setLocation(getX() + speed, getY());    if( animationDelay % DELAY == 0 ) {      animateRight();      animationDelay = 0;    }    animationDelay++; } private void animateRight() {    setImage( rightImages[frame++]);    frame = frame % 5;    actorWidth = getImage().getWidth(); } } Like CupcakeWorld, this class is substantial. 
We will discuss each method it contains sequentially. First, the constructor's main duty is to set up the images for the walking animation. The images came from www.wikia.com and were supplied, in the form of a sprite sheet, by the user Mecha Mario. A direct link to the sprite sheet is http://smbz.wikia.com/wiki/File:Dawson_Sprite_Sheet.PNG. Note that I manually copied and pasted the images I used from this sprite sheet using my favorite image editor. Free Internet resources Unless you are also an artist or a musician in addition to being a programmer, you are going to be hard pressed to create all of the assets you need for your Greenfoot scenario. If you look at the credits for AAA video games, you will see that the number of artists and musicians actually equal or even outnumber the programmers. Luckily, the Internet comes to the rescue. There are a number of websites that supply legally free assets you can use. For example, the website I used to get the images for the Bob class supplies free content under the Creative Commons Attribution-Share Alike License 3.0 (Unported) (CC-BY-SA) license. It is very important that you check the licensing used for any asset you download off the Internet and follow those user agreements carefully. In addition, make sure that you fully credit the source of your assets. For games, you should include a Credits screen to cite all the sources for the assets you used. The following are some good sites for free, online assets: www.wikia.com newgrounds.com http://incompetech.com opengameart.org untamed.wild-refuge.net/rpgxp.php Next, we have the act() method. It first calls the act() method of its superclass. It needs to do this so that we get the jumping functionality that is supplied by the Jumper class. Then, we call checkDead() and eatReward(). The checkDead()method ends the game if this instance of the Bob class touches an enemy, and eatReward() adds one to our score, by calling the CupcakeWorld method addCupcakeCount(), every time it touches an instance of the Cupcake class. The rest of the class implements moving left and right. The main method for this is handleKeyPresses(). Like in act(), the first thing we do, is call handleKeyPresses() contained in the Jumper superclass. This runs the code in Jumper that handles the spacebar and up arrow key presses. The key to handling key presses is the Greenfoot method isKeyDown() (see the following information box). We use this method to check if the left arrow or right arrow keys are presently being pressed. If so, we check whether or not the actor can move left or right using the methods canMoveLeft() and canMoveRight(), respectively. If the actor can move, we then call either moveLeft() or moveRight(). Handling key presses in Greenfoot The second tutorial explains how to control actors with the keyboard. To refresh your memory, we are going to present some information on the keyboard control here. The primary method we use in implementing keyboard control is isKeyDown(). This method provides a simple way to check whether a certain key is being pressed. Here is an excerpt from Greenfoot's documentation: public static boolean isKeyDown(java.lang.String keyName) Check whether a given key is currently pressed down.   Parameters: keyName:This is the name of the key to check.   This returns : true if the key is down.   Using isKeyDown() is easy. The ease of capturing and using input is one of the major strengths of Greenfoot. 
Here is example code that will pause the execution of the game if the "p" key is pressed:

if( Greenfoot.isKeyDown("p") ) {
    Greenfoot.stop();
}

Next, we will discuss canMoveLeft(), moveLeft(), and animateLeft(). The canMoveRight(), moveRight(), and animateRight() methods mirror their functionality and will not be discussed. The sole purpose of canMoveLeft() is to prevent the actor from walking off the left-hand side of the screen. The moveLeft() method moves the actor using setLocation() and then animates the actor to look as though it is moving to the left-hand side. It uses a delay variable to make the walking speed look natural (not too fast). The animateLeft() method sequentially displays the walking-left images.

Platforms

The game contains several platforms that the player can jump or stand on. The platforms perform no actions and only serve as placeholders for images. We use inheritance to simplify collision detection. Here is the implementation of Platform:

import greenfoot.*;

public class Platform extends Actor {
}

Here's the implementation of BrickWall:

import greenfoot.*;

public class BrickWall extends Platform {
}

Here's the implementation of Brick:

import greenfoot.*;

public class Brick extends Platform {
}

You should now be able to compile and test Cupcake Counter. Make sure you handle any typos or other errors you introduced while inputting the code. For now, you can only move left and right.

Summary

We have created a simple game using basic movements in Greenfoot.

Resources for Article: Further resources on this subject: A Quick Start Guide to Scratch 2.0 [article] Games of Fortune with Scratch 1.4 [article] Cross-platform Development - Build Once, Deploy Anywhere [article]

Resource Manager on CentOS 6

Packt
27 Apr 2015
19 min read
In this article by Mitja Resman, author of the book CentOS High Availability, we will learn cluster resource management on CentOS 6 with the RGManager cluster resource manager. We will learn how and where to find the information you require about the cluster resources that are supported by RGManager, and all the details about cluster resource configuration. We will also learn how to add, delete, and reconfigure resources and services in your cluster. Then we will learn how to start, stop, and migrate resources from one cluster node to another. When we are done with this article, your cluster will be configured to run and provide end users with a service. (For more resources related to this topic, see here.)

Working with RGManager

When we work with RGManager, the cluster resources are configured within the /etc/cluster/cluster.conf CMAN configuration file. RGManager has a dedicated section in the CMAN configuration file defined by the <rm> tag. The configuration within the <rm> tag belongs to RGManager. The RGManager section begins with the <rm> tag and ends with the </rm> tag. This syntax is common for XML files. The RGManager section must be defined within the <cluster> section of the CMAN configuration file, but not within the <clusternodes> or <fencedevices> sections. We will be able to review the exact configuration syntax from the example configuration file provided in the next paragraphs. The following elements can be used within the <rm> RGManager tag:

Failover Domain (tag: <failoverdomains></failoverdomains>): A failover domain is a set of cluster nodes that are eligible to run a specific cluster service in the event of a cluster node failure. More than one failover domain can be configured, with different rules applied for different cluster services.

Global Resources (tag: <resources></resources>): Global cluster resources are globally configured resources that can be referenced when configuring cluster services. Global cluster resources simplify the process of cluster service configuration because a service refers to a resource by its global resource name.

Cluster Service (tag: <service></service>): A cluster service usually combines more than one resource to provide a service. The order of the resources provided within a cluster service is important because it defines the resource start and stop order.

The most commonly used and important RGManager command-line tools are as follows:

clustat: The clustat command provides cluster status information. It also provides information about the cluster, cluster nodes, and cluster services.

clusvcadm: The clusvcadm command provides cluster service management commands such as start, stop, disable, enable, relocate, and others.

By default, RGManager logging is configured to log information related to RGManager to syslog (the /var/log/messages file). If the logfile parameter in the Corosync configuration file's logging section is configured, information related to RGManager will be logged in the location specified by the logfile parameter. The default RGManager log file is named rgmanager.log. Let's start with the details of RGManager configuration.

Configuring failover domains

The <rm> tag in the CMAN configuration file usually begins with the definition of a failover domain, but configuring a failover domain is not required for normal operation of the cluster. A failover domain is a set of cluster nodes with configured failover rules.
The failover domain is attached to the cluster service configuration; in the event of a cluster node failure, the configured cluster service's failover domain rules are applied. Failover domains are configured within the <rm> RGManager tag. The failover domain configuration begins with the <failoverdomains> tag and ends with the </failoverdomains> tag. Within the <failoverdomains> tag, you can specify one or more failover domains in the following form: <failoverdomain failoverdomainname failoverdomain_options> </failoverdomain> The failoverdomainname parameter is a unique name provided for the failover domain in the form of name="desired_name". The failoverdomain_options options are the rules that we apply to the failover domain. The following rules can be configured for a failover domain: Unrestricted: (parameter: restricted="0"): This failover domain configuration allows you to run a cluster service on any of the configured cluster nodes. Restricted: (parameter: restricted="1"): This failover domain configuration allows you to restrict a cluster service to run on the members you configure. Ordered: (parameter: ordered="1"): This failover domain configuration allows you to configure a preference order for cluster nodes. In the event of cluster node failure, the preference order is taken into account. The order of the listed cluster nodes is important because it is also the priority order. Unordered: (parameter: ordered="0"): This failover domain configuration allows any of the configured cluster nodes to run a specific cluster service. Failback: (parameter: nofailback="0"): This failover domain configuration allows you to configure failback for the cluster service. This means the cluster service will fail back to the originating cluster node once the cluster node is operational. Nofailback: (parameter: nofailback="1"): This failover domain configuration allows you to disable the failback of the cluster service back to the originating cluster node once it is operational. Within the <failoverdomain> tag, the desired cluster nodes are configured with a <failoverdomainnode> tag in the following form: <failoverdomainnode nodename/> The nodename parameter is the cluster node name as provided in the <clusternode> tag of the CMAN configuration file. You can add the following simple failover domain configuration to your existing CMAN configuration file. In the following screenshot, you can see the CMAN configuration file with a simple failover domain configuration. The previous example shows a failover domain named simple with no failback, no ordering, and no restrictions configured. All three cluster nodes are listed as failover domain nodes. Note that it is important to change the config_version parameter in the second line on every CMAN cluster configuration file. Once you have configured the failover domain, you need to validate the cluster configuration file. A valid CMAN configuration is required for normal operation of the cluster. If the validation of the cluster configuration file fails, recheck the configuration file for common typo errors. In the following screenshot, you can see the command used to check the CMAN configuration file for errors: Note that, if a specific cluster node is not online, the configuration file will have to be transferred manually and the cluster stack software will have to be restarted to catch up once it comes back online. Once your configuration is validated, you can propagate it to other cluster nodes. 
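As a rough sketch of these two steps run from the node-1 cluster node (the commands shown in the book's screenshots may differ; cman_tool version -r assumes that the ricci service is running on all cluster nodes so that the configuration can be synchronized):

# Validate the local /etc/cluster/cluster.conf
ccs_config_validate

# Propagate the configuration (with its increased config_version) to the other cluster nodes
cman_tool version -r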
In this screenshot, you can see the CMAN configuration file propagation command used on the node-1 cluster node: For successful CMAN configuration file distribution to the other cluster nodes, the CMAN configuration file's config_version parameter number must be increased. You can confirm that the configuration file was successfully distributed by issuing the ccs_config_dump command on any of the other cluster nodes and comparing the XML output. Adding cluster resources and services The difference between cluster resources and cluster services is that a cluster service is a service built from one or more cluster resources. A configured cluster resource is prepared to be used within a cluster service. When you are configuring a cluster service, you reference a configured cluster resource by its unique name. Resources Cluster resources are defined within the <rm> RGManager tag of the CMAN configuration file. They begin with the <resources> tag and end with the </resources> tag. Within the <resources> tag, all cluster resources supported by RGManager can be configured. Cluster resources are configured with resource scripts, and all RGManager-supported resource scripts are located in the /usr/share/cluster directory along with the cluster resource metadata information required to configure a cluster resource. For some cluster resources, the metadata information is listed within the cluster resource scripts, while others have separate cluster resource metadata files. RGManager reads metadata from the scripts while validating the CMAN configuration file. Therefore, knowing the metadata information is the best way to correctly define and configure a cluster resource. The basic syntax used to configure a cluster resource is as follows: <resource_agent_name resource_options"/> The resource_agent_name parameter is provided in the cluster resource metadata information and is defined as name. The resource_options option is cluster resource-configurable options as provided in the cluster resource metadata information. If you want to configure an IP address cluster resource, you should first review the IP address of the cluster resource metadata information, which is available in the /usr/share/cluster/ip.sh script file. The syntax used to define an IP address cluster resource is as follows: <ip ip_address_options/> We can configure a simple IPv4 IP address, such as 192.168.88.50, and bind it to the eth1 network interface by adding the following line to the CMAN configuration: <ip address="192.168.88.50" family="IPv4" prefer_interface="eth1"/> The address option is the IP address you want to configure. The family option is the address protocol family. The prefer_interface option binds the IP address to the specific network interface. By reviewing the IP address of resource metadata information we can see that a few additional options are configurable and well explained: monitor_link nfslock sleeptime disable_rdisc If you want to configure an Apache web server cluster resource, you should first review the Apache web server resource's metadata information in the /usr/share/cluster/apache.metadata metadata file. 
The syntax used to define an Apache web server cluster resource is as follows:

<apache apache_web_server_options/>

We can configure a simple Apache web server cluster resource by adding the following line to the CMAN configuration file:

<apache name="apache" server_root="/etc/httpd" config_file="conf/httpd.conf" shutdown_wait="60"/>

The name parameter is the unique name provided for the apache cluster resource. The server_root option provides the Apache installation location. If no server_root option is provided, the default value is /etc/httpd. The config_file option is the path to the main Apache web server configuration file, relative to the server_root directory. If no config_file option is provided, the default value is conf/httpd.conf. The shutdown_wait option is the number of seconds to wait for a correct end-of-service shutdown. By reviewing the Apache web server resource metadata, you can see that a few additional options are configurable and well explained: httpd, httpd_options, and service_name.

We can add the IP address and Apache web server cluster resources to the example configuration we are building, as follows:

<resources>
<ip address="192.168.10.50" family="IPv4" prefer_interface="eth1"/>
<apache name="apache" server_root="/etc/httpd" config_file="conf/httpd.conf" shutdown_wait="60"/>
</resources>

Do not forget to increase the config_version parameter number. Make sure that you validate the cluster configuration file with every change. In the following screenshot, you can see the command used to validate the CMAN configuration: After we've validated our configuration, we can distribute the cluster configuration file to other nodes. In this screenshot, you can see the command used to distribute the CMAN configuration file from the node-1 cluster node to other cluster nodes:

Services

The cluster services are defined within the <rm> RGManager tag of the CMAN configuration file, after the cluster resources tag. They begin with the <service> tag and end with the </service> tag. The syntax used to define a service is as follows:

<service service_options>
</service>

The resources within a cluster service are references to the globally configured cluster resources. The order of the cluster resources configured within the cluster service is important because it also defines the resource start order. The syntax for cluster resource configuration within the cluster service is as follows:

<service service_options>
<resource_agent_name ref="referenced_cluster_resource_name"/>
</service>

The service options can be the following:

Autostart (parameter: autostart="1"): This parameter starts services when RGManager starts. By default, RGManager starts all services when it is started and quorum is present.

Noautostart (parameter: autostart="0"): This parameter disables the start of all services when RGManager starts.

Restart recovery (parameter: recovery="restart"): This is RGManager's default recovery policy. On failure, RGManager will restart the service on the same cluster node. If the service restart fails, RGManager will relocate the service to another operational cluster node.

Relocate recovery (parameter: recovery="relocate"): On failure, RGManager will try to start the service on other operational cluster nodes.

Disable recovery (parameter: recovery="disable"): On failure, RGManager will place the service in the disabled state.

Restart disable recovery (parameter: recovery="restart-disable"): On failure, RGManager will try to restart the service on the same cluster node.
If the restart fails, it will place the service in the disabled state. Additional restart policy extensions are available, as follows: Maximum restarts (parameter: max_restarts="N"; where N is the desired integer value): the maximum restarts parameter is defined by an integer that specifies the maximum number of service restarts before taking additional recovery policy actions Restart expire time (parameter: restart_expire_time="N"; where N is the desired integer value in seconds): The restart expire time parameter is defined by an integer value in seconds, and configures the time to remember a restart event We can configure a web server cluster service with respect to the configured IP address and Apache web server resources with the following CMAN configuration file syntax: <service name="webserver" autostart="1" recovery="relocate"> <ip ref="192.168.88.50"/> <apache ref="apache"/> </service> A minimal configuration of a web server cluster service requires a cluster IP address and an Apache web server resource. The name parameter defines a unique name for the web server cluster service. The autostart parameter defines an automatic start of the webserver cluster service on RGManager startup. The recovery parameter configures the restart of the web server cluster service on other cluster nodes in the event of failure. We can add the web server cluster service to the example CMAN configuration file we are building, as follows. <resources> <ip address="192.168.10.50" family="IPv4"   prefer_interface="eth1"/> <apache name="apache" server_root="/etc/httpd"   config_file="conf/httpd.conf" shutdown_wait="60"/> </resources> <service name="webserver" autostart="1" recovery="relocate"> <ip ref="192.168.10.50"/> <apache ref="apache"/> </service> Do not forget to increase the config_version parameter. Make sure you validate the cluster configuration file with every change. In the following screenshot, we can see the command used to validate the CMAN configuration: After you've validated your configuration, you can distribute the cluster configuration file to other nodes. In this screenshot, we can see the command used to distribute the CMAN configuration file from the node-1 cluster node to other cluster nodes: With the final distribution of the cluster configuration, a cluster service is configured and RGManager starts the cluster service called webserver. You can use the clustat command to check whether the web server cluster service was successfully started and also which cluster node it is running on. In the following screenshot, you can see the clustat command issued on the node-1 cluster node: Let's take a look at the following terms: Service Name: This column defines the name of the service as configured in the CMAN configuration file. Owner: This column lists the node the service is running on or was last running on. State: This column provides information about the status of the service. Managing cluster services Once you have configured the cluster services as you like, you must learn how to manage them. We can manage cluster services with the clusvcadm command and additional parameters. The syntax of the clusvcadm command is as follows: clusvcadm [parameter] With the clusvcadm command, you can perform the following actions: Disable service (syntax: clusvcadm -d <service_name>): This stops the cluster service and puts it into the disabled state. This is the only permitted operation if the service in question is in the failed state. 
Start service (syntax: clusvcadm -e <service_name> -m <cluster_node>): This starts a non-running cluster service. It optionally provides the cluster node name you would like to start the service on. Relocate service (syntax: clusvcadm -r <service_name> -m <cluster_node>): This stops the cluster service and starts it on a different cluster node as provided with the -m parameter. Migrate service (syntax: clusvcadm -M <service_name> -m <cluster_node>): Note that this applies only to virtual machine live migrations. Restart service (syntax: clusvcadm -R <service_name>): This stops and starts a cluster service on the same cluster node. Stop service (syntax: clusvcadm -s <service_name>): This stops the cluster service and keeps it on the current cluster node in the stopped state. Freeze service (syntax: clusvcadm -Z <service_name>): This keeps the cluster service running on the current cluster node but disables service status checks and service failover in the event of a cluster node failure. Unfreeze service (syntax: clusvcadm -U <service_name>): This takes the cluster service out of the frozen state and enables service status checks and failover. We can continue with the previous example and migrate the webserver cluster service from the currently running node-1 cluster node to the node-3 cluster node. To achieve cluster service relocation, the clusvcadm command with the relocate service parameter must be used, as follows. In the following screenshot, we can see the command issued to migrate the webserver cluster service to the node-3 cluster node: The clusvcadm command is the cluster service command used to administer and manage cluster services. The -r webserver parameter provides information that we need to relocate a cluster service named webserver. The -m node-3 command provides information on where we want to relocate the cluster service. Once the cluster service migration command completes, the webserver cluster service will be relocated to the node-3 cluster node. The clustat command shows that the webserver service is now running on the node-3 cluster node. In this screenshot, we can see that the webserver cluster service was successfully relocated to the node-3 cluster node: We can easily stop the webserver cluster service by issuing the appropriate command. In the following screenshot, we can see the command used to stop the webserver cluster service: The clusvcadm command is the cluster service command used to administer and manage cluster services. The -s webserver parameter provides the information that you require to stop a cluster service named webserver. Another take at the clustat command should show that the webserver cluster service has stopped; it also provides the information that the last owner of the running webserver cluster service is the node-3 cluster node. In this screenshot, we can see the output of the clustat command, showing that the webserver cluster service is running on the node-3 cluster node: If we want to start the webserver cluster service on the node-1 cluster node, we can do this by issuing the appropriate command. In the following screenshot, we can see the command used to start the webserver cluster service on the node-1 cluster node: clusvcadm is the cluster service command used to administer and manage cluster services. The -e webserver parameter provides the information that you need to start a webserver cluster service. 
The -m node-1 parameter provides the information that you need to start the webserver cluster service on the node-1 cluster node. As expected, another look at the clustat command should make it clear that the webserver cluster service has started on the node-1 cluster node, as follows. In this screenshot, you can see the output of the clustat command, showing that the webserver cluster service is running on the node -1 cluster node: Removing cluster resources and services Removing cluster resources and services is the reverse of adding them. Resources and services are removed by editing the CMAN configuration file and removing the lines that define the resources or services you would like to remove. When removing cluster resources, it is important to verify that the resources are not being used within any of the configured or running cluster services. As always, when editing the CMAN configuration file, the config_version parameter must be increased. Once the CMAN configuration file is edited, you must run the CMAN configuration validation check for errors. When the CMAN configuration file validation succeeds, you can distribute it to all other cluster nodes. The procedure for removing cluster resources and services is as follows: Remove the desired cluster resources and services and increase the config_version number. Validate the CMAN configuration file. Distribute the CMAN configuration file to all other nodes. We can proceed to remove the webserver cluster service from our example cluster configuration. Edit the CMAN configuration file and remove the webserver cluster service definition. Remember to increase the config_version number. Validate your cluster configuration with every CMAN configuration file change. In this screenshot, we can see the command used to validate the CMAN configuration: When your cluster configuration is valid, you can distribute the CMAN configuration file to all other cluster nodes. In the following screenshot, we can see the command used to distribute the CMAN configuration file from the node-1 cluster node to other cluster nodes: Once the cluster configuration is distributed to all cluster nodes, the webserver cluster service will be stopped and removed. The clustat command shows no service configured and running. In the following screenshot, we can see that the output of the clustat command shows no cluster service called webserver existing in the cluster: Summary In this article, you learned how to add and remove cluster failover domains, cluster resources, and cluster services. We also learned how to start, stop, and migrate cluster services from one cluster node to another, and how to remove cluster resources and services from a running cluster configuration. Resources for Article: Further resources on this subject: Replication [article] Managing public and private groups [article] Installing CentOS [article]

Profiling an app

Packt
24 Apr 2015
9 min read
This article is written by Cecil Costa, the author of the book, Swift Cookbook. We'll delve into what profiling is and how we can profile an app by following some simple steps. It's very common to hear about issues, but if an app doesn't have any important issue, it doesn't mean that it is working fine. Imagine that you have a program that has a memory leak, presumably you won't find any problem using it for 10 minutes. However, a user may find it after using it for a few days. Don't think that this sort of thing is impossible; remember that iOS apps don't terminate, so if you do have memory leaks, it will be kept until your app blows up. Performance is another important, common topic. What if your app looks okay, but it gets slower with the passing of time? We, therefore, have to be aware of this problem. This kind of test is called profiling and Xcode comes with a very good tool for realizing this operation, which is called Instruments. In this instance, we will profile our app to visualize the amount of energy wasted by our app and, of course, let's try to reduce it. (For more resources related to this topic, see here.) Getting ready For this recipe you will need a physical device, and to install your app into the device you will need to be enrolled on the Apple Developer Program. If you have both the requirements, the next thing you have to do is create a new project called Chapter 7 Energy. How to do it... To profile an app, follow these steps: Before we start coding, we will need to add a framework to the project. Click on the Build Phases tab of your project, go to the Link Binaries with Libraries section, and press the plus sign. Once Xcode opens a dialog window asking for the framework to add, choose CoreLocation and MapKit. Now, go to the storyboard and place a label and a MapKit view. You might have a layout similar to this one: Link the MapKit view and call it just map and the UILabel class and call it just label:    @IBOutlet var label: UILabel!    @IBOutlet var map: MKMapView! Continue with the view controller; let's click at the beginning of the file to add the core location and MapKit imports: import CoreLocation import MapKit After this, you have to initialize the location manager object on the viewDidLoad method:    override func viewDidLoad() {        super.viewDidLoad()        locationManager.delegate = self        locationManager.desiredAccuracy =          kCLLocationAccuracyBest        locationManager.requestWhenInUseAuthorization()        locationManager.startUpdatingLocation()    } At the moment, you may get an error because your view controller doesn't conform with CLLocationManagerDelegate, so let's go to the header of the view controller class and specify that it implements this protocol. Another error we have to deal with is the locationManager variable, because it is not declared. Therefore, we have to create it as an attribute. 
And as we are declaring attributes, we will add the geocoder, which will be used later: class ViewController: UIViewController, CLLocationManagerDelegate {    var locationManager = CLLocationManager()    var geocoder = CLGeocoder() Before we implement this method that receives the positioning, let's create another method to detect whether there was any authorization error:    func locationManager(manager: CLLocationManager!,       didChangeAuthorizationStatus status:          CLAuthorizationStatus) {            var locationStatus:String            switch status {            case CLAuthorizationStatus.Restricted:                locationStatus = "Access: Restricted"               break            case CLAuthorizationStatus.Denied:                locationStatus = "Access: Denied"                break            case CLAuthorizationStatus.NotDetermined:                locationStatus = "Access: NotDetermined"               break            default:                locationStatus = "Access: Allowed"            }            NSLog(locationStatus)    } And then, we can implement the method that will update our location:    func locationManager(manager:CLLocationManager,      didUpdateLocations locations:[AnyObject]) {        if locations[0] is CLLocation {            let location:CLLocation = locations[0] as              CLLocation            self.map.setRegion(              MKCoordinateRegionMakeWithDistance(            location.coordinate, 800,800),              animated: true)                       geocoder.reverseGeocodeLocation(location,              completionHandler: { (addresses,              error) -> Void in                    let placeMarket:CLPlacemark =                      addresses[0] as CLPlacemark                let curraddress:String = (placeMarket.                  addressDictionary["FormattedAddressLines"                  ] as [String]) [0] as String                    self.label.text = "You are at                      (curraddress)"            })        }    } Before you test the app, there is still another step to follow. In your project navigator, click to expand the supporting files, and then click on info.plist. Add a row by right-clicking on the list and selecting add row. On this new row, type NSLocationWhenInUseUsageDescription as a key and on value Permission required, like the one shown here: Now, select a device and install this app onto it, and test the application walking around your street (or walking around the planet earth if you want) and you will see that the label will change, and also the map will display your current position. Now, go back to your computer and plug the device in again. Instead of clicking on play, you have to hold the play button until you see more options and then you have to click on the Profile option. The next thing that will happen is that instruments will be opened; probably, a dialog will pop up asking for an administrator account. This is due to the fact that instruments need to use some special permission to access some low-level information. On the next dialog, you will see different kinds of instruments, some of them are OS X specific, some are iOS specific, and others are for both. If you choose the wrong platform instrument, the record button will be disabled. For this recipe, click on Energy Diagnostics. 
Once the Energy Diagnostics window is open, you can click on the record button, which is in the upper-left corner, and try to move around (yes, you need to keep the device connected to your computer, so you have to move around with both together) and perform some actions with your device, such as pressing the home button and turning off the screen. Now, you may have a screen that displays an output similar to this one: Now, you can analyze what is consuming the most energy in your app. To get a better idea of this, go to your code and replace the constant kCLLocationAccuracyBest with kCLLocationAccuracyThreeKilometers and check whether you have saved some energy.

How it works...

Instruments is a tool used for profiling your application. It gives you information about your app that can't be retrieved by code, or at least can't be retrieved easily. You can check whether your app has memory leaks, whether it is losing performance, and, as you can see, whether it is wasting lots of energy or not. In this recipe, we used the GPS because it is a sensor that requires some energy. You can also check the table at the bottom of the instrument to see which Internet requests were completed; making requests very frequently will also drain your battery quickly. Something you might be asking is: why did we have to change info.plist? Since iOS 8, some sensors require user permission; the GPS is one of them, so you need to provide the message that will be shown to the user.

There's more...

I recommend that you read about how the instruments you will use work. Check the Apple documentation about Instruments to get more details (https://developer.apple.com/library/mac/documentation/DeveloperTools/Conceptual/InstrumentsUserGuide/Introduction/Introduction.html).

Summary

In this article, we looked at the hows and whats of profiling an app. We specifically looked at profiling our app to visualize the amount of energy it wastes. So, go ahead and try it.

Resources for Article: Further resources on this subject: Playing with Swift [article] Using OpenStack Swift [article] Android Virtual Device Manager [article]

Project Management

Packt
24 Apr 2015
17 min read
In this article by Patrick Li, author of the book JIRA Essentials - Third Edition, we will start with a high-level view of how data is structured in JIRA. We will then take a look at the various user interfaces that JIRA has for working with projects, both as an administrator and an everyday user. We will also introduce permissions for the first time, in the context of projects, and will expand on this. In this article, you will learn the following:

How JIRA structures content
Different user interfaces for project management in JIRA
How to create new projects in JIRA
How to import data from other systems into JIRA
How to manage and configure a project
How to manage components and versions

(For more resources related to this topic, see here.)

The JIRA hierarchy

Like most other information systems, JIRA organizes its data in a hierarchical structure. At the lowest level, we have fields, which are used to hold raw information. At the next level up, we have issues, which are like units of work to be performed. An issue will belong to a project, which defines the context of the issue. Finally, we have project categories, which logically group similar projects together. The following figure illustrates the hierarchy we just talked about:

Project category

A project category is a logical grouping for projects, usually of a similar nature. Project categories are optional; projects do not have to belong to any category in JIRA. When a project does not belong to any category, it is considered uncategorized. The categories themselves do not contain any information; they serve as a way to organize all your projects in JIRA, especially when you have many of them.

Project

In JIRA, a project is a collection of issues. Projects provide the background context for issues by letting users know where issues should be created. Users will be members of a project, working on issues in the project. Most configurations in JIRA, such as permissions and screen layouts, are applied at the project level. It is important to remember that projects are not limited to software development projects that need to deliver a product. They can be anything logical, such as the following:

Company department or team
Software development projects
Products or systems
A risk register

Issue

Issues represent work to be performed. From a functional perspective, an issue is the base unit for JIRA. Users create issues and assign them to other people to be worked on. Project leaders can generate reports on issues to see how everything is tracking. In a sense, you can say JIRA is issue-centric. Here, you just need to remember three things:

An issue can belong to only one project
There can be many different types of issues
An issue contains many fields that hold values for the issue

Field

Fields are the most basic unit of data in JIRA. They hold data for issues and give meaning to them. Fields in JIRA can be broadly categorized into two distinct categories, namely, system fields and custom fields. They come in many different forms, such as text fields, drop-down lists, and user pickers. Here, you just need to remember three things:

Fields hold values for issues
Fields can have behaviors (hidden or mandatory)
Fields can have a view and structure (text field or drop-down list)

Project permissions

Before we start working with projects in JIRA, we need to first understand a little bit about permissions. We will briefly talk about the permissions related to creating and deleting, administering, and browsing projects.
In JIRA, users with the JIRA administrator permission will be able to create and delete projects. By default, users in the jira-administrators group have this permission, so the administrator user we created during the installation process will be able to create new projects. We will be referring to this user and any other users with this permission as JIRA Administrator. For any given project, users with the Administer Project permission for that project will be able to administer the project's configuration settings. This allows them to update the project's details, manage versions and components, and decide who will be able to access this project. We will be referring to users with this permission as the Project Administrator. By default, the JIRA Administrator will have this permission. If a user needs to browse the contents of a given project, then he must have the Browse Project permission for that project. This means that the user will have access to the Project Browser interface for the project. By default, the JIRA Administrator will have this permission. As you have probably realized already, one of the key differences in the three permissions is that the JIRA Administrator's permission is global, which means it is global across all projects in JIRA. The Administer Project and Browse Project permissions are project-specific. A user may have the Administer Project permission for project A, but only Browse Project permission for project B. As we will see the separation of permissions allows you to set up your JIRA instance in such a way that you can effectively delegate permission controls, so you can still centralize control on who can create and delete projects, but not get over-burdened with having to manually manage each project on its own settings. Now with this in mind, let's first take a quick look at JIRA from the JIRA Administrator user's view. Creating projects To create a new project, the easiest way is to select the Create Project menu option from the Projects drop-down menu from the top navigation bar. This will bring up the create project dialog. Note that, as we explained, you need to be a JIRA Administrator (such as the user we created during installation) to create projects. This option is only available if you have the permission. When creating a new project in JIRA, we need to first select the type of project we want to create, from a list of project templates. Project template, as the name suggests, acts as a blueprint template for the project. Each project template has a predefined set of configurations such as issue type and workflows. For example, if we select the Simple Issue Tracking project template, and click on the Next button. JIRA will show us the issue types and workflow for the Simple Issue Tracking template. If we select a different template, then a different set of configurations will be applied. For those who have been using JIRA since JIRA 5 or earlier, JIRA Classic is the template that has the classic issue types and classic JIRA workflow. Clicking on the Select button will accept and select the project template. For the last step, we need to provide the new project's details. JIRA will help you validate the details, such as making sure the project key conforms to the configured format. After filling in the project details, click on the Submit button to create the new project. The following table lists the information you need to provide when creating a new project: Field Description Name A unique name for the project. 
Key: A unique identity key for the project. As you type the name of your project, JIRA will auto-fill the key based on the name, but you can replace the autogenerated key with one of your own. Starting from JIRA 6.1, the project key cannot be changed after the project is created. The project key will also become the first part of the issue key for issues created in the project.
Project Lead: The lead of the project, which can be used to auto-assign issues. Each project can have only one lead. This option is available only if you have more than one user in JIRA.
Changing the project key format
When creating new projects, you may find that the project key needs to be in a specific format and length. By default, the project key needs to adhere to the following criteria:
Contain at least two characters
Cannot be more than 10 characters in length
Contain only letters, that is, no numbers or special characters
You can change the default format to have less restrictive rules. These changes are for advanced users only. First, to change the maximum project key length, perform the following steps:
Browse to the JIRA Administration console.
Select the System tab and then the General Configuration option.
Click on the Edit Settings button.
Change the value for the Maximum project key size option to a value between 2 and 255 (inclusive), and click on the Update button to apply the change.
Changing the project key format is a bit more complicated. JIRA uses a regular expression to define what the format should be. To change the project key format, use the following steps:
Browse to the JIRA Administration console.
Select the System tab and then the General Configuration option.
Click on the Advanced Settings button.
Hover over and click on the value (([A-Z][A-Z]+)) for the jira.projectkey.pattern option.
Enter the new regular expression for the project key format, and click on Update.
There are a few rules when it comes to setting the project key format:
The key must start with a letter
All letters must be uppercase, that is, (A-Z)
Only letters, numbers, and the underscore character can be used
Importing data into JIRA
JIRA supports importing data directly from many popular issue-tracking systems, such as Bugzilla, GitHub, and Trac. All the importers have a wizard-driven interface, guiding you through a series of steps. These steps are mostly identical, with a few differences. Generally speaking, there are four steps when importing data into JIRA, as follows:
Select your source data. For example, if you are importing from CSV, this means selecting the CSV file; if you are importing from Bugzilla, it means providing the Bugzilla database details.
Select a destination project that the imported issues will go into. This can be an existing project or a new project created on the fly.
Map old system fields to JIRA fields.
Map old system field values to JIRA field values. This is usually required for select-based fields, such as the priority field, or select list custom fields.
Importing data through CSV
JIRA comes with a CSV importer, which lets you import data in the comma-separated value format. This is a useful tool if you want to import data from a system that is not directly supported by JIRA, since most systems are able to export their data in CSV. It is recommended to do a trial import on a test instance first. Use the following steps to import data through a CSV file:
Select the Import External Project option from the Projects drop-down menu.
Click on the Import from Comma-separated Values (CSV) importer option. This will start the import wizard.
First, you need to select the CSV file that contains the data you want to import, by clicking on the Choose File button. After you have selected the source file, you can also expand the Advanced section to select the file encoding and delimiter used in the CSV file. There is also a Use an existing configuration file option; we will talk about this later in this section. Click on the Next button to proceed.
For the second step, you need to select the project you want to import your data into. You can also select the Create New option to create a new project on the fly. If your CSV file contains date-related data, make sure you enter the format used in the Date format field. Click on the Next button to proceed.
For the third step, you need to map the CSV fields to the fields in JIRA. Not all fields need to be mapped. If you do not want to import a particular field, simply leave it as Don't map this field for the corresponding JIRA field selection. For fields that contain data that needs to be mapped manually, such as select list fields, you need to check the Map field value option. This will let you map the CSV field value to the JIRA field value, so they can be imported correctly. If you do not manually map these values, they will be copied over as is. Click on the Next button to proceed.
For the last step, you need to map the CSV field values to the JIRA field values. This step is only required if you have checked the Map field value option for a field in the previous step. Enter the JIRA field value for each CSV field value. Once you are done with mapping field values, click on the Begin Import button to start the actual import process. Depending on the size of your data, this may take some time to complete.
Once the import process completes, you will get a confirmation message that tells you the number of issues that have been imported. This number should match the number of records you have in the CSV file. On the last confirmation screen, you can click on the download a detailed log link to download the full log file containing all the information for the import process. This is particularly useful if the import was not successful. You can also click on the save the configuration link, which will generate a text file containing all the mappings you have done for this import. If you need to run a similar import in the future, you can use this configuration file so that you will not need to manually re-map everything again. To use this configuration file, check the Use an existing configuration file option in step one.
As we can see, JIRA's project importer makes importing data from other systems simple and straightforward. However, you must not underestimate its complexity. For any data migration, especially if you are moving off one platform and onto a new one, such as JIRA, there are a number of factors you need to consider and prepare for. The following list summarizes some of the common tasks for most data migrations:
Evaluate the size and impact. This includes how many records you will be importing and also the number of users that will be impacted by this.
Perform a full gap analysis between the old system and JIRA, such as how the fields will map from one to the other.
Set up test environments for you to run test imports on to make sure you have your mappings done correctly.
Involve your end users as early as possible, and have them review your test results.
Prepare and communicate any outages and the support procedure post-migration.
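To make the mapping steps more concrete, the following is a minimal, purely illustrative CSV file of the kind the importer can consume. The column names and values here are hypothetical examples rather than a required format: during the field-mapping step you would map each column (for example, Summary or Priority) to the corresponding JIRA field, and the Priority values would then be matched to JIRA's own priority values in the value-mapping step.

Summary, Assignee, Priority, Date Created
"Printer on floor 3 is offline", jsmith, High, 16/03/2015 09:15
"Cannot log in to VPN", akhan, Medium, 16/03/2015 14:02
"Request a new keyboard", jdoe, Low, 17/03/2015 16:40

If you include a date column like the one above, remember to enter a matching date format (for example, dd/MM/yyyy HH:mm) in the Date format field during the second step of the wizard.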
Project user interfaces There are two distinctive interfaces for projects in JIRA. The first interface is designed for everyday users, providing useful information on how the project is going with graphs and statistics, called Project Browser. The second interface is designed for project administrators to control project configuration settings, such as permissions and workflows, called Project Administration. The Help Desk project In this exercise, we will be setting up a project for our support teams: A new project category for all support teams A new project for our help desk support team Components for the systems supported by the team Versions to better manage issues created by users Creating a new project category Let's start by creating a project category. We will create a category for all of our internal support teams and their respective support JIRA projects. Please note that this step is optional as JIRA does not require any project to belong to a project category: Log in to JIRA with a user who has JIRA Administrator's permission. Browse to the JIRA administration console. Select the Projects tab and Project Categories. Fill in the fields as shown in the following screenshot. Click on Add to create the new project category. Creating a new project Now that we have a project category created, let's create a project for our help desk support team. To create a new project, perform the following steps: Bring up the create project dialog by selecting the Create Project option from the Projects drop-down menu. Select the Simple Issue Tracking project template. Name our new project as Global Help Desk and accept the other default values for Key and Project Lead. Click on the Submit button to create the new project. You should now be taken to the Project Browser interface of your new project. Assigning a project to a category Having created the new project, you need to assign the new project to your project category, and you can do this from the Project Administration interface: Select the Administration tab. Click on the None link next to Category, on the top left of the page, right underneath the project's name. Select the new Support project category we just created. Click on Select to assign the project. Creating new components As discussed in the earlier sections, components are subsections of a project. This makes logical sense for a software development project, where each component will represent a software deliverable module. For other types of project, components may first appear useless or inappropriate. It is true that components are not for every type of project out there, and this is the reason why you are not required to have them by default. Just like everything else in JIRA, all the features come from how you can best map them to your business needs. The power of a component is more than just a flag field for an issue. For example, let's imagine that the company you are working for has a range of systems that need to be supported. These may range from phone systems and desktop computers to other business applications. Let's also assume that our support team needs to support all of the systems. Now, that is a lot of systems to support. To help manage and delegate support for these systems, we will create a component for each of the systems that the help desk team supports. We will also assign a lead for each of the components. 
This setup allows us to establish a structure where the Help Desk project is led by the support team lead, and each component is led by their respective system expert (who may or may not be the same as the team lead). This allows for a very flexible management process when we start wiring in other JIRA features, such as notification schemes: From the Project Administration interface, select the Components tab. Type Internal Phone System for the new component's name. Provide a short description for the new component. Select a user to be the lead of the component. Click on Add to create the new component. Add a few more components. Putting it together Now that you have fully prepared your project, let's see how everything comes together by creating an issue. If everything is done correctly, you should see a dialog box similar to the next screenshot, where you can choose your new project to create the issue in and also the new components that are available for selection: Click on the Create button from the top navigation bar. This will bring up the Create Issue dialog box. Select Global Help Desk for Project. Select Task for Issue Type, and click on the Next button. Fill in the fields with some dummy data. Note that the Component/s field should display the components we just created. Click on the Create button to create the issue. You can test out the default assignee feature by leaving the Assignee field as Automatic, select a component, and JIRA will automatically assign the issue to the default assignee defined for the component. If everything goes well, the issue will be created in the new project. Summary In this article, we looked at one of the most important concepts in JIRA, projects, and how to create and manage them. Permissions were introduced for the first time, and we looked at three permissions that are related to creating and deleting, administering, and browsing projects. We were introduced to the two interfaces JIRA provides for project administrators and everyday users, the Project Administration interface and Project Browser interface, respectively. Resources for Article: Further resources on this subject: Securing your JIRA 4 [article] Advanced JIRA 5.2 Features [article] Validating user input in workflow transitions [article]

Advanced Playbooks

Packt
24 Apr 2015
10 min read
In this article by the author, Daniel Hall, of the book, Ansible Configuration Management - Second Edition, we start digging a bit deeper into playbooks. We will be covering the following topics:
External data lookups
Storing results
Processing data
Debugging playbooks
(For more resources related to this topic, see here.)
External data lookups
Ansible introduced lookup plugins in version 0.9. These plugins allow Ansible to fetch data from outside sources. Ansible provides several plugins, but you can also write your own. This really opens the doors and allows you to be flexible in your configuration. Lookup plugins are written in Python and run on the controlling machine. They are executed in two different ways: direct calls and with_* keys. Direct calls are useful when you want to use them like you would use variables. Using the with_* keys is useful when you want to use them as loops. In the next example, we use a lookup plugin directly to get the http_proxy value from the environment and send it through to the configured machine. This makes sure that the machines we are configuring will use the same proxy server to download the file.
---
- name: Downloads a file using a proxy
  hosts: all
  tasks:
    - name: Download file
      get_url:
        dest: /var/tmp/file.tar.gz
        url: http://server/file.tar.gz
      environment:
        http_proxy: "{{ lookup('env', 'http_proxy') }}"
You can also use lookup plugins in the variable section. This doesn't immediately look up the result and put it in the variable as you might assume; instead, it stores it as a macro and looks it up every time you use it. This is good to know if you are using something whose value might change over time. Using lookup plugins in the with_* form will allow you to iterate over things you wouldn't normally be able to. You can use any plugin like this, but ones that return a list are most useful. In the following code, we show how to dynamically register a webapp farm.
---
- name: Registers the app server farm
  hosts: localhost
  connection: local
  vars:
    hostcount: 5
  tasks:
    - name: Register the webapp farm
      local_action: add_host name={{ item }} groupname=webapp
      with_sequence: start=1 end={{ hostcount }} format=webapp%02x
If you were using this example, you would append a task to create each as a virtual machine and then a new play to configure each of them. Situations where lookup plugins are useful are as follows:
Copying a whole directory of Apache config to a conf.d style directory
Using environment variables to adjust what the playbook does
Getting configuration from DNS TXT records
Fetching the output of a command into a variable
Storing results
Almost every module outputs something, even the debug module. Most of the time, the only variable used is the one named changed. The changed variable helps Ansible decide whether to run handlers or not and which color to print the output in. However, if you wish to, you can store the returned values and use them later in the playbook. In this example, we look at the mode of the /tmp directory and create a new directory named /tmp/subtmp with the same mode, as shown here.
---
- name: Using register
  hosts: ansibletest
  user: root
  tasks:
    - name: Get /tmp info
      file:
        dest: /tmp
        state: directory
      register: tmp
    - name: Set mode on /var/tmp
      file:
        dest: /tmp/subtmp
        mode: "{{ tmp.mode }}"
        state: directory
Some modules, such as the file module in the previous example, can be configured to simply give information. By combining this with the register feature, you can create playbooks that can examine the environment and calculate how to proceed. Combining the register feature and the set_fact module allows you to perform data processing on the data you receive back from modules. This allows you to compute values and perform data processing on these values. This makes your playbooks even smarter and more flexible than ever. Register allows you to make your own facts about hosts from modules already available to you. This can be useful in many different circumstances:
Getting a list of files in a remote directory and downloading them all with fetch
Running a task when a previous task changes, before the handlers run
Getting the contents of the remote host SSH key and building a known_hosts file
Processing data
Ansible uses Jinja2 filters to allow you to transform data in ways that aren't possible with basic templates. We use filters when the data available to us in our playbooks is not in the format we want, or requires further complex processing before it can be used with modules or templates. Filters can be used anywhere we would normally use a variable, such as in templates, as arguments to modules, and in conditionals. Filters are used by providing the variable name, a pipe character, and then the filter name. We can chain multiple filters by separating them with pipe characters; they are then applied from left to right. Here is an example where we ensure that all users are created with lowercase usernames:
---
- name: Create user accounts
  hosts: all
  vars:
    users:
  tasks:
    - name: Create accounts
      user: name={{ item|lower }} state=present
      with_items:
        - Fred
        - John
        - DanielH
Here are a few popular filters that you may find useful:
min: When the argument is a list, it returns only the smallest value.
max: When the argument is a list, it returns only the largest value.
random: When the argument is a list, it picks a random item from the list.
changed: When used on a variable created with the register keyword, it returns true if the task changed anything; otherwise, it returns false.
failed: When used on a variable created with the register keyword, it returns true if the task failed; otherwise, it returns false.
skipped: When used on a variable created with the register keyword, it returns true if the task was skipped; otherwise, it returns false.
default(X): If the variable does not exist, then the value of X will be used instead.
unique: When the argument is a list, it returns a list without any duplicate items.
b64decode: Converts the base64-encoded string in the variable to its binary representation. This is useful with the slurp module, as it returns its data as a base64-encoded string.
replace(X, Y): Returns a copy of the string with any occurrences of X replaced by Y.
join(X): When the variable is a list, it returns a string with all the entries separated by X.
Debugging playbooks
There are a few ways in which you can debug a playbook. Ansible includes both a verbose mode and a debug module specifically for debugging.
You can also use modules such as fetch and get_url for help. These debugging techniques can also be used to examine how modules behave when you wish to learn how to use them.
The debug module
Using the debug module is really quite simple. It takes two optional arguments, msg and fail. The msg argument sets the message that will be printed by the module, and fail, if set to yes, indicates a failure to Ansible, which will cause it to stop processing the playbook for that host. We used this module earlier in the skipping modules section to bail out of a playbook if the operating system was not recognized. In the following example, we will show how to use the debug module to list all the interfaces available on the machine:
---
- name: Demonstrate the debug module
  hosts: ansibletest
  user: root
  vars:
    hostcount: 5
  tasks:
    - name: Print interface
      debug:
        msg: "{{ item }}"
      with_items: ansible_interfaces
The preceding code gives the following output:
PLAY [Demonstrate the debug module] *********************************

GATHERING FACTS *****************************************************
ok: [ansibletest]

TASK: [Print interface] *********************************************
ok: [ansibletest] => (item=lo) => {"item": "lo", "msg": "lo"}
ok: [ansibletest] => (item=eth0) => {"item": "eth0", "msg": "eth0"}

PLAY RECAP **********************************************************
ansibletest               : ok=2   changed=0   unreachable=0   failed=0
As you can see, the debug module is easy to use to see the current value of a variable during the play.
The verbose mode
Your other option for debugging is the verbose option. When running Ansible with verbose, it prints out all the values that were returned by each module after it runs. This is especially useful if you are using the register keyword introduced in the previous section. To run ansible-playbook in verbose mode, simply add --verbose to your command line as follows:
ansible-playbook --verbose playbook.yml
The check mode
In addition to the verbose mode, Ansible also includes a check mode and a diff mode. You can use the check mode by adding --check to the command line, and --diff to use the diff mode. The check mode instructs Ansible to walk through the play without actually making any changes to remote systems. This allows you to obtain a listing of the changes that Ansible plans to make to the configured system. It is important to note here that the check mode of Ansible is not perfect. Any modules that do not implement the check feature are skipped. Additionally, if a module is skipped that provides more variables, or the variables depend on a module actually changing something (such as file size), then they will not be available. This is an obvious limitation when using the command or shell modules.
The diff mode shows the changes that are made by the template module. It is limited to the template module because template files only work with text; if you were to provide a diff of a binary file from the copy module, the result would be almost unreadable. The diff mode also works with the check mode to show you the planned changes that were not made due to being in check mode.
The pause module
Another technique is to use the pause module to pause the playbook while you examine the configured machine as it runs. This way, you can see changes that the modules have made at the current position in the play, and then watch while it continues with the rest of the play.
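The following is a minimal sketch of how this might look; the host name reuses the ansibletest host from the earlier examples, and the directory and prompt text are purely illustrative:
---
- name: Demonstrate the pause module
  hosts: ansibletest
  user: root
  tasks:
    - name: Make a change worth inspecting
      file:
        dest: /tmp/pause_example
        state: directory
    - name: Wait while the machine is inspected
      pause:
        prompt: "Check /tmp/pause_example on the host, then press Enter to continue"
When the play reaches the pause task, it stops and waits for you to press Enter (the pause module also accepts seconds or minutes arguments if you prefer a timed pause), after which the remaining tasks run as normal.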
Summary In this article, we explored the more advanced details of writing playbooks. You should now be able to use features such as delegation, looping, conditionals, and fact registration to make your plays much easier to maintain and edit. We also looked at how to access information from other hosts, configure the environment for a module, and gather data from external sources. Finally, we covered some techniques for debugging plays that are not behaving as expected. Resources for Article: Further resources on this subject: Static Data Management [article] Drupal 8 and Configuration Management [article] Introduction to Drupal Web Services [article]

Recording Your First Test

Packt
24 Apr 2015
17 min read
JMeter comes with a built-in test script recorder, also referred to as a proxy server (http://en.wikipedia.org/wiki/Proxy_server), to aid you in recording test plans. The test script recorder, once configured, watches your actions as you perform operations on a website, creates test sample objects for them, and eventually stores them in your test plan, which is a JMX file. In addition, JMeter gives you the option to create test plans manually, but this is mostly impractical for recording nontrivial testing scenarios. You will save a whole lot of time using the proxy recorder, as you will be seeing in a bit. So without further ado, in this article by Bayo Erinle, author of Performance Testing with JMeter - Second Edition, let's record our first test! For this, we will record the browsing of JMeter's own official website as a user will normally do. For the proxy server to be able to watch your actions, it will need to be configured. This entails two steps: Setting up the HTTP(S) Test Script Recorder within JMeter. Setting the browser to use the proxy. (For more resources related to this topic, see here.) Configuring the JMeter HTTP(S) Test Script Recorder The first step is to configure the proxy server in JMeter. To do this, we perform the following steps: Start JMeter. Add a thread group, as follows: Right-click on Test Plan and navigate to Add | Threads (User) | Thread Group. Add the HTTP(S) Test Script Recorder element, as follows: Right-click on WorkBench and navigate to Add | Non-Test Elements | HTTP(S) Test Script Recorder. Change the port to 7000 (1) (under Global Settings). You can use a different port, if you choose to. What is important is to choose a port that is not currently used by an existing process on the machine. The default is 8080. Under the Test plan content section, choose the option Test Plan > Thread Group (2) from the Target Controller drop-down. This allows the recorded actions to be targeted to the thread group we created in step 2. Under the Test plan content section, choose the option Put each group in a new transaction controller (3) from the Grouping drop-down. This allows you to group a series of requests constituting a page load. We will see more on this topic later. Click on Add suggested Excludes (under URL Patterns to Exclude). This instructs the proxy server to bypass recording requests of a series of elements that are not relevant to test execution. These include JavaScript files, stylesheets, and images. Thankfully, JMeter provides a handy button that excludes the often excluded elements. Click on the Start button at the bottom of the HTTP(S) Test Script Recorder component. Accept the Root CA certificate by clicking on the OK button. With these settings, the proxy server will start on port 7000, and monitor all requests going through that port and record them to a test plan using the default recording controller. For details, refer to the following screenshot: Configuring the JMeter HTTP(S) Test Script Recorder   In older versions of JMeter (before version 2.10), the now HTTP(S) Test Script Recorder was referred to as HTTP Proxy Server. While we have configured the HTTP(S) Test Script Recorder manually, the newer versions of JMeter (version 2.10 and later) come with prebundled templates that make commonly performed tasks, such as this, a lot easier. Using the bundled recorder template, we can set up the script recorder with just a few button clicks. To do this, click on the Templates…(1) button right next to the New file button on the toolbar. 
Then select Select Template as Recording (2). Change the port to your desired port (for example, 7000) and click on the Create (3) button. Refer to the following screenshot: Configuring the JMeter HTTP(S) Test Script Recorder through the template Recorder Setting up your browser to use the proxy server There are several ways to set up the browser of your choice to use the proxy server. We'll go over two of the most common ways, starting with my personal favorite, which is using a browser extension. Using a browser extension Google Chrome and Firefox have vibrant browser plugin ecosystems that allow you to extend the capabilities of your browser with each plugin that you choose. For setting up a proxy, I really like FoxyProxy (http://getfoxyproxy.org/). It is a neat add-on to the browser that allows you to set up various proxy settings and toggle between them on the fly without having to mess around with setting systems on the machine. It really makes the work hassle free. Thankfully, FoxyProxy has a plugin for Internet Explorer, Chrome, and Firefox. If you are using any of these, you are lucky! Go ahead and grab it! Changing the machine system settings For those who would rather configure the proxy natively on their operating system, we have provided the following steps for Windows and Mac OS. On Windows OS, perform the following steps for configuring a proxy: Click on Start, then click on Control Panel. Click on Network and Internet. Click on Internet Options. In the Internet Options dialog box, click on the Connections tab. Click on the Local Area Network (LAN) Settings button. To enable the use of a proxy server, select the checkbox for Use a proxy server for your LAN (These settings will not apply to dial-up or VPN connections), as shown in the following screenshot. In the proxy Address box, enter localhost in the IP address. In the Port number text box, enter 7000 (to match the port you set up for your JMeter proxy earlier). If you want to bypass the proxy server for local IP addresses, select the Bypass proxy server for local addresses checkbox. Click on OK to complete the proxy configuration process. Manually setting proxy on Windows 7 On Mac OS, perform the following steps to configure a proxy: Go to System Preference. Click on Network. Click on the Advanced… button. Go to the Proxies tab. Select the Web Proxy (HTTP) checkbox. Under Web Proxy Server, enter localhost. For port, enter 7000 (to match the port you set up for your JMeter proxy earlier). Do the same for Secure Web Proxy (HTTPS). Click on OK. Manually setting proxy on Mac OS For all other systems, please consult the related operating system documentation. Now that is all out of the way and the connections have been made, let's get to recording using the following steps: Point your browser to http://jmeter.apache.org/. Click on the Changes link under About. Click on the User Manual link under Documentation. Stop the HTTP(S) Test Script Recorder by clicking on the Stop button, so that it doesn't record any more activities. If you have done everything correctly, your actions will be recorded under the test plan. Refer to the following screenshot for details. Congratulations! You have just recorded your first test plan. Admittedly, we have just scrapped the surface of recording test plans, but we are off to a good start. 
Recording your first scenario Running your first recorded scenario We can go right ahead and replay or run our recorded scenario now, but before that let's add a listener or two to give us feedback on the results of the execution. There is no limit to the amount of listeners we can attach to a test plan, but we will often use only one or two. For our test plan, let's add three listeners for illustrative purposes. Let's add a Graph Results listener, a View Results Tree listener, and an Aggregate Report listener. Each listener gathers a different kind of metric that can help analyze performance test results as follows: Right-click on Test Plan and navigate to Add | Listener | View Results Tree. Right-click on Test Plan and navigate to Add | Listener | Aggregate Report. Right-click on Test Plan and navigate to Add | Listener | Graph Results. Just as we can see more interesting data, let's change some settings at the thread group level, as follows: Click on Thread Group. Under Thread Properties set the values as follows:     Number of Threads (users): 10     Ramp-Up Period (in seconds): 15     Loop Count: 30 This will set our test plan up to run for ten users, with all users starting their test within 15 seconds, and have each user perform the recorded scenario 30 times. Before we can proceed with test execution, save the test plan by clicking on the save icon. Once saved, click on the start icon (the green play icon on the menu) and watch the test run. As the test runs, you can click on the Graph Results listener (or any of the other two) and watch results gathering in real time. This is one of the many features of JMeter. From the Aggregate Report listener, we can deduce that there were 600 requests made to both the changes link and user manual links, respectively. Also, we can see that most users (90% Line) got very good responses below 200 milliseconds for both. In addition, we see what the throughput is per second for the various links and see that there were no errors during our test run. Results as seen through this Aggregate Report listener Looking at the View Results Tree listener, we can see exactly the changes link requests that failed and the reasons for their failure. This can be valuable information to developers or system engineers in diagnosing the root cause of the errors.   Results as seen via the View Results Tree Listener The Graph Results listener also gives a pictorial representation of what is seen in the View Tree listener in the preceding screenshot. If you click on it as the test goes on, you will see the graph get drawn in real time as the requests come in. The graph is a bit self-explanatory with lines representing the average, median, deviation, and throughput. The Average, Median, and Deviation all show average, median, and deviation of the number of samplers per minute, respectively, while the Throughput shows the average rate of network packets delivered over the network for our test run in bits per minute. Please consult a website, for example, Wikipedia for further detailed explanation on the precise meanings of these terms. The graph is also interactive and you can go ahead and uncheck/check any of the irrelevant/relevant data. For example, we mostly care about the average and throughput. Let's uncheck Data, Median, and Deviation and you will see that only the data plots for Average and Throughput remain. Refer to the following screenshot for details. With our little recorded scenario, you saw some major components that constitute a JMeter test plan. 
Let's record another scenario, this time using another application that will allow us to enter form values. Excilys Bank case study We'll borrow a website created by the wonderful folks at Excilys, a company focused on delivering skills and services in IT (http://www.excilys.com/). It's a light banking web application created for illustrative purposes. Let's start a new test plan, set up the test script recorder like we did previously, and start recording. Results as seen through this Graph Results Listener Let's start with the following steps: Point your browser to http://excilysbank.aws.af.cm/public/login.html. Enter the username and password in the login form, as follows: Username: user1 Password: password1 Click on the PERSONNAL CHECKING link. Click on the Transfers tab. Click on My Accounts. Click on the Joint Checking link. Click on the Transfers tab. Click on the Cards tab. Click on the Operations tab. Click on the Log out button. Stop the proxy server by clicking on the Stop button. This concludes our recorded scenario. At this point, we can add listeners for gathering results of our execution and then replay the recorded scenario as we did earlier. If we do, we will be in for a surprise (that is, if we don't use the bundled recorder template). We will have several failed requests after login, since we have not included the component to manage sessions and cookies needed to successfully replay this scenario. Thankfully, JMeter has such a component and it is called HTTP Cookie Manager. This seemingly simple, yet powerful component helps maintain an active session through HTTP cookies, once our client has established a connection with the server after login. It ensures that a cookie is stored upon successful authentication and passed around for subsequent requests, hence allowing those to go through. Each JMeter thread (that is, user) has its own cookie storage area. That is vital since you won't want a user gaining access to the site under another user's identity. This becomes more apparent when we test for websites requiring authentication and authorization (like the one we just recorded) for multiple users. Let's add this to our test plan by right-clicking on Test Plan and navigating to Add | Config Element | HTTP Cookie Manager. Once added, we can now successfully run our test plan. At this point, we can simulate more load by increasing the number of threads at the thread group level. Let's go ahead and do that. If executed, the test plan will now pass, but this is not realistic. We have just emulated one user, repeating five times essentially. All threads will use the credentials of user1, meaning that all threads log in to the system as user1. That is not what we want. To make the test realistic, what we want is each thread authenticating as a different user of the application. In reality, your bank creates a unique user for you, and only you or your spouse will be privileged to see your account details. Your neighbor down the street, if he used the same bank, won't get access to your account (at least we hope not!). So with that in mind, let's tweak the test to accommodate such a scenario. Parameterizing the script We begin by adding a CSV Data Set Config component (Test Plan | Add | Config Element | CSV Data Set Config) to our test plan. Since it is expensive to generate unique random values at runtime due to high CPU and memory consumption, it is advisable to define that upfront. 
The CSV Data Set Config component is used to read lines from a file and split them into variables that can then be used to feed input into the test plan. JMeter gives you a choice for the placement of this component within the test plan. You would normally add the component at the HTTP request level of the request that needs values fed from it. In our case, this will be the login HTTP request, where the username and password are entered. Another is to add it at the thread group level, that is, as a direct child of the thread group. If a particular dataset is applied to only a thread group, it makes sense to add it at this level. The third place where this component can be placed is at the Test Plan root level. If a dataset applies to all running threads, then it makes sense to add it at the root level. In our opinion, this also makes your test plans more readable and maintainable, as it is easier to see what is going on when inspecting or troubleshooting a test plan since this component can easily be seen at the root level rather than being deeply nested at other levels. So for our scenario, let's add this at the Test Plan root level. You can always move the components around using drag and drop even after adding them to the test plan. CSV Data Set Config Once added, the Filename entry is all that is needed if you have included headers in the input file. For example, if the input file is defined as follows: user, password, account_id user1, password1, 1 If the Variable Names field is left blank, then JMeter will use the first line of the input file as the variable names for the parameters. In cases where headers are not included, the variable names can be entered here. The other interesting setting here is Sharing mode. By default, this defaults to All threads, meaning all running threads will use the same set of data. So in cases where you have two threads running, Thread1 will use the first line as input data, while Thread2 will use the second line. If the number of running threads exceeds the input data then entries will be reused from the top of the file, provided that Recycle on EOF is set to True (the default). The other options for sharing modes include Current thread group and Current thread. Use the former for cases where the dataset is specific for a certain thread group and the latter for cases where the dataset is specific to each thread. The other properties of the component are self-explanatory and additional information can be found in JMeter's online user guide. Now that the component is added, we need to parameterize the login HTTP request with the variable names defined in our file (or the csvconfig component) so that the values can be dynamically bound during test execution. We do this by changing the value of the username to ${user} and password to ${password}, respectively, on the HTTP login request. The values between the ${} match the headers defined in the input file or the values specified in the Variable Names entry of the CSV Data Set Config component. Binding parameter values for HTTP requests We can now run our test plan and it should work as earlier, only this time the values are dynamically bound through the configuration we have set up. So far, we have run for a single user. Let's increase the thread group properties and run for ten users, with a ramp-up of 30 seconds, for one iteration. Now let's rerun our test. 
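For reference, the CSV input file for this scenario might look something like the following. The usernames, passwords, and account IDs shown here are purely illustrative; they must correspond to accounts that actually exist in the application under test:
user, password, account_id
user1, password1, 1
user2, password2, 2
user3, password3, 3
user4, password4, 4
user5, password5, 5
With Sharing mode left at the default of All threads, each running thread picks up the next available line, and because Recycle on EOF defaults to True, a run with more threads than lines simply reuses entries from the top of the file, as described earlier.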
Examining the test results, we notice some requests failed with a status code of 403 (http://en.wikipedia.org/wiki/HTTP_403), which is an access denied error. This is because we are trying to access an account that does not belong to the logged-in user. In our sample, all users made a request for account number 1, which only one user (user1) is allowed to see. You can trace this by adding a View Results Tree listener to the test plan and rerunning the test. If you closely examine some of the HTTP requests in the Request tab of the View Results Tree listener, you'll notice requests such as the following:
/private/bank/account/ACC1/operations.html
/private/bank/account/ACC1/year/2013/month/1/page/0/operations.json
…
Observant readers will have noticed that our input data file also contains an account_id column. We can leverage this column to parameterize all requests containing account numbers so that they pick the right accounts for each logged-in user. To do this, consider the following line of code:
/private/bank/account/ACC1/operations.html
Change this to the following line of code:
/private/bank/account/ACC${account_id}/operations.html
Now, consider the following line of code:
/private/bank/account/ACC1/year/2013/month/1/page/0/operations.json
Change this to the following line of code:
/private/bank/account/ACC${account_id}/year/2013/month/1/page/0/operations.json
Make similar changes to the rest of the requests. Go ahead and do this for all such requests. Once completed, we can rerun our test plan and, this time, things are logically correct and will work fine. You can also verify that all works as expected after the test execution by examining the View Results Tree listener, clicking on some of the account request URLs, and changing the response display from Text to HTML; you should see an account other than ACC1.
Summary
We have covered quite a lot in this article. You learned how to configure JMeter and your browser to help record test plans. In addition, you learned about some built-in components that can help us feed data into our test plan and/or extract data from server responses.
Resources for Article:
Further resources on this subject: Execution of Test Plans [article] Performance Testing Fundamentals [article] Data Acquisition and Mapping [article]

Managing Files, Folders, and Registry Items Using PowerShell

Packt
23 Apr 2015
27 min read
This article is written by Brenton J.W. Blawat, the author of Mastering Windows PowerShell Scripting. When you are automating tasks on servers and workstations, you will frequently run into situations where you need to manage files, folders, and registry items. PowerShell provides a wide variety of cmdlets that enable you to create, view, modify, and delete items on a system. (For more resources related to this topic, see here.) In this article, you will learn many techniques to interact with files, folders, and registry items. These techniques and items include:
Registry provider
Creating files, folders, registry keys, and registry named values
Adding named values to registry keys
Verifying the existence of files, folders, and registry keys
Renaming files, folders, registry keys, and named values
Copying and moving files and folders
Deleting files, folders, registry keys, and named values
To properly follow the examples in this article, you will need to execute the examples sequentially. Each example builds on the previous examples, and some of these examples may not function properly if you do not execute the previous steps.
Registry provider
When you're working with the registry, PowerShell interprets the registry in the same way it does files and folders. In fact, the cmdlets that you use for files and folders are the same ones that you would use for registry items. The only difference with the registry is the way in which you call the registry path locations. When you want to reference the registry in PowerShell, you use the [RegistryLocation]:\Path syntax. This is made available through the PowerShell Windows Registry Provider. While referencing [RegistryLocation]:\Path, PowerShell provides you with the ability to use registry abbreviations pertaining to registry path locations. Instead of referencing the full path of HKEY_LOCAL_MACHINE, you can use the abbreviation of HKLM. Some other abbreviations include:
HKLM: Abbreviation for the HKEY_LOCAL_MACHINE hive
HKCU: Abbreviation for the HKEY_CURRENT_USER hive
HKU: Abbreviation for the HKEY_USERS hive
HKCR: Abbreviation for the HKEY_CLASSES_ROOT hive
HKCC: Abbreviation for the HKEY_CURRENT_CONFIG hive
For example, if you wanted to reference the named values in the Run registry key for programs that start up on boot, the command line syntax would look like this:
HKLM:\Software\Microsoft\Windows\CurrentVersion\Run
While it is recommended that you don't use cmdlet aliases in your scripts, it is recommended, and a common practice, to use registry abbreviations in your code. This not only reduces the amount of effort required to create the scripts but also makes it easier for others to read the registry locations.
Creating files, folders, and registry items with PowerShell
When you want to create a new file, folder, or registry key, you will need to leverage the new-item cmdlet. The syntax of this command is new-item, calling the -path argument to specify the location, calling the -name argument to provide a name for the item, and the -ItemType argument to designate whether you want a file or a directory (folder). When you are creating a file, it has an additional argument of -value, which allows you to prepopulate data into the file after creation. When you are creating a new registry key in PowerShell, you can omit the -ItemType argument as it is not needed for registry key creation. PowerShell assumes that when you are interacting with the registry using new-item, you are creating registry keys.
The new-item command accepts the -force argument for instances where the file, folder, or key is being created in a space that is restricted by User Account Control (UAC). To create a new folder and registry item, do the following action:
New-item -path "c:\Program Files" -name MyCustomSoftware -ItemType Directory
New-item -path HKCU:\Software\MyCustomSoftware -force
The output of this is shown in the following screenshot:
The preceding example shows how you can create folders and registry keys for a custom application. You first create a new folder in c:\Program Files named MyCustomSoftware. You then create a new registry key in HKEY_CURRENT_USER\Software named MyCustomSoftware.
You start by issuing the new-item cmdlet followed by the -path argument to designate that the new folder should be placed in c:\Program Files. You then call the -name argument to specify the name of MyCustomSoftware. Finally, you tell the cmdlet that the -ItemType argument is Directory. After executing this command, you will see a new folder in c:\Program Files named MyCustomSoftware. You then create the new registry key by calling the new-item cmdlet, issuing the -path argument, and then specifying the HKCU:\Software\MyCustomSoftware key location; you complete it with the -force argument to force the creation of the key. After executing this command, you will see a new registry key in HKEY_CURRENT_USER\Software named MyCustomSoftware.
One of the main benefits of PowerShell breaking apart the -path, -name, and -value arguments is that you have the ability to customize each of the values before you use them with the new-item cmdlet. For example, if you want to name a log file with a date stamp, build that name in a string and set the -name value to that string. To create a log file with a date included in the filename, do the following action:
$logpath = "c:\Program Files\MyCustomSoftware\Logs\"
New-item -path $logpath -ItemType Directory | out-null
$itemname = (get-date -format "yyyyMMddmmss") + "MyLogFile.txt"
$itemvalue = "Starting Logging at: " + " " + (get-date)
New-item -path $logpath -name $itemname -ItemType File -value $itemvalue
$logfile = $logpath + $itemname
$logfile
The output of this is shown in the following screenshot:
The content of the log file is shown in the following screenshot:
The preceding example displays how you can properly create a new log file with a date/time stamp included in the log file name. It also shows how to create a new directory for the logs. It then displays how to include text inside the log file, designating the start of a new log file. Finally, this example displays how you can save the log file name and path in a variable to use later in your scripts.
You first start by declaring the path of c:\Program Files\MyCustomSoftware\Logs\ in the $logpath variable; note the trailing backslash, which allows the path and filename to be joined later in the script. You then use the new-item cmdlet to create a new folder in c:\Program Files\MyCustomSoftware named Logs. By piping the command to out-null, the default output of the directory creation is silenced. You then declare the name that you want the file to have by using the get-date cmdlet, with the -format argument set to yyyyMMddmmss, and by appending MyLogFile.txt. This generates a timestamp consisting of the four-digit year followed by the month, day, minutes, and seconds, with MyLogFile.txt appended. You then set the name of the file to the $itemname variable. Finally, you declare the $itemvalue variable, which contains Starting Logging at: and the standard PowerShell date and time information.
After the variables are populated, you issue the new-item command with the -path argument referencing the $logpath variable, the -name argument referencing the $itemname variable, the -ItemType argument referencing File, and the -value argument referencing the $itemvalue variable. At the end of the script, you take the $logpath and $itemname variables to create a new variable, $logfile, which contains the location of the log file. As you will see from this example, after you execute the script the log file is populated with a value such as Starting Logging at: 03/16/2015 14:38:24.
Adding named values to registry keys
When you are interacting with the registry, you typically view and edit named values or properties that are contained within the keys and sub-keys. PowerShell uses several cmdlets to interact with named values. The first is the get-itemproperty cmdlet, which allows you to retrieve the properties of a named value. The proper syntax for this cmdlet is to specify get-itemproperty, use the -path argument to specify the location in the registry, and use the -name argument to specify the named value.
The second cmdlet is new-itemproperty, which allows you to create new named values. The proper syntax for this cmdlet is specifying new-itemproperty, followed by the -path argument and the location where you want to create the new named value. You then specify the -name argument and the name you want to give the named value. Next, you use the -PropertyType argument, which allows you to specify what kind of registry named value you want to create. The PropertyType argument can be set to Binary, DWord, ExpandString, MultiString, String, or Qword, depending on what you need the registry value for. Finally, you specify the -value argument, which enables you to place a value into that named value. You may also use the -force overload to force the creation of the named value in the instance that the key may be restricted by UAC. To create a named value in the registry, do the following action:
$regpath = "HKCU:\Software\MyCustomSoftware"
$regname = "BuildTime"
$regvalue = "Build Started At: " + " " + (get-date)
New-itemproperty -path $regpath -name $regname -PropertyType String -value $regvalue
$verifyValue = Get-itemproperty -path $regpath -name $regname
Write-Host "The $regName named value is set to: " $verifyValue.$regname
The output of this is shown in the following screenshot:
After executing the script, the registry will look like the following screenshot:
This script displays how you can create a registry named value in a specific location. It also displays how you can retrieve a value and display it in the console. You first start by defining several variables. The first variable, $regpath, defines where you want to create the new named value, which is in the HKCU:\Software\MyCustomSoftware registry key. The second variable, $regname, defines what you want the new named value to be named, which is BuildTime. The third variable defines what you want the value of the named value to be, which is Build Started At: with the current date and time. The next step in the script is to create the new value. You first call the new-itemproperty cmdlet with the -path argument and specify the $regpath variable. You then use the -name argument and specify $regname. This is followed by the -PropertyType argument, specifying the String property type. Finally, you specify the -value argument and use the $regvalue variable to fill the named value with data.
Proceeding forward in the script, you verify that the named value has proper data by leveraging the get-itemproperty cmdlet. You first define the $verifyValue variable that captures the data from the cmdlet. You then issue get-itemproperty with the -path argument of $regpath and the -name argument of $regname. You then write to the console that the $regname named value is set to $verifyValue.$regname. When you are done with script execution, you should have a new registry named value of BuildTime in the HKEY_CURRENT_USER\Software\MyCustomSoftware key with a value similar to Build Started At: 03/16/2015 14:49:22.
Verifying files, folders, and registry items
When you are creating and modifying objects, it's important to check whether the file, folder, or registry item already exists before creating or modifying it. The test-path cmdlet allows you to test whether a file, folder, or registry item exists prior to working with it. The proper syntax for this is first calling test-path and then specifying a file, folder, or registry location. The result of the test-path command is True if the object exists or False if the object doesn't exist. To verify whether files, folders, and registry entries exist, do the following action:
$testfolder = test-path "c:\Program Files\MyCustomSoftware\Logs"
#Update the following line with the date/timestamp of your file
$testfile = test-path "c:\Program Files\MyCustomSoftware\Logs\201503163824MyLogFile.txt"
$testreg = test-path "HKCU:\Software\MyCustomSoftware"
If ($testfolder) { write-host "Folder Found!" }
If ($testfile) { write-host "File Found!" }
If ($testreg) { write-host "Registry Key Found!" }
The output is shown in the following screenshot:
This example displays how to verify whether a file, folder, or registry item exists. You first start by declaring a variable to catch the output from the test-path cmdlet. You then specify test-path, followed by a file, folder, or registry item whose existence you want to verify. In this example, you start by using the test-path cmdlet to verify whether the Logs folder is located in the c:\Program Files\MyCustomSoftware directory. You then store the result in the $testfolder variable. You then use the test-path cmdlet to check whether the file located at c:\Program Files\MyCustomSoftware\Logs\201503163824MyLogFile.txt exists. You then store the result in the $testfile variable. Finally, you use the test-path cmdlet to see whether the registry key HKCU:\Software\MyCustomSoftware exists. You then store the result in the $testreg variable. To evaluate the variables, you create If statements to check whether the variables are True and write to the console if the items are found. After executing the script, the console will output the messages Folder Found!, File Found!, and Registry Key Found!.
Copying and moving files and folders
When you are working in the operating system, there may be instances where you need to copy or move files and folders around. PowerShell provides two cmdlets to copy and move files. The copy-item cmdlet allows you to copy a file or a folder from one location to another. The proper syntax of this cmdlet is calling copy-item, followed by the -path argument for the source you want to copy and the -destination argument for the destination of the file or folder. The copy-item cmdlet also has the -force argument to write over a read-only or hidden file.
There are instances when read-only files cannot be overwritten, such as a lack of user permissions, which will require additional code to change the file attributes before copying over files or folders. The copy-item cmdlet also has a –recurse argument, which allows you to recursively copy the files in a folder and its subdirectories. A common trick to use with the copy-item cmdlet is to rename the item during the copy operation. To do this, you change the destination to the desired name you want the file or folder to have. After executing the command, the file or folder will have a new name in its destination. This reduces the number of steps required to copy and rename a file or folder. The move-item cmdlet allows you to move files from one location to another. The move-item cmdlet has the same syntax as the copy-item cmdlet. The proper syntax of this cmdlet is calling move-item, followed by the –path argument for the source you want to move and the –destination argument for the destination of the file or folder. The move-item cmdlet also has the –force argument to write over a read-only or hidden file. There are also instances when read-only files cannot be overwritten, such as a lack of user permissions, which will require additional code to change the file attributes before moving files or folders. The move-item cmdlet does not, however, have a –recurse argument. Also, it's important to remember that the move-item cmdlet requires the destination to be created prior to the move. If the destination folder is not available, it will throw an exception. It's recommended to use the test-path cmdlet in conjunction with the move-item cmdlet to verify that the destination exists prior to the move operation. PowerShell has the same file and folder limitations as the core operating system it is being run on. This means that file paths longer than 256 characters will receive an error message during the copy process. For paths that are over 256 characters in length, you need to leverage robocopy.exe or a similar file copy program to copy or move files. All move-item operations are recursive by default; you do not have to specify the –recurse argument to recursively move files. To copy files recursively, you need to specify the –recurse argument. To copy and move files and folders, do the following action:

New-item -path "c:\Program Files\MyCustomSoftware\AppTesting" -ItemType Directory | Out-null
New-item -path "c:\Program Files\MyCustomSoftware\AppTesting\Help" -ItemType Directory | Out-null
New-item -path "c:\Program Files\MyCustomSoftware\AppTesting" -name AppTest.txt -ItemType File | out-null
New-item -path "c:\Program Files\MyCustomSoftware\AppTesting\Help" -name HelpInformation.txt -ItemType File | out-null
New-item -path "c:\Program Files\MyCustomSoftware" -name ConfigFile.txt -ItemType File | out-null
move-item -path "c:\Program Files\MyCustomSoftware\AppTesting" -destination "c:\Program Files\MyCustomSoftware\Archive" -force
copy-item -path "c:\Program Files\MyCustomSoftware\ConfigFile.txt" "c:\Program Files\MyCustomSoftware\Archive\Archived_ConfigFile.txt" -force

The output of this is shown in the following screenshot:

This example displays how to properly use the copy-item and move-item cmdlets. You first start by using the new-item cmdlet with the –path argument set to c:\Program Files\MyCustomSoftware\AppTesting and the –ItemType argument set to Directory. You then pipe the command to out-null to suppress the default output.
This creates the AppTesting subdirectory in the c:\Program Files\MyCustomSoftware directory. You then create a second folder using the new-item cmdlet with the –path argument set to c:\Program Files\MyCustomSoftware\AppTesting\Help and the –ItemType argument set to Directory, and again pipe the command to out-null. This creates the Help subdirectory in the c:\Program Files\MyCustomSoftware\AppTesting directory. After creating the directories, you create a new file using the new-item cmdlet with the path of c:\Program Files\MyCustomSoftware\AppTesting, the –name argument set to AppTest.txt, and the –ItemType argument set to File; you then pipe it to out-null. You create a second file by using the new-item cmdlet with the path of c:\Program Files\MyCustomSoftware\AppTesting\Help, the –name argument set to HelpInformation.txt, and the –ItemType argument set to File, and then pipe it to out-null. Finally, you create a third file using the new-item cmdlet with the path of c:\Program Files\MyCustomSoftware, the –name argument set to ConfigFile.txt, and the –ItemType argument set to File, and then pipe it to out-null. After creating the files, you are ready to start copying and moving files. You first move the AppTesting directory to the Archive directory by using the move-item cmdlet, specifying the –path argument with the value of c:\Program Files\MyCustomSoftware\AppTesting as the source, the –destination argument with the value of c:\Program Files\MyCustomSoftware\Archive as the destination, and the –force argument to force the move if the directory is hidden. You then copy a configuration file by using the copy-item cmdlet, using the –path argument with c:\Program Files\MyCustomSoftware\ConfigFile.txt as the source, specifying the –destination argument with c:\Program Files\MyCustomSoftware\Archive\Archived_ConfigFile.txt as the new destination with a new filename, and then leveraging the –force argument to force the copy if the file is hidden. The following screenshot displays the file and folder hierarchy after executing this script:

After executing this script, the file and folder hierarchy should be as displayed in the preceding screenshot. This also shows that when you move the AppTesting directory to the Archive folder, the move is automatically performed recursively, keeping the file and folder structure intact.

Renaming files, folders, registry keys, and named values

When you are working with PowerShell scripts, you may have instances where you need to rename files, folders, and registry keys. The rename-item cmdlet can be used to perform renaming operations on a system. The syntax for this cmdlet is rename-item, specifying the –path argument with the path to the original object, and then calling the –newname argument with a full path to what you want the item to be renamed to. The rename-item cmdlet has a –force argument to force the rename in instances where the file or folder is hidden or restricted by UAC, or to avoid prompting for the rename action.
To copy and rename files and folders, do the following action:

New-item -path "c:\Program Files\MyCustomSoftware\OldConfigFiles" -ItemType Directory | out-null
Rename-item -path "c:\Program Files\MyCustomSoftware\OldConfigFiles" -newname "c:\Program Files\MyCustomSoftware\ConfigArchive" -force
copy-item -path "c:\Program Files\MyCustomSoftware\ConfigFile.txt" "c:\Program Files\MyCustomSoftware\ConfigArchive\ConfigFile.txt" -force
Rename-item -path "c:\Program Files\MyCustomSoftware\ConfigArchive\ConfigFile.txt" -newname "c:\Program Files\MyCustomSoftware\ConfigArchive\Old_ConfigFile.txt" -force

The output of this is shown in the following screenshot:

In this example, you create a script that creates a new folder, renames the folder, copies a file into it, and then renames that file. To start, you leverage the new-item cmdlet to create a new folder in c:\Program Files\MyCustomSoftware named OldConfigFiles. You then pipe that command to Out-Null, which silences the standard console output of the folder creation. You proceed to rename the folder c:\Program Files\MyCustomSoftware\OldConfigFiles with the rename-item cmdlet, using the –newname argument to rename it to c:\Program Files\MyCustomSoftware\ConfigArchive. You follow the command with the –force argument to force the renaming of the folder. You then leverage the copy-item cmdlet to copy ConfigFile.txt into the ConfigArchive directory. You do this by specifying the copy-item cmdlet with the –path argument set to c:\Program Files\MyCustomSoftware\ConfigFile.txt and the destination set to c:\Program Files\MyCustomSoftware\ConfigArchive\ConfigFile.txt, including the –force argument to force the copy. After copying the file, you leverage the rename-item cmdlet with the –path argument to rename c:\Program Files\MyCustomSoftware\ConfigArchive\ConfigFile.txt, using the –newname argument, to c:\Program Files\MyCustomSoftware\ConfigArchive\Old_ConfigFile.txt. You follow this command with the –force argument to force the renaming of the file. At the end of this script, you will have successfully renamed a folder, copied a file into that renamed folder, and renamed the file in the newly renamed folder. In the instance that you want to rename a registry key, do the following action:

New-item -path "HKCU:\Software\MyCustomSoftware" -name CInfo -force | out-null
Rename-item -path "HKCU:\Software\MyCustomSoftware\CInfo" -newname ConnectionInformation -force

The output of this is shown in the following screenshot:

After renaming the subkey, the registry will look like the following screenshot:

This example displays how to create a new subkey and rename it. You first start by using the new-item cmdlet to create a new subkey with the –path argument set to the HKCU:\Software\MyCustomSoftware key and the –name argument set to CInfo. You then pipe that line to out-null in order to suppress the standard output from the script. You proceed to execute the rename-item cmdlet with the –path argument set to HKCU:\Software\MyCustomSoftware\CInfo and the –newname argument set to ConnectionInformation. You then use the –force argument to force the renaming in instances when the subkey is restricted by UAC. After executing this command, you will see that the CInfo subkey located in HKCU:\Software\MyCustomSoftware is now renamed to ConnectionInformation. When you want to update named values in the registry, you will not be able to use the rename-item cmdlet. This is because the named values are properties of the keys themselves. Instead, PowerShell provides the rename-itemproperty cmdlet to rename the named values in a key.
The proper syntax for this cmdlet is calling rename-itemproperty with the –path argument, followed by the path to the key that contains the named value. You then issue the –name argument to specify the named value you want to rename. Finally, you specify the –newname argument and the name you want the named value to be renamed to. To rename a registry named value, do the following action:

$regpath = "HKCU:\Software\MyCustomSoftware\ConnectionInformation"
$regname = "DBServer"
$regvalue = "mySQLserver.mydomain.local"
New-itemproperty -path $regpath -name $regname -PropertyType String -value $regvalue | Out-null
Rename-itemproperty -path $regpath -name DBServer -newname DatabaseServer

The output of this is shown in the following screenshot:

After updating the named value, the registry will reflect this change, and so should look like the following screenshot:

The preceding script displays how to create a new named value and rename it to a different named value. You first start by defining the variables to be used with the new-itemproperty cmdlet. You define the location of the registry subkey in the $regpath variable and set it to HKCU:\Software\MyCustomSoftware\ConnectionInformation. You then specify the named value name of DBServer and store it in the $regname variable. Finally, you define the $regvalue variable and store the value of mySQLserver.mydomain.local. To create the new named value, you leverage new-itemproperty, specify the –path argument with the $regpath variable, use the –name argument with the $regname variable, set the –PropertyType argument to String, and use the –value argument with the $regvalue variable. You then pipe this command to out-null in order to suppress the default output of the command. This command creates the new named value of DBServer with the value of mySQLserver.mydomain.local in the HKCU:\Software\MyCustomSoftware\ConnectionInformation subkey. The last step in the script is renaming the DBServer named value to DatabaseServer. You first call the rename-itemproperty cmdlet, use the –path argument to specify the $regpath variable, which contains the HKCU:\Software\MyCustomSoftware\ConnectionInformation subkey, then call the –name argument with DBServer, and finally call the –newname argument with the new value of DatabaseServer. After executing this command, you will see that the HKCU:\Software\MyCustomSoftware\ConnectionInformation key has a new named value of DatabaseServer containing the same value of mySQLserver.mydomain.local.

Deleting files, folders, registry keys, and named values

When you are creating scripts, there are instances when you need to delete items from a computer. PowerShell has the remove-item cmdlet that enables the removal of objects from a computer. The syntax of this cmdlet starts by calling the remove-item cmdlet and proceeds with specifying the –path argument with a file, folder, or registry key to delete. The remove-item cmdlet has several useful arguments that can be leveraged. The –force argument is available to delete files, folders, and registry keys that are read-only, hidden, or restricted by UAC. The –recurse argument is available to enable recursive deletion of files, folders, and registry keys on a system. The –include argument enables you to delete specific files, folders, and registry keys, and it allows you to use the wildcard character of an asterisk (*) to search for specific values in an object name or a specific object type.
The –exclude argument will exclude specific files, folders, and registry keys on a system. It also accepts the wildcard character of an asterisk (*) to search for specific values in an object name or a specific object type. The named values in the registry are properties of the key that they are contained in. As a result, you cannot use the remove-item cmdlet to remove them. Instead, PowerShell offers the remove-itemproperty cmdlet to enable the removal of the named values. The remove-itemproperty cmdlet has arguments similar to those of the remove-item cmdlet. It is important to note, however, that the –filter, –include, and –exclude arguments will not work with named values in the registry. They only work with item paths such as registry keys. To set up the system for the deletion example, you need to process the following script:

# Create New Directory
new-item -path "c:\Program Files\MyCustomSoftware\Graphics" -ItemType Directory | Out-null

# Create Files for This Example
new-item -path "c:\Program Files\MyCustomSoftware\Graphics" -name FirstGraphic.bmp -ItemType File | Out-Null
new-item -path "c:\Program Files\MyCustomSoftware\Graphics" -name FirstGraphic.png -ItemType File | Out-Null
new-item -path "c:\Program Files\MyCustomSoftware\Graphics" -name SecondGraphic.bmp -ItemType File | Out-Null
new-item -path "c:\Program Files\MyCustomSoftware\Graphics" -name SecondGraphic.png -ItemType File | Out-Null
new-item -path "c:\Program Files\MyCustomSoftware\Logs" -name 201301010101LogFile.txt -ItemType File | Out-Null
new-item -path "c:\Program Files\MyCustomSoftware\Logs" -name 201302010101LogFile.txt -ItemType File | Out-Null
new-item -path "c:\Program Files\MyCustomSoftware\Logs" -name 201303010101LogFile.txt -ItemType File | Out-Null

# Create New Registry Keys and Named Values
New-item -path "HKCU:\Software\MyCustomSoftware\AppSettings" | Out-null
New-item -path "HKCU:\Software\MyCustomSoftware\ApplicationSettings" | Out-null
New-itemproperty -path "HKCU:\Software\MyCustomSoftware\ApplicationSettings" -name AlwaysOn -PropertyType String -value True | Out-null
New-itemproperty -path "HKCU:\Software\MyCustomSoftware\ApplicationSettings" -name AutoDeleteLogs -PropertyType String -value True | Out-null

The output of this is shown in the following screenshot:

The preceding script is designed to set up the file structure for the following example. You first use the new-item cmdlet to create a new directory called Graphics in c:\Program Files\MyCustomSoftware. You then use the new-item cmdlet to create new files named FirstGraphic.bmp, FirstGraphic.png, SecondGraphic.bmp, and SecondGraphic.png in the c:\Program Files\MyCustomSoftware\Graphics directory. You then use the new-item cmdlet to create new log files named 201301010101LogFile.txt, 201302010101LogFile.txt, and 201303010101LogFile.txt in c:\Program Files\MyCustomSoftware\Logs. After creating the files, you create two new registry keys located at HKCU:\Software\MyCustomSoftware\AppSettings and HKCU:\Software\MyCustomSoftware\ApplicationSettings. You then populate the HKCU:\Software\MyCustomSoftware\ApplicationSettings key with a named value of AlwaysOn set to True and a named value of AutoDeleteLogs set to True.
To remove files, folders, and registry items from a system, do the following action:

# Get Current Year
$currentyear = get-date -f yyyy

# Build the Exclude String
$exclude = "*" + $currentyear + "*"

# Remove Items from System
Remove-item -path "c:\Program Files\MyCustomSoftware\Graphics" -include *.bmp -force -recurse
Remove-item -path "c:\Program Files\MyCustomSoftware\Logs" -exclude $exclude -force -recurse
Remove-itemproperty -path "HKCU:\Software\MyCustomSoftware\ApplicationSettings" -Name AutoDeleteLogs
Remove-item -path "HKCU:\Software\MyCustomSoftware\ApplicationSettings"

The output of this is shown in the following screenshot:

This script displays how you can leverage PowerShell to clean up files and folders with the remove-item cmdlet and the –exclude and –include arguments. You first build the exclusion string for the remove-item cmdlet. You retrieve the current year by using the get-date cmdlet with the –f parameter set to yyyy and save the output into the $currentyear variable. You then create an $exclude variable that appends an asterisk to each end of the $currentyear variable, which contains the current year. This allows the exclusion filter to find the year anywhere in the file or folder names. In the first command, you use the remove-item cmdlet and call the –path argument with the path of c:\Program Files\MyCustomSoftware\Graphics. You then specify the –include argument with the value of *.bmp, which tells the remove-item cmdlet to delete all files that end in .bmp. You then specify –force to force the deletion of the files and –recurse to search the entire Graphics directory and delete the files that meet the *.bmp inclusion criteria, while leaving the other files you created with the .png extension. The second command leverages the remove-item cmdlet with the –path argument set to c:\Program Files\MyCustomSoftware\Logs. You use the –exclude argument with the value of $exclude to exclude files that contain the current year. You then specify –force to force the deletion of the files and –recurse to search the entire Logs directory and delete the files and folders that do not meet the exclusion criteria. The third command leverages the remove-itemproperty cmdlet with the –path argument set to HKCU:\Software\MyCustomSoftware\ApplicationSettings and the –name argument set to AutoDeleteLogs. After execution, the AutoDeleteLogs named value is deleted from the registry. The last command leverages the remove-item cmdlet with the –path argument set to HKCU:\Software\MyCustomSoftware\ApplicationSettings. After running this last command, the entire ApplicationSettings subkey is removed from HKCU:\Software\MyCustomSoftware. After executing this script, you will see that the script deletes the .bmp files in the c:\Program Files\MyCustomSoftware\Graphics directory but leaves the .png files. You will also see that the script deletes all of the log files except the ones that contain the current year. Last, you will see that the ApplicationSettings subkey that was created in the previous step is successfully deleted from HKCU:\Software\MyCustomSoftware. When you use the remove-item cmdlet and the –recurse parameter together, it is important to note that if the remove-item cmdlet deletes all the files and folders in a directory, the –recurse parameter will also delete the empty folder and subfolders that contained those files. This is only true when there are no remaining files in the folders in a particular directory.
This may create undesirable results on your system, so you should use caution when combining these parameters.

Summary

This article thoroughly explained the interaction of PowerShell with files, folders, and registry objects. It began by showing how to create a folder and a registry key by leveraging the new-item cmdlet, and it also displayed the additional arguments that can be used with the new-item cmdlet to create a log file with the date and time integrated in the filename. The article proceeded to show how to create and view a registry key property using the get-itemproperty and new-itemproperty cmdlets. It then moved on to the verification of files, folders, and registry items through the test-path cmdlet. By using this cmdlet, you can test whether an object exists prior to interacting with it. You also learned how to copy and move files and folders by leveraging the copy-item and move-item cmdlets, and how to rename files, folders, registry keys, and registry properties with the rename-item and rename-itemproperty cmdlets. The article ended by showing how to delete files, folders, and registry items by leveraging the remove-item and remove-itemproperty cmdlets. Resources for Article: Further resources on this subject: Azure Storage [article] Hyper-V Basics [article] Unleashing Your Development Skills with PowerShell [article]

Light Speed Unit Testing

Packt
23 Apr 2015
6 min read
In this article by Paulo Ragonha, author of the book Jasmine JavaScript Testing - Second Edition, we will learn Jasmine stubs and Jasmine Ajax plugin. (For more resources related to this topic, see here.) Jasmine stubs We use stubs whenever we want to force a specific path in our specs or replace a real implementation for a simpler one. Let's take the example of the acceptance criteria, "Stock when fetched, should update its share price", by writing it using Jasmine stubs. The stock's fetch function is implemented using the $.getJSON function, as follows: Stock.prototype.fetch = function(parameters) { $.getJSON(url, function (data) {    that.sharePrice = data.sharePrice;    success(that); }); }; We could use the spyOn function to set up a spy on the getJSON function with the following code: describe("when fetched", function() { beforeEach(function() {    spyOn($, 'getJSON').and.callFake(function(url, callback) {      callback({ sharePrice: 20.18 });    });    stock.fetch(); });   it("should update its share price", function() {    expect(stock.sharePrice).toEqual(20.18); }); }); We will use the and.callFake function to set a behavior to our spy (by default, a spy does nothing and returns undefined). We make the spy invoke its callback parameter with an object response ({ sharePrice: 20.18 }). Later, at the expectation, we use the toEqual assertion to verify that the stock's sharePrice has changed. To run this spec, you no longer need a server to make the requests to, which is a good thing, but there is one issue with this approach. If the fetch function gets refactored to use $.ajax instead of $.getJSON, then the test will fail. A better solution, provided by a Jasmine plugin called jasmine-ajax, is to stub the browser's AJAX infrastructure instead, so the implementation of the AJAX request is free to be done in different manners. Jasmine Ajax Jasmine Ajax is an official plugin developed to help out the testing of AJAX requests. It changes the browser's AJAX request infrastructure to a fake implementation. This fake (or mocked) implementation, although simpler, still behaves like the real implementation to any code using its API. Installing the plugin Before we dig into the spec implementation, first we need to add the plugin to the project. Go to https://github.com/jasmine/jasmine-ajax/ and download the current release (which should be compatible with the Jasmine 2.x release). Place it inside the lib folder. It is also needed to be added to the SpecRunner.html file, so go ahead and add another script: <script type="text/javascript" src="lib/mock-ajax.js"></script> A fake XMLHttpRequest Whenever you are using jQuery to make AJAX requests, under the hood it is actually using the XMLHttpRequest object to perform the request. XMLHttpRequest is the standard JavaScript HTTP API. Even though its name suggests that it uses XML, it supports other types of content such as JSON; the name has remained the same for compatibility reasons. So, instead of stubbing jQuery, we could change the XMLHttpRequest object with a fake implementation. That is exactly what this plugin does. 
Let's rewrite the previous spec to use this fake implementation: describe("when fetched", function() { beforeEach(function() {    jasmine.Ajax.install(); });   beforeEach(function() {    stock.fetch();      jasmine.Ajax.requests.mostRecent().respondWith({      'status': 200,      'contentType': 'application/json',      'responseText': '{ "sharePrice": 20.18 }'    }); });   afterEach(function() {    jasmine.Ajax.uninstall(); });   it("should update its share price", function() {    expect(stock.sharePrice).toEqual(20.18); }); }); Drilling the implementation down: First, we tell the plugin to replace the original implementation of the XMLHttpRequest object by a fake implementation using the jasmine.Ajax.install function. We then invoke the stock.fetch function, which will invoke $.getJSON, creating XMLHttpRequest anew under the hood. And finally, we use the jasmine.Ajax.requests.mostRecent().respondWith function to get the most recently made request and respond to it with a fake response. We use the respondWith function, which accepts an object with three properties: The status property to define the HTTP status code. The contentType (JSON in the example) property. The responseText property, which is a text string containing the response body for the request. Then, it's all a matter of running the expectations: it("should update its share price", function() { expect(stock.sharePrice).toEqual(20.18); }); Since the plugin changes the global XMLHttpRequest object, you must remember to tell Jasmine to restore it to its original implementation after the test runs; otherwise, you could interfere with the code from other specs (such as the Jasmine jQuery fixtures module). Here's how you can accomplish this: afterEach(function() { jasmine.Ajax.uninstall(); }); There is also a slightly different approach to write this spec; here, the request is first stubbed (with the response details) and the code to be exercised is executed later. The previous example is changed to the following: beforeEach(function() {jasmine.Ajax.stubRequest('http://localhost:8000/stocks/AOUE').andReturn({    'status': 200,    'contentType': 'application/json',    'responseText': '{ "sharePrice": 20.18 }' });   stock.fetch(); }); It is possible to use the jasmine.Ajax.stubRequest function to stub any request to a specific request. In the example, it is defined by the URL http://localhost:8000/stocks/AOUE, and the response definition is as follows: { 'status': 200, 'contentType': 'application/json', 'responseText': '{ "sharePrice": 20.18 }' } The response definition follows the same properties as the previously used respondWith function. Summary In this article, you learned how asynchronous tests can hurt the quick feedback loop you can get with unit testing. I showed how you can use either stubs or fakes to make your specs run quicker and with fewer dependencies. We have seen two different ways in which you could test AJAX requests with a simple Jasmine stub and with the more advanced, fake implementation of the XMLHttpRequest. You also got more familiar with spies and stubs and should be more comfortable using them in different scenarios. Resources for Article: Further resources on this subject: Optimizing JavaScript for iOS Hybrid Apps [article] Working with Blender [article] Category Theory [article]

Using Mock Objects to Test Interactions

Packt
23 Apr 2015
25 min read
In this article by Siddharta Govindaraj, author of the book Test-Driven Python Development, we will look at the Event class. The Event class is very simple: receivers can register with the event to be notified when the event occurs. When the event fires, all the receivers are notified of the event. (For more resources related to this topic, see here.) A more detailed description is as follows: Event classes have a connect method, which takes a method or function to be called when the event fires When the fire method is called, all the registered callbacks are called with the same parameters that are passed to the fire method Writing tests for the connect method is fairly straightforward—we just need to check that the receivers are being stored properly. But, how do we write the tests for the fire method? This method does not change any state or store any value that we can assert on. The main responsibility of this method is to call other methods. How do we test that this is being done correctly? This is where mock objects come into the picture. Unlike ordinary unit tests that assert on object state, mock objects are used to test that the interactions between multiple objects occurs as it should. Hand writing a simple mock To start with, let us look at the code for the Event class so that we can understand what the tests need to do. The following code is in the file event.py in the source directory: class Event:    """A generic class that provides signal/slot functionality"""      def __init__(self):        self.listeners = []      def connect(self, listener):        self.listeners.append(listener)      def fire(self, *args, **kwargs):        for listener in self.listeners:            listener(*args, **kwargs) The way this code works is fairly simple. Classes that want to get notified of the event should call the connect method and pass a function. This will register the function for the event. Then, when the event is fired using the fire method, all the registered functions will be notified of the event. The following is a walk-through of how this class is used: >>> def handle_event(num): ...   print("I got number {0}".format(num)) ... >>> event = Event() >>> event.connect(handle_event) >>> event.fire(3) I got number 3 >>> event.fire(10) I got number 10 As you can see, every time the fire method is called, all the functions that registered with the connect method get called with the given parameters. So, how do we test the fire method? The walk-through above gives a hint. What we need to do is to create a function, register it using the connect method, and then verify that the method got notified when the fire method was called. The following is one way to write such a test: import unittest from ..event import Event   class EventTest(unittest.TestCase):    def test_a_listener_is_notified_when_an_event_is_raised(self):        called = False        def listener():            nonlocal called            called = True          event = Event()        event.connect(listener)        event.fire()        self.assertTrue(called) Put this code into the test_event.py file in the tests folder and run the test. The test should pass. The following is what we are doing: First, we create a variable named called and set it to False. Next, we create a dummy function. When the function is called, it sets called to True. Finally, we connect the dummy function to the event and fire the event. 
If the dummy function was successfully called when the event was fired, then the called variable would be changed to True, and we assert that the variable is indeed what we expected. The dummy function we created above is an example of a mock. A mock is simply an object that is substituted for a real object in the test case. The mock then records some information such as whether it was called, what parameters were passed, and so on, and we can then assert that the mock was called as expected. Talking about parameters, we should write a test that checks that the parameters are being passed correctly. The following is one such test:    def test_a_listener_is_passed_right_parameters(self):        params = ()        def listener(*args, **kwargs):            nonlocal params            params = (args, kwargs)        event = Event()        event.connect(listener)        event.fire(5, shape="square")        self.assertEquals(((5, ), {"shape":"square"}), params) This test is the same as the previous one, except that it saves the parameters that are then used in the assert to verify that they were passed properly. At this point, we can see some repetition coming up in the way we set up the mock function and then save some information about the call. We can extract this code into a separate class as follows: class Mock:    def __init__(self):        self.called = False        self.params = ()      def __call__(self, *args, **kwargs):        self.called = True        self.params = (args, kwargs) Once we do this, we can use our Mock class in our tests as follows: class EventTest(unittest.TestCase):    def test_a_listener_is_notified_when_an_event_is_raised(self):        listener = Mock()        event = Event()        event.connect(listener)        event.fire()        self.assertTrue(listener.called)      def test_a_listener_is_passed_right_parameters(self):        listener = Mock()        event = Event()        event.connect(listener)        event.fire(5, shape="square")        self.assertEquals(((5, ), {"shape": "square"}),             listener.params) What we have just done is to create a simple mocking class that is quite lightweight and good for simple uses. However, there are often times when we need much more advanced functionality, such as mocking a series of calls or checking the order of specific calls. Fortunately, Python has us covered with the unittest.mock module that is supplied as a part of the standard library. Using the Python mocking framework The unittest.mock module provided by Python is an extremely powerful mocking framework, yet at the same time it is very easy to use. Let us redo our tests using this library. First, we need to import the mock module at the top of our file as follows: from unittest import mock Next, we rewrite our first test as follows: class EventTest(unittest.TestCase):    def test_a_listener_is_notified_when_an_event_is_raised(self):        listener = mock.Mock()        event = Event()        event.connect(listener)        event.fire()        self.assertTrue(listener.called) The only change that we've made is to replace our own custom Mock class with the mock.Mock class provided by Python. That is it. With that single line change, our test is now using the inbuilt mocking class. The unittest.mock.Mock class is the core of the Python mocking framework. All we need to do is to instantiate the class and pass it in where it is required. The mock will record if it was called in the called instance variable. How do we check that the right parameters were passed? 
Let us look at the rewrite of the second test as follows:    def test_a_listener_is_passed_right_parameters(self):        listener = mock.Mock()        event = Event()        event.connect(listener)        event.fire(5, shape="square")        listener.assert_called_with(5, shape="square") The mock object automatically records the parameters that were passed in. We can assert on the parameters by using the assert_called_with method on the mock object. The method will raise an assertion error if the parameters don't match what was expected. In case we are not interested in testing the parameters (maybe we just want to check that the method was called), then we can pass the value mock.ANY. This value will match any parameter passed. There is a subtle difference in the way normal assertions are called compared to assertions on mocks. Normal assertions are defined as a part of the unittest.Testcase class. Since our tests inherit from that class, we call the assertions on self, for example, self.assertEquals. On the other hand, the mock assertion methods are a part of the mock object, so you call them on the mock object, for example, listener.assert_called_with. Mock objects have the following four assertions available out of the box: assert_called_with: This method asserts that the last call was made with the given parameters assert_called_once_with: This assertion checks that the method was called exactly once and was with the given parameters assert_any_call: This checks that the given call was made at some point during the execution assert_has_calls: This assertion checks that a list of calls occurred The four assertions are very subtly different, and that shows up when the mock has been called more than one. The assert_called_with method only checks the last call, so if there was more than one call, then the previous calls will not be asserted. The assert_any_call method will check if a call with the given parameters occurred anytime during execution. The assert_called_once_with assertion asserts for a single call, so if the mock was called more than once during execution, then this assert would fail. The assert_has_calls assertion can be used to assert that a set of calls with the given parameters occurred. Note that there might have been more calls than what we checked for in the assertion, but the assertion would still pass as long as the given calls are present. Let us take a closer look at the assert_has_calls assertion. Here is how we can write the same test using this assertion:    def test_a_listener_is_passed_right_parameters(self):        listener = mock.Mock()        event = Event()        event.connect(listener)        event.fire(5, shape="square")        listener.assert_has_calls([mock.call(5, shape="square")]) The mocking framework internally uses _Call objects to record calls. The mock.call function is a helper to create these objects. We just call it with the expected parameters to create the required call objects. We can then use these objects in the assert_has_calls assertion to assert that the expected call occurred. This method is useful when the mock was called multiple times and we want to assert only some of the calls. Mocking objects While testing the Event class, we only needed to mock out single functions. A more common use of mocking is to mock a class. 
Take a look at the implementation of the Alert class in the following: class Alert:    """Maps a Rule to an Action, and triggers the action if the rule    matches on any stock update"""      def __init__(self, description, rule, action):        self.description = description        self.rule = rule        self.action = action      def connect(self, exchange):        self.exchange = exchange        dependent_stocks = self.rule.depends_on()        for stock in dependent_stocks:            exchange[stock].updated.connect(self.check_rule)      def check_rule(self, stock):        if self.rule.matches(self.exchange):            self.action.execute(self.description) Let's break down how this class works as follows: The Alert class takes a Rule and an Action in the initializer. When the connect method is called, it takes all the dependent stocks and connects to their updated event. The updated event is an instance of the Event class that we saw earlier. Each Stock class has an instance of this event, and it is fired whenever a new update is made to that stock. The listener for this event is the self.check_rule method of the Alert class. In this method, the alert checks if the new update caused a rule to be matched. If the rule matched, it calls the execute method on the Action. Otherwise, nothing happens. This class has a few requirements, as shown in the following, that need to be met. Each of these needs to be made into a unit test. If a stock is updated, the class should check if the rule matches If the rule matches, then the corresponding action should be executed If the rule doesn't match, then nothing happens There are a number of different ways in which we could test this; let us go through some of the options. The first option is not to use mocks at all. We could create a rule, hook it up to a test action, and then update the stock and verify that the action was executed. The following is what such a test would look like: import unittest from datetime import datetime from unittest import mock   from ..alert import Alert from ..rule import PriceRule from ..stock import Stock   class TestAction:    executed = False      def execute(self, description):        self.executed = True   class AlertTest(unittest.TestCase):    def test_action_is_executed_when_rule_matches(self):        exchange = {"GOOG": Stock("GOOG")}        rule = PriceRule("GOOG", lambda stock: stock.price > 10)       action = TestAction()        alert = Alert("sample alert", rule, action)        alert.connect(exchange)        exchange["GOOG"].update(datetime(2014, 2, 10), 11)        self.assertTrue(action.executed) This is the most straightforward option, but it requires a bit of code to set up and there is the TestAction that we need to create just for the test case. Instead of creating a test action, we could instead replace it with a mock action. We can then simply assert on the mock that it got executed. The following code shows this variation of the test case:    def test_action_is_executed_when_rule_matches(self):        exchange = {"GOOG": Stock("GOOG")}        rule = PriceRule("GOOG", lambda stock: stock.price > 10)        action = mock.MagicMock()       alert = Alert("sample alert", rule, action)        alert.connect(exchange)        exchange["GOOG"].update(datetime(2014, 2, 10), 11)        action.execute.assert_called_with("sample alert") A couple of observations about this test: If you notice, alert is not the usual Mock object that we have been using so far, but a MagicMock object. 
A MagicMock object is like a Mock object but it has special support for Python's magic methods which are present on all classes, such as __str__, hasattr. If we don't use MagicMock, we may sometimes get errors or strange behavior if the code uses any of these methods. The following example illustrates the difference: >>> from unittest import mock >>> mock_1 = mock.Mock() >>> mock_2 = mock.MagicMock() >>> len(mock_1) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: object of type 'Mock' has no len() >>> len(mock_2) 0 >>>  In general, we will be using MagicMock in most places where we need to mock a class. Using Mock is a good option when we need to mock stand alone functions, or in rare situations where we specifically don't want a default implementation for the magic methods. The other observation about the test is the way methods are handled. In the test above, we created a mock action object, but we didn't specify anywhere that this mock class should contain an execute method and how it should behave. In fact, we don't need to. When a method or attribute is accessed on a mock object, Python conveniently creates a mock method and adds it to the mock class. Therefore, when the Alert class calls the execute method on our mock action object, that method is added to our mock action. We can then check that the method was called by asserting on action.execute.called. The downside of Python's behavior of automatically creating mock methods when they are accessed is that a typo or change in interface can go unnoticed. For example, suppose we rename the execute method in all the Action classes to run. But if we run our test cases, it still passes. Why does it pass? Because the Alert class calls the execute method, and the test only checks that the execute method was called, which it was. The test does not know that the name of the method has been changed in all the real Action implementations and that the Alert class will not work when integrated with the actual actions. To avoid this problem, Python supports using another class or object as a specification. When a specification is given, the mock object only creates the methods that are present in the specification. All other method or attribute accesses will raise an error. Specifications are passed to the mock at initialization time via the spec parameter. Both the Mock as well as MagicMock classes support setting a specification. The following code example shows the difference when a spec parameter is set compared to a default Mock object: >>> from unittest import mock >>> class PrintAction: ...     def run(self, description): ...         print("{0} was executed".format(description)) ...   >>> mock_1 = mock.Mock() >>> mock_1.execute("sample alert") # Does not give an error <Mock name='mock.execute()' id='54481752'>   >>> mock_2 = mock.Mock(spec=PrintAction) >>> mock_2.execute("sample alert") # Gives an error Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:Python34libunittestmock.py", line 557, in __getattr__    raise AttributeError("Mock object has no attribute %r" % name) AttributeError: Mock object has no attribute 'execute' Notice in the above example that mock_1 goes ahead and executes the execute method without any error, even though the method has been renamed in the PrintAction. On the other hand, by giving a spec, the method call to the nonexistent execute method raises an exception. 
Mocking return values The second variant above showed how we could use a mock Action class in the test instead of a real one. In the same way, we can also use a mock rule instead of creating a PriceRule in the test. The alert calls the rule to see whether the new stock update caused the rule to be matched. What the alert does depends on whether the rule returned True or False. All the mocks we've created so far have not had to return a value. We were just interested in whether the right call was made or not. If we mock the rule, then we will have to configure it to return the right value for the test. Fortunately, Python makes that very simple to do. All we have to do is to set the return value as a parameter in the constructor to the mock object as follows: >>> matches = mock.Mock(return_value=True) >>> matches() True >>> matches(4) True >>> matches(4, "abcd") True As we can see above, the mock just blindly returns the set value, irrespective of the parameters. Even the type or number of parameters is not considered. We can use the same procedure to set the return value of a method in a mock object as follows: >>> rule = mock.MagicMock() >>> rule.matches = mock.Mock(return_value=True) >>> rule.matches() True >>>  There is another way to set the return value, which is very convenient when dealing with methods in mock objects. Each mock object has a return_value attribute. We simply set this attribute to the return value and every call to the mock will return that value, as shown in the following: >>> from unittest import mock >>> rule = mock.MagicMock() >>> rule.matches.return_value = True >>> rule.matches() True >>>  In the example above, the moment we access rule.matches, Python automatically creates a mock matches object and puts it in the rule object. This allows us to directly set the return value in one statement without having to create a mock for the matches method. Now that we've seen how to set the return value, we can go ahead and change our test to use a mocked rule object, as shown in the following:    def test_action_is_executed_when_rule_matches(self):        exchange = {"GOOG": Stock("GOOG")}        rule = mock.MagicMock(spec=PriceRule)        rule.matches.return_value = True        rule.depends_on.return_value = {"GOOG"}        action = mock.MagicMock()        alert = Alert("sample alert", rule, action)        alert.connect(exchange)        exchange["GOOG"].update(datetime(2014, 2, 10), 11)        action.execute.assert_called_with("sample alert") There are two calls that the Alert makes to the rule: one to the depends_on method and the other to the matches method. We set the return value for both of them and the test passes. In case no return value is explicitly set for a call, the default return value is to return a new mock object. The mock object is different for each method that is called, but is consistent for a particular method. This means if the same method is called multiple times, the same mock object will be returned each time. Mocking side effects Finally, we come to the Stock class. This is the final dependency of the Alert class. We're currently creating Stock objects in our test, but we could replace it with a mock object just like we did for the Action and PriceRule classes. The Stock class is again slightly different in behavior from the other two mock objects. The update method doesn't just return a value—it's primary behavior in this test is to trigger the updated event. Only if this event is triggered will the rule check occur. 
In order to do this, we must tell our mock stock class to fire the event when the update event is called. Mock objects have a side_effect attribute to enable us to do just this. There are many reasons we might want to set a side effect. Some of them are as follows: We may want to call another method, like in the case of the Stock class, which needs to fire the event when the update method is called. To raise an exception: this is particularly useful when testing error situations. Some errors such as a network timeout might be very difficult to simulate, and it is better to test using a mock that simply raises the appropriate exception. To return multiple values: these may be different values each time the mock is called, or specific values, depending on the parameters passed. Setting the side effect is just like setting the return value. The only difference is that the side effect is a lambda function. When the mock is executed, the parameters are passed to the lambda function and the lambda is executed. The following is how we would use this with a mocked out Stock class:    def test_action_is_executed_when_rule_matches(self):        goog = mock.MagicMock(spec=Stock)        goog.updated = Event()        goog.update.side_effect = lambda date, value:                goog.updated.fire(self)        exchange = {"GOOG": goog}      rule = mock.MagicMock(spec=PriceRule)        rule.matches.return_value = True        rule.depends_on.return_value = {"GOOG"}        action = mock.MagicMock()        alert = Alert("sample alert", rule, action)        alert.connect(exchange)         exchange["GOOG"].update(datetime(2014, 2, 10), 11)        action.execute.assert_called_with("sample alert") So what is going on in that test? First, we create a mock of the Stock class instead of using the real one. Next, we add in the updated event. We need to do this because the Stock class creates the attribute at runtime in the __init__ scope. Because the attribute is set dynamically, MagicMock does not pick up the attribute from the spec parameter. We are setting an actual Event object here. We could set it as a mock as well, but it is probably overkill to do that. Finally, we set the side effect for the update method in the mock stock object. The lambda takes the two parameters that the method does. In this particular example, we just want to fire the event, so the parameters aren't used in the lambda. In other cases, we might want to perform different actions based on the values of the parameters. Setting the side_effect attribute allows us to do that. Just like with the return_value attribute, the side_effect attribute can also be set in the constructor. Run the test and it should pass. The side_effect attribute can also be set to an exception or a list. If it is set to an exception, then the given exception will be raised when the mock is called, as shown in the following: >>> m = mock.Mock() >>> m.side_effect = Exception() >>> m() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:Python34libunittestmock.py", line 885, in __call__    return _mock_self._mock_call(*args, **kwargs) File "C:Python34libunittestmock.py", line 941, in _mock_call    raise effect Exception If it is set to a list, then the mock will return the next element of the list each time it is called. 
This is a good way to mock a function that has to return different values each time it is called, as shown in the following: >>> m = mock.Mock() >>> m.side_effect = [1, 2, 3] >>> m() 1 >>> m() 2 >>> m() 3 >>> m() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:Python34libunittestmock.py", line 885, in __call__    return _mock_self._mock_call(*args, **kwargs) File "C:Python34libunittestmock.py", line 944, in _mock_call    result = next(effect) StopIteration As we have seen, the mocking framework's method of handling side effects using the side_effect attribute is very simple, yet quite powerful. How much mocking is too much? In the previous few sections, we've seen the same test written with different levels of mocking. We started off with a test that didn't use any mocks at all, and subsequently mocked out each of the dependencies one by one. Which one of these solutions is the best? As with many things, this is a point of personal preference. A purist would probably choose to mock out all dependencies. My personal preference is to use real objects when they are small and self-contained. I would not have mocked out the Stock class. This is because mocks generally require some configuration with return values or side effects, and this configuration can clutter the test and make it less readable. For small, self-contained classes, it is simpler to just use the real object. At the other end of the spectrum, classes that might interact with external systems, or that take a lot of memory, or are slow are good candidates for mocking out. Additionally, objects that require a lot of dependencies on other object to initialize are candidates for mocking. With mocks, you just create an object, pass it in, and assert on parts that you are interested in checking. You don't have to create an entirely valid object. Even here there are alternatives to mocking. For example, when dealing with a database, it is common to mock out the database calls and hardcode a return value into the mock. This is because the database might be on another server, and accessing it makes the tests slow and unreliable. However, instead of mocks, another option could be to use a fast in-memory database for the tests. This allows us to use a live database instead of a mocked out database. Which approach is better depends on the situation. Mocks versus stubs versus fakes versus spies We've been talking about mocks so far, but we've been a little loose on the terminology. Technically, everything we've talked about falls under the category of a test double. A test double is some sort of fake object that we use to stand in for a real object in a test case. Mocks are a specific kind of test double that record information about calls that have been made to it, so that we can assert on them later. Stubs are just an empty do-nothing kind of object or method. They are used when we don't care about some functionality in the test. For example, imagine we have a method that performs a calculation and then sends an e-mail. If we are testing the calculation logic, we might just replace the e-mail sending method with an empty do-nothing method in the test case so that no e-mails are sent out while the test is running. Fakes are a replacement of one object or system with a simpler one that facilitates easier testing. Using an in-memory database instead of the real one, or the way we created a dummy TestAction earlier in this article would be examples of fakes. Finally, spies are objects that are like middlemen. 
Like mocks, they record the calls so that we can assert on them later, but after recording, they continue execution to the original code. Spies are different from the other three in the sense that they do not replace any functionality. After recording the call, the real code is still executed. Spies sit in the middle and do not cause any change in execution pattern. Summary In this article, you looked at how to use mocks to test interactions between objects. You saw how to hand write our own mocks, followed by using the mocking framework provided in the Python standard library. Resources for Article: Further resources on this subject: Analyzing a Complex Dataset [article] Solving problems – closest good restaurant [article] Importing Dynamic Data [article]

Solr Indexing Internals

Packt
23 Apr 2015
9 min read
In this article by Jayant Kumar, author of the book Apache Solr Search Patterns, we will discuss use cases for Solr in e-commerce and job sites. We will look at the problems faced while providing search in an e-commerce or job site:

The e-commerce problem statement
The job site problem statement
Challenges of large-scale indexing

(For more resources related to this topic, see here.)

The e-commerce problem statement

E-commerce provides an easy way to sell products to a large customer base. However, there is a lot of competition among e-commerce sites. When users land on an e-commerce site, they expect to find what they are looking for quickly and easily. Often, users are not sure about the brands or the actual products they want to purchase; they have only a broad idea of what they want to buy. Many customers nowadays search for their products on Google rather than visiting specific e-commerce sites, believing that Google will take them to the e-commerce sites that have their product.

The purpose of any e-commerce website is to help customers narrow down their broad ideas and enable them to finalize the products they want to purchase. For example, suppose a customer is interested in purchasing a mobile. His or her search for a mobile should list mobile brands, operating systems, screen sizes, and all other features as facets. As the customer selects more and more features or options from the facets provided, the search narrows down to a small list of mobiles that suit his or her choice. If the list is small enough and the customer likes one of the mobiles listed, he or she will make the purchase.

The challenge is that each category will have a different set of facets to be displayed. For example, searching for books should display their format (paperback or hardcover), author name, book series, language, and other facets related to books. These facets are different from the ones we discussed for mobiles. Each category will have different facets, and they need to be designed properly so that customers can narrow down to their preferred products, irrespective of the category they are looking into. The takeaway from this is that categorization and feature listing of products should be taken care of; misrepresentation of features can lead to incorrect search results.

Another takeaway is that we need to provide multiple facets in the search results. For example, while displaying the list of all mobiles, we need to provide facets for brand. Once a brand is selected, another set of facets for operating systems, network, and mobile phone features has to be provided. As more and more facets are selected, we still need to show facets within the remaining products.

Example of facet selection on Amazon.com

Another problem is that we do not know what product the customer is searching for. A site that displays a huge list of products from different categories, such as electronics, mobiles, clothes, or books, needs to be able to identify what the customer is searching for. A customer can be searching for samsung, which can be in mobiles, tablets, electronics, or computers. Similarly, for books, the site should be able to identify whether the customer has input the author name or the book name. Identifying the input helps increase the relevance of the result set by increasing the precision of the search results. Most e-commerce sites provide search suggestions that include the category to help customers target the right category during their search.
Amazon, for example, provides search suggestions that include both latest searched terms and products along with category-wise suggestions: Search suggestions on Amazon.com It is also important that products are added to the index as soon as they are available. It is even more important that they are removed from the index or marked as sold out as soon as their stock is exhausted. For this, modifications to the index should be immediately visible in the search. This is facilitated by a concept in Solr known as Near Real Time Indexing and Search (NRT). The job site problem statement A job site serves a dual purpose. On the one hand, it provides jobs to candidates, and on the other, it serves as a database of registered candidates' profiles for companies to shortlist. A job search has to be very intuitive for the candidates so that they can find jobs suiting their skills, position, industry, role, and location, or even by the company name. As it is important to keep the candidates engaged during their job search, it is important to provide facets on the abovementioned criteria so that they can narrow down to the job of their choice. The searches by candidates are not very elaborate. If the search is generic, the results need to have high precision. On the other hand, if the search does not return many results, then recall has to be high to keep the candidate engaged on the site. Providing a personalized job search to candidates on the basis of their profiles and past search history makes sense for the candidates. On the recruiter side, the search provided over the candidate database is required to have a huge set of fields to search upon every data point that the candidate has entered. The recruiters are very selective when it comes to searching for candidates for specific jobs. Educational qualification, industry, function, key skills, designation, location, and experience are some of the fields provided to the recruiter during a search. In such cases, the precision has to be high. The recruiter would like a certain candidate and may be interested in more candidates similar to the selected candidate. The more like this search in Solr can be used to provide a search for candidates similar to a selected candidate. NRT is important as the site should be able to provide a job or a candidate for a search as soon as any one of them is added to the database by either the recruiter or the candidate. The promptness of the site is an important factor in keeping users engaged on the site. Challenges of large-scale indexing Let us understand how indexing happens and what can be done to speed it up. We will also look at the challenges faced during the indexing of a large number of documents or bulky documents. An e-commerce site is a perfect example of a site containing a large number of products, while a job site is an example of a search where documents are bulky because of the content in candidate resumes. During indexing, Solr first analyzes the documents and converts them into tokens that are stored in the RAM buffer. When the RAM buffer is full, data is flushed into a segment on the disk. When the numbers of segments are more than that defined in the MergeFactor class of the Solr configuration, the segments are merged. Data is also written to disk when a commit is made in Solr. Let us discuss a few points to make Solr indexing fast and to handle a large index containing a huge number of documents. 
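Before going through those points, here is a minimal SolrJ sketch of the Near Real Time (NRT) behaviour mentioned above: a document is added with a commitWithin window so that it becomes searchable shortly after indexing, without waiting for an explicit hard commit. This example is not from the book; the Solr URL, the one-second window, and the id and desc field names are assumptions for illustration only.

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class NrtIndexingSketch {
    public static void main(String[] args) throws Exception {
        // Assumed Solr 4.x core running locally; change the URL for your setup.
        SolrServer server = new HttpSolrServer("http://localhost:8983/solr");

        // A product that has just come into stock and should be searchable quickly.
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "mobile-1001");
        doc.addField("desc", "Newly stocked mobile phone");

        // commitWithin (in milliseconds) asks Solr to make the document visible
        // to searches within roughly one second, instead of waiting for the next
        // explicit commit. This is the usual way to request NRT visibility from SolrJ.
        server.add(doc, 1000);

        server.shutdown();
    }
}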
Using multiple threads for indexing on Solr

We can divide our data into smaller chunks, and each chunk can be indexed in a separate thread. Ideally, the number of threads should be twice the number of processor cores to avoid a lot of context switching. However, we can increase the number of threads beyond that and check for performance improvement.

Using the Java binary format of data for indexing

Instead of using XML files, we can use the Java bin format for indexing. This removes a lot of the overhead of creating an XML file and then parsing it and converting it into a usable binary format on the server. The way to use the Java bin format is to write our own program for creating fields, adding fields to documents, and finally adding documents to the index. Here is a sample code:

import java.util.ArrayList;
import java.util.Collection;

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

//Create an instance of the Solr server
String SOLR_URL = "http://localhost:8983/solr";
SolrServer server = new HttpSolrServer(SOLR_URL);

//Create a collection of documents to add to the Solr server
SolrInputDocument doc1 = new SolrInputDocument();
doc1.addField("id", 1);
doc1.addField("desc", "description text for doc 1");

SolrInputDocument doc2 = new SolrInputDocument();
doc2.addField("id", 2);
doc2.addField("desc", "description text for doc 2");

Collection<SolrInputDocument> docs = new ArrayList<SolrInputDocument>();
docs.add(doc1);
docs.add(doc2);

//Add the collection of documents to the Solr server and commit.
server.add(docs);
server.commit();

Here is the reference to the API for the HttpSolrServer program: http://lucene.apache.org/solr/4_6_0/solr-solrj/org/apache/solr/client/solrj/impl/HttpSolrServer.html. Add all files from the <solr_directory>/dist folder to the classpath for compiling and running the HttpSolrServer program.

Using the ConcurrentUpdateSolrServer class for indexing

Using the ConcurrentUpdateSolrServer class instead of the HttpSolrServer class can provide performance benefits, as the former uses buffers to store processed documents before sending them to the Solr server. We can also specify the number of background threads to use to empty the buffers. The API docs for ConcurrentUpdateSolrServer are found at the following link: http://lucene.apache.org/solr/4_6_0/solr-solrj/org/apache/solr/client/solrj/impl/ConcurrentUpdateSolrServer.html

The constructor for the ConcurrentUpdateSolrServer class is defined as:

ConcurrentUpdateSolrServer(String solrServerUrl, int queueSize, int threadCount)

Here, queueSize is the size of the buffer and threadCount is the number of background threads used to flush the buffers to the index on disk. Note that using too many threads can increase the context switching between threads and reduce performance. In order to optimize the number of threads, we should monitor performance (documents indexed per minute) after each increase and ensure that there is no decrease in performance.

Summary

In this article, we saw in brief the problems faced by e-commerce and job websites during indexing and search. We discussed the challenges faced while indexing a large number of documents. We also saw some tips on improving the speed of indexing.

Resources for Article:

Further resources on this subject:
Tuning Solr JVM and Container [article]
Apache Solr PHP Integration [article]
Boost Your search [article]

Creating Cool Content

Packt
23 Apr 2015
26 min read
In this article by Alex Ogorek, author of the book Mastering Cocos2d Game Development you'll be learning how to implement the really complex, subtle game mechanics that not many developers do. This is what separates the good games from the great games. There will be many examples, tutorials, and code snippets in this article intended for adaption in your own projects, so feel free to come back at any time to look at something you may have either missed the first time, or are just curious to know about in general. In this article, we will cover the following topics: Adding a table for scores Adding subtle sliding to the units Creating movements on a Bézier curve instead of straight paths (For more resources related to this topic, see here.) Adding a table for scores Because "we want a way to show the user their past high scores, in the GameOver scene, we're going to add a table that displays the most recent high scores that are saved. For this, we're going to use CCTableView. It's still relatively new, but it works for what we're going to use it. CCTableView versus UITableView Although UITableView might be known to some of you who've made non-Cocos2d apps before, you "should be aware of its downfalls when it comes to using it within Cocos2d. For example, if you want a BMFont in your table, you can't add LabelBMFont (you could try to convert the BMFont into a TTF font and use that within the table, but that's outside the scope of this book). If you still wish to use a UITableView object (or any UIKit element for that matter), you can create the object like normal, and add it to the scene, like this (tblScores is the name of the UITableView object): [[[CCDirector sharedDirector] view] addSubview:tblScores]; Saving high scores (NSUserDefaults) Before "we display any high scores, we have to make sure we save them. "The easiest way to do this is by making use of Apple's built-in data preservation tool—NSUserDefaults. If you've never used it before, let me tell you that it's basically a dictionary with "save" mechanics that stores the values in the device so that the next time the user loads the device, the values are available for the app. Also, because there are three different values we're tracking for each gameplay, let's only say a given game is better than another game when the total score is greater. Therefore, let's create a saveHighScore method that will go through all the total scores in our saved list and see whether the current total score is greater than any of the saved scores. If so, it will insert itself and bump the rest down. In MainScene.m, add the following method: -(NSInteger)saveHighScore { //save top 20 scores //an array of Dictionaries... //keys in each dictionary: // [DictTotalScore] // [DictTurnsSurvived] // [DictUnitsKilled]   //read the array of high scores saved on the user's device NSMutableArray *arrScores = [[[NSUserDefaults standardUserDefaults] arrayForKey:DataHighScores] mutableCopy]; //sentinel value of -1 (in other words, if a high score was not found on this play through) NSInteger index = -1; //loop through the scores in the array for (NSDictionary *dictHighScore in arrScores) { //if the current game's total score is greater than the score stored in the current index of the array...    
if (numTotalScore > [dictHighScore[DictTotalScore] integerValue])    { //then store that index and break out of the loop      index = [arrScores indexOfObject:dictHighScore];      break;    } } //if a new high score was found if (index > -1) { //create a dictionary to store the score, turns survived, and units killed    NSDictionary *newHighScore = @{ DictTotalScore : @(numTotalScore),    DictTurnsSurvived : @(numTurnSurvived),    DictUnitsKilled : @(numUnitsKilled) };    //then insert that dictionary into the array of high scores    [arrScores insertObject:newHighScore atIndex:index];    //remove the very last object in the high score list (in other words, limit the number of scores)    [arrScores removeLastObject];    //then save the array    [[NSUserDefaults standardUserDefaults] setObject:arrScores forKey:DataHighScores];    [[NSUserDefaults standardUserDefaults] synchronize]; }   //finally return the index of the high score (whether it's -1 or an actual value within the array) return index; } Finally, call "this method in the endGame method right before you transition to the next scene: -(void)endGame { //call the method here to save the high score, then grab the index of the high score within the array NSInteger hsIndex = [self saveHighScore]; NSDictionary *scoreData = @{ DictTotalScore : @(numTotalScore), DictTurnsSurvived : @(numTurnSurvived), DictUnitsKilled : @(numUnitsKilled), DictHighScoreIndex : @(hsIndex)}; [[CCDirector sharedDirector] replaceScene:[GameOverScene sceneWithScoreData:scoreData]]; } Now that we have our high scores being saved, let's create the table to display them. Creating the table It's "really simple to set up a CCTableView object. All we need to do is modify the contentSize object, and then put in a few methods that handle the size and content of each cell. So first, open the GameOverScene.h file and set the scene as a data source for the CCTableView: @interface GameOverScene : CCScene <CCTableViewDataSource> Then, in the initWithScoreData method, create the header labels as well as initialize the CCTableView: //get the high score array from the user's device arrScores = [[NSUserDefaults standardUserDefaults] arrayForKey:DataHighScores];    //create labels CCLabelBMFont *lblTableTotalScore = [CCLabelBMFont labelWithString:@"Total Score:" fntFile:@"bmFont.fnt"];   CCLabelBMFont *lblTableUnitsKilled = [CCLabelBMFont labelWithString:@"Units Killed:" fntFile:@"bmFont.fnt"];   CCLabelBMFont *lblTableTurnsSurvived = [CCLabelBMFont labelWithString:@"Turns Survived:" fntFile:@"bmFont.fnt"]; //position the labels lblTableTotalScore.position = ccp(winSize.width * 0.5, winSize.height * 0.85); lblTableUnitsKilled.position = ccp(winSize.width * 0.675, winSize.height * 0.85); lblTableTurnsSurvived.position = ccp(winSize.width * 0.875, winSize.height * 0.85); //add the labels to the scene [self addChild:lblTableTurnsSurvived]; [self addChild:lblTableTotalScore]; [self addChild:lblTableUnitsKilled]; //create the tableview and add it to the scene CCTableView * tblScores = [CCTableView node]; tblScores.contentSize = CGSizeMake(0.6, 0.4); CGFloat ratioX = (1.0 - tblScores.contentSize.width) * 0.75; CGFloat ratioY = (1.0 - tblScores.contentSize.height) / 2; tblScores.position = ccp(winSize.width * ratioX, winSize.height * ratioY); tblScores.dataSource = self; tblScores.block = ^(CCTableView *table){    //if the press a cell, do something here.    
//NSLog(@"Cell %ld", (long)table.selectedRow); }; [self addChild: tblScores]; With the CCTableView object's data source being set to self we can add the three methods that will determine exactly how our table looks and what data goes in each cell (that is, row). Note that if we don't set the data source, the table view's method will not be called; and if we set it to anything other than self, the methods will be called on that object/class instead. That being" said, add these three methods: -(CCTableViewCell*)tableView:(CCTableView *)tableView nodeForRowAtIndex:(NSUInteger)index { CCTableViewCell* cell = [CCTableViewCell node]; cell.contentSizeType = CCSizeTypeMake(CCSizeUnitNormalized, CCSizeUnitPoints); cell.contentSize = CGSizeMake(1, 40); // Color every other row differently CCNodeColor* bg; if (index % 2 != 0) bg = [CCNodeColor nodeWithColor:[CCColor colorWithRed:0 green:0 blue:0 alpha:0.3]]; else bg = [CCNodeColor nodeWithColor: [CCColor colorWithRed:0 green:0 blue:0 alpha:0.2]]; bg.userInteractionEnabled = NO; bg.contentSizeType = CCSizeTypeNormalized; bg.contentSize = CGSizeMake(1, 1); [cell addChild:bg]; return cell; }   -(NSUInteger)tableViewNumberOfRows:(CCTableView *)tableView { return [arrScores count]; }   -(float)tableView:(CCTableView *)tableView heightForRowAtIndex:(NSUInteger)index { return 40.f; } The first method, tableView:nodeForRowAtIndex:, will format each cell "based on which index it is. For now, we're going to color each cell in one of two different colors. The second method, tableViewNumberOfRows:, returns the number of rows, "or cells, that will be in the table view. Since we know there are going to be 20, "we can technically type 20, but what if we decide to change that number later? "So, let's stick with using the count of the array. The third "method, tableView:heightForRowAtIndex:, is meant to return the height of the row, or cell, at the given index. Since we aren't doing anything different with any cell in particular, we can hardcode this value to a fairly reasonable height of 40. At this point, you should be able to run the game, and when you lose, you'll be taken to the game over screen with the labels across the top as well as a table that scrolls on the right side of the screen. It's good practice when learning Cocos2d to just mess around with stuff to see what sort of effects you can make. For example, you could try using some ScaleTo actions to scale the text up from 0, or use a MoveTo action to slide it from the bottom or the side. Feel free to see whether you can create a cool way to display the text right now. Now that we have the table in place, let's get the data displayed, shall we? Showing the scores Now that "we have our table created, it's a simple addition to our code to get the proper numbers to display correctly. In the nodeForRowAtIndex method, add the following block of code right after adding the background color to the cell: //Create the 4 labels that will be used within the cell (row). 
CCLabelBMFont *lblScoreNumber = [CCLabelBMFont labelWithString: [NSString stringWithFormat:@"%d)", index+1] fntFile:@"bmFont.fnt"]; //Set the anchor point to the middle-right (default middle-middle) lblScoreNumber.anchorPoint = ccp(1,0.5); CCLabelBMFont *lblTotalScore = [CCLabelBMFont labelWithString:[NSString stringWithFormat:@"%d", [arrScores[index][DictTotalScore] integerValue]] fntFile:@"bmFont.fnt"];   CCLabelBMFont *lblUnitsKilled = [CCLabelBMFont labelWithString:[NSString stringWithFormat:@"%d", [arrScores[index][DictUnitsKilled] integerValue]] fntFile:@"bmFont.fnt"];   CCLabelBMFont *lblTurnsSurvived = [CCLabelBMFont labelWithString:[NSString stringWithFormat:@"%d", [arrScores[index][DictTurnsSurvived] integerValue]] fntFile:@"bmFont.fnt"]; //set the position type of each label to normalized (where (0,0) is the bottom left of its parent and (1,1) is the top right of its parent) lblScoreNumber.positionType = lblTotalScore.positionType = lblUnitsKilled.positionType = lblTurnsSurvived.positionType = CCPositionTypeNormalized;   //position all of the labels within the cell lblScoreNumber.position = ccp(0.15,0.5); lblTotalScore.position = ccp(0.35,0.5); lblUnitsKilled.position = ccp(0.6,0.5); lblTurnsSurvived.position = ccp(0.9,0.5); //if the index we're iterating through is the same index as our High Score index... if (index == highScoreIndex) { //then set the color of all the labels to a golden color    lblScoreNumber.color =    lblTotalScore.color =    lblUnitsKilled.color =    lblTurnsSurvived.color = [CCColor colorWithRed:1 green:183/255.f blue:0]; } //add all of the labels to the individual cell [cell addChild:lblScoreNumber]; [cell addChild:lblTurnsSurvived]; [cell addChild:lblTotalScore]; [cell addChild:lblUnitsKilled]; And that's it! When you play the game and end up at the game over screen, you'll see the high scores being displayed (even the scores from earlier attempts, because they were saved, remember?). Notice the high score that is yellow. It's an indication that the score you got in the game you just played is on the scoreboard, and shows you where it is. Although the CCTableView might feel a bit weird with things disappearing and reappearing as you scroll, let's get some Threes!—like sliding into our game. If you're considering adding a CCTableView to your own project, the key takeaway here is to make sure you modify the contentSize and position properly. By default, the contentSize is a normalized CGSize, so from 0 to 1, and the anchor point is (0,0). Plus, make sure you perform these two steps: Set the data source of the table view Add the three table view methods With all that in mind, it should be relatively easy to implement a CCTableView. Adding subtle sliding to the units If you've ever played Threes! (or if you haven't, check out the trailer at http://asherv.com/threes/, and maybe even download the game on your phone), you would be aware of the sliding feature when a user begins to make "their move but hasn't yet completed the move. At the speed of the dragging finger, the units slide in the direction they're going to move, showing the user where each unit will go and how each unit will combine with another. This is useful as it not only adds that extra layer of "cool factor" but also provides a preview of the future for the user if they want to revert their decision ahead of time and make a different, more calculated move. 
Here's a side note: if you want your game to go really viral, you have to make the user believe it was their fault that they lost, and not your "stupid game mechanics" (as some players might say). Think Angry Birds, Smash Hit, Crossy Road, Threes!, Tiny Wings… the list goes on and on with more games that became popular, and all had one underlying theme: when the user loses, it was entirely in their control to win or lose, and they made the wrong move. This" unseen mechanic pushes players to play again with a better strategy in mind. And this is exactly why we want our users to see their move before it gets made. It's a win-win situation for both the developers and the players. Sliding one unit If we can get one unit to slide, we can surely get the rest of the units to slide by simply looping through them, modularizing the code, or some other form of generalization. That being said, we need to set up the Unit class so that it can detect how far "the finger has dragged. Thus, we can determine how far to move the unit. So, "open Unit.h and add the following variable. It will track the distance from the previous touch position: @property (nonatomic, assign) CGPoint previousTouchPos; Then, in the touchMoved method of Unit.m, add the following assignment to previousTouchPos. It sets the previous touch position to the touch-down position, but only after the distance is greater than 20 units: if (!self.isBeingDragged && ccpDistance(touchPos, self.touchDownPos) > 20) { self.isBeingDragged = YES; //add it here: self.previousTouchPos = self.touchDownPos; Once that's in place, we can begin calculating the distance while the finger is being dragged. To do that, we'll do a simple check. Add the following block of code at the end of touchMoved, after the end of the initial if block: //only if the unit is currently being dragged if (self.isBeingDragged) {    CGFloat dist = 0;    //if the direction the unit is being dragged is either UP or "     DOWN    if (self.dragDirection == DirUp || self.dragDirection == DirDown)    //then subtract the current touch position's Y-value from the "     previously-recorded Y-value to determine the distance to "     move      dist = touchPos.y - self.previousTouchPos.y;      //else if the direction the unit is being dragged is either "       LEFT or RIGHT    else if (self.dragDirection == DirLeft ||        self.dragDirection == DirRight)        //then subtract the current touch position's Y-value from "         the previously-recorded Y-value to determine the "         distance to move      dist = touchPos.x - self.previousTouchPos.x;   //then assign the touch position for the next iteration of touchMoved to work properly self.previousTouchPos = touchPos;   } The "assignment of previousTouchPos at the end will ensure that while the unit is being dragged, we continue to update the touch position so that we can determine the distance. Plus, the distance is calculated in only the direction in which the unit is being dragged (up and down are denoted by Y, and left and right are denoted by X). Now that we have the distance between finger drags being calculated, let's push "this into a function that will move our unit based on which direction it's being dragged in. So, right after you've calculated dist in the previous code block, "call the following method to move our unit based on the amount dragged: dist /= 2; //optional [self slideUnitWithDistance:dist "withDragDirection:self.dragDirection]; Dividing the distance by 2 is optional. 
You may think the squares are too small, and want the user to be able to see their square. So note that dividing by 2, or a larger number, will mean that for every 1 point the finger moves, the unit will move by 1/2 (or less) points. With that method call being ready, we need to implement it, so add the following method body for now. Since this method is rather complicated, it's going to be "added in parts: -(void)slideUnitWithDistance:(CGFloat)dist withDragDirection:(enum UnitDirection)dir { } The first thing "we need to do is set up a variable to calculate the new x and y positions of the unit. We'll call these newX and newY, and set them to the unit's current position: CGFloat newX = self.position.x, newY = self.position.y; Next, we want to grab the position that the unit starts at, that is, the position the "unit would be at if it was positioned at its current grid coordinate. To do that, "we're going to call the getPositionForGridCoordinate method from MainScene, (since that's where the positions are being calculated anyway, we might as well use that function): CGPoint originalPos = [MainScene "getPositionForGridCoord:self.gridPos]; Next, we're going to move the newX or newY based on the direction in which the unit is being dragged. For now, let's just add the up direction: if (self.dragDirection == DirUp) {    newY += dist;    if (newY > originalPos.y + self.gridWidth)      newY = originalPos.y + self.gridWidth;    else if (newY < originalPos.y)      newY = originalPos.y; } In this if block, we're first going to add the distance to the newY variable "(because we're going up, we're adding to Y instead of X). Then, we want to "make sure the position is at most 1 square up. We're going to use the gridWidth (which is essentially the width of the square, assigned in the initCommon method). Also, we need to make sure that if they're bringing the square back to its original position, it doesn't go into the square beneath it. So let's add the rest of the directions as else if statements: else if (self.dragDirection == DirDown) {    newY += dist;    if (newY < originalPos.y - self.gridWidth)      newY = originalPos.y - self.gridWidth;    else if (newY > originalPos.y)      newY = originalPos.y; } else if (self.dragDirection == DirLeft) {    newX += dist;    if (newX < originalPos.x - self.gridWidth)      newX = originalPos.x - self.gridWidth;    else if (newX > originalPos.x)      newX = originalPos.x; } else if (self.dragDirection == DirRight) {    newX += dist;    if (newX > originalPos.x + self.gridWidth)      newX = originalPos.x + self.gridWidth;    else if (newX < originalPos.x)      newX = originalPos.x; } Finally, we "will set the position of the unit based on the newly calculated "x and y positions: self.position = ccp(newX, newY); Running the game at this point should cause the unit you drag to slide along "with your finger. Nice, huh? Since we have a function that moves one unit, "we can very easily alter it so that every unit can be moved like this. But first, there's something you've probably noticed a while ago (or maybe just recently), and that's the unit movement being canceled only when you bring your finger back to the original touch down position. Because we're dragging the unit itself, we can "cancel" the move by dragging the unit back to where it started. However, the finger might be in a completely different position, so we need to modify how the cancelling gets determined. 
To do that, in your touchEnded method of Unit.m, locate this if statement: if (ccpDistance(touchPos, self.touchDownPos) > "self.boundingBox.size.width/2) Change it to the following, which will determine the unit's distance, and not the finger's distance: CGPoint oldSelfPos = [MainScene "getPositionForGridCoord:self.gridPos];   CGFloat dist = ccpDistance(oldSelfPos, self.position); if (dist > self.gridWidth/2) Yes, this means you no longer need the touchPos variable in touchEnded if "you're getting that "warning and wish to get rid of it. But that's it for sliding 1 unit. Now we're ready to slide all the units, so let's do it! Sliding all units Now "that we have the dragging unit being slid, let's continue and make all the units slide (even the enemy units so that we can better predict our troops' movement). First, we need a way to move all the units on the screen. However, since the Unit class only contains information about the individual unit (which is a good thing), "we need to call a method in MainScene, since that's where the arrays of units are. Moreover, we cannot simply call [MainScene method], since the arrays are "instance variables, and instance variables must be accessed through an instance "of the object itself. That being said, because we know that our unit will be added to the scene as "a child, we can use Cocos2d to our advantage, and call an instance method on the MainScene class via the parent parameter. So, in touchMoved of Unit.m, make the following change: [(MainScene*)self.parent slideAllUnitsWithDistance:dist "withDragDirection:self.dragDirection]; //[self slideUnitWithDistance:dist "withDragDirection:self.dragDirection]; Basically we've commented out (or deleted) the old method call here, and instead called it on our parent object (which we cast as a MainScene so that we know "which functions it has). But we don't have that method created yet, so in MainScene.h, add the following method declaration: -(void)slideAllUnitsWithDistance:(CGFloat)dist "withDragDirection:(enum UnitDirection)dir; Just in case you haven't noticed, the enum UnitDirection is declared in Unit.h, which is why MainScene.h imports Unit.h—so that we can make use of that enum in this class, and the function to be more specific. Then in MainScene.m, we're going to loop through both the friendly and enemy arrays, and call the slideUnitWithDistance function on each individual unit: -(void)slideAllUnitsWithDistance:(CGFloat)dist "withDragDirection:(enum UnitDirection)dir { for (Unit *u in arrFriendlies)    [u slideUnitWithDistance:dist withDragDirection:dir]; for (Unit *u in arrEnemies)    [u slideUnitWithDistance:dist withDragDirection:dir]; } However, that" still isn't functional, as we haven't declared that function in the "header file for the Unit class. So go ahead and do that now. Declare the function header in Unit.h: -(void)slideUnitWithDistance:(CGFloat)dist withDragDirection:(enum "UnitDirection)dir; We're almost done. We initially set up our slideUnitWithDistance method with a drag direction in mind. However, only the unit that's currently being dragged will have a drag direction. Every other unit will need to use the direction it's currently facing "(that is, the direction in which it's already going). To do that, we just need to modify how the slideUnitWithDistance method does its checking to determine which direction to modify the distance by. But first, we need to handle the negatives. What does that mean? 
Well, if you're dragging a unit to the left and a unit being moved is supposed to be moving to the left, it will work properly, as x-10 (for example) will still be less than the grid's width. However, if you're dragging left and a unit being moved is supposed to be moving right, it won't be moving at all, as it tries to add a negative value x -10, but because it needs to be moving to the right, it'll encounter the left-bound right away (of less than the original position), and stay still. The following diagram should help explain what is meant by "handling negatives." As you can see, in the top section, when the non-dragged unit is supposed to be going left by 10 (in other words, negative 10 in the x direction), it works. But when the non-dragged unit is going the opposite sign (in other words, positive 10 in the x direction), it doesn't. To" handle this, we set up a pretty complicated if statement. It checks when the drag direction and the unit's own direction are opposite (positive versus negative), and multiplies the distance by -1 (flips it). Add this to the top of the slideUnitWithDistance method, right after you grab the newX and the original position: -(void)slideUnitWithDistance:(CGFloat)dist withDragDirection:(enum UnitDirection)dir { CGFloat newX = self.position.x, newY = self.position.y; CGPoint originalPos = [MainScene getPositionForGridCoord:self.gridPos]; if (!self.isBeingDragged &&   (((self.direction == DirUp || self.direction == DirRight) && (dir == DirDown || dir == DirLeft)) ||   ((self.direction == DirDown || self.direction == DirLeft) && (dir == DirUp || dir == DirRight)))) {    dist *= -1; } } The logic of this if statement works is as follows: Suppose the unit is not being dragged. Also suppose that either the direction is positive and the drag direction is negative, or the direction is negative and the drag direction is positive. Then multiply by -1. Finally, as mentioned earlier, we just need to handle the non-dragged units. So, in every if statement, add an "or" portion that will check for the same direction, but only if the unit is not currently being dragged. In other words, in the slideUnitWithDistance method, modify your if statements to look like this: if (self.dragDirection == DirUp || (!self.isBeingDragged && self.direction == DirUp)) {} else if (self.dragDirection == DirDown || (!self.isBeingDragged && self.direction == DirDown)) {} else if (self.dragDirection == DirLeft || (!self.isBeingDragged && self.direction == DirLeft)) {} else if (self.dragDirection == DirRight || (!self.isBeingDragged && self.direction == DirRight)) {} Finally, we can run the game. Bam! All the units go gliding across the screen with our drag. Isn't it lovely? Now the player can better choose their move. That's it for the sliding portion. The key to unit sliding is to loop through the arrays to ensure that all the units get moved by an equal amount, hence passing the distance to the move function. Creating movements on a Bézier curve If you don't know what a Bézier curve is, it's basically a line that goes from point A to point B over a curve. Instead of being a straight line with two points, it uses a second set of points called control points that bend the line in a smooth way. When you want to apply movement with animations in Cocos2d, it's very tempting to queue up a bunch of MoveTo actions in a sequence. However, it's going to look a lot nicer ( in both the game and the code) if you use a smoother Bézier curve animation. 
Here's a good example of what a Bézier curve looks like: As you can see, the red line goes from point P0 to P3. However, the line is influenced in the direction of the control points, P1 and P2. Examples of using a Bézier curve Let's list" a few examples where it would be a good choice to use a Bézier curve instead of just the regular MoveTo or MoveBy actions: A character that will perform a jumping animation, for example, in Super Mario Bros A boomerang as a weapon that the player throws Launching a missile or rocket and giving it a parabolic curve A tutorial hand that indicates a curved path the user must make with their finger A skateboarder on a half-pipe ramp (if not done with Chipmunk) There are obviously a lot of other examples that could use a Bézier curve for their movement. But let's actually code one, shall we? Sample project – Bézier map route First, to make things go a lot faster—as this isn't going to be part of the book's project—simply download the project from the code repository or the website. If you open the project and run it on your device or a simulator, you will notice a blue screen and a square in the bottom-left corner. If you tap anywhere on the screen, you'll see the blue square make an M shape ending in the bottom-right corner. If you hold your finger, it will repeat. Tap again and the animation will reset. Imagine the path this square takes is over a map, and indicates what route a player will travel with their character. This is a very choppy, very sharp path. Generally, paths are curved, so let's make one that is! Here is a screenshot that shows a very straight path of the blue square: The following screenshot shows the Bézier path of the yellow square: Curved M-shape Open MainScene.h and add another CCNodeColor variable, named unitBezier: CCNodeColor *unitBezier; Then open MainScene.m and add the following code to the init method so that your yellow block shows up on the screen: unitBezier = [[CCNodeColor alloc] initWithColor:[CCColor colorWithRed:1 green:1 blue:0] width:50 height:50]; [self addChild:unitBezier]; CCNodeColor *shadow2 = [[CCNodeColor alloc] initWithColor:[CCColor blackColor] width:50 height:50]; shadow2.anchorPoint = ccp(0.5,0.5); shadow2.position = ccp(26,24); shadow2.opacity = 0.5; [unitBezier addChild:shadow2 z:-1]; Then, in the sendFirstUnit method, add the lines of code that will reset the yellow block's position as well as queue up the method to move the yellow block: -(void)sendFirstUnit { unitRegular.position = ccp(0,0); //Add these 2 lines unitBezier.position = ccp(0,0); [self scheduleOnce:@selector(sendSecondUnit) delay:2]; CCActionMoveTo *move1 = [CCActionMoveTo actionWithDuration:0.5 "position:ccp(winSize.width/4, winSize.height * 0.75)]; CCActionMoveTo *move2 = [CCActionMoveTo actionWithDuration:0.5 "position:ccp(winSize.width/2, winSize.height/4)]; CCActionMoveTo *move3 = [CCActionMoveTo actionWithDuration:0.5 "position:ccp(winSize.width*3/4, winSize.height * 0.75)]; CCActionMoveTo *move4 = [CCActionMoveTo actionWithDuration:0.5 "position:ccp(winSize.width - 50, 0)]; [unitRegular runAction:[CCActionSequence actions:move1, move2, "move3, move4, nil]]; } After this, you'll need to actually create the sendSecondUnit method, like this: -(void)sendSecondUnit { ccBezierConfig bezConfig1; bezConfig1.controlPoint_1 = ccp(0, winSize.height); bezConfig1.controlPoint_2 = ccp(winSize.width*3/8, "winSize.height); bezConfig1.endPosition = ccp(winSize.width*3/8, "winSize.height/2); CCActionBezierTo *bez1 = [CCActionBezierTo 
"actionWithDuration:1.0 bezier:bezConfig1]; ccBezierConfig bezConfig2; bezConfig2.controlPoint_1 = ccp(winSize.width*3/8, 0); bezConfig2.controlPoint_2 = ccp(winSize.width*5/8, 0); bezConfig2.endPosition = ccp(winSize.width*5/8, winSize.height/2); CCActionBezierBy *bez2 = [CCActionBezierTo "actionWithDuration:1.0 bezier:bezConfig2]; ccBezierConfig bezConfig3; bezConfig3.controlPoint_1 = ccp(winSize.width*5/8, "winSize.height); bezConfig3.controlPoint_2 = ccp(winSize.width, winSize.height); bezConfig3.endPosition = ccp(winSize.width - 50, 0); CCActionBezierTo *bez3 = [CCActionBezierTo "actionWithDuration:1.0 bezier:bezConfig3]; [unitBezier runAction:[CCActionSequence actions:bez1, bez2, bez3, nil]]; } The preceding method creates three Bézier configurations and attaches them to a MoveTo command that takes a Bézier configuration. The reason for this is that each Bézier configuration can take only two control points. As you can see in this marked-up screenshot, where each white and red square represents a control point, you can make only a U-shaped parabola with a single Bézier configuration. Thus, to make three U-shapes, you need three Bézier configurations. Finally, make sure that in the touchBegan method, you make the unitBezier stop all its actions (that is, stop on reset): [unitBezier stopAllActions]; And that's it! When you run the project and tap on the screen (or tap and hold), you'll see the blue square M-shape its way across, followed by the yellow square in its squiggly M-shape. If you" want to adapt the Bézier MoveTo or MoveBy actions for your own project, you should know that you can create only one U-shape with each Bézier configuration. They're fairly easy to implement and can quickly be copied and pasted, as shown in the sendSecondUnit function. Plus, as the control points and end position are just CGPoint values, they can be relative (that is, relative to the unit's current position, the world's position, or an enemy's position), and as a regular CCAction, they can be run with any CCNode object quite easily. Summary In this article, you learned how to do a variety of things, from making a score table and previewing the next move, to making use of Bézier curves. The code was built with a copy-paste mindset, so it can be adapted for any project without much reworking (if it is required at all). Resources for Article: Further resources on this subject: Cocos2d-x: Installation [article] Dragging a CCNode in Cocos2D-Swift [article] Animations in Cocos2d-x [article]

Constructing Common UI Widgets

Packt
22 Apr 2015
21 min read
One of the biggest features that draws developers to Ext JS is the vast array of UI widgets available out of the box. The ease with which they can be integrated with each other and the attractive and consistent visuals each of them offers is also a big attraction. No other framework can compete on this front, and this is a huge reason Ext JS leads the field of large-scale web applications. In this article by Stuart Ashworth and Andrew Duncan by authors of the book, Ext JS Essentials, we will look at how UI widgets fit into the framework's structure, how they interact with each other, and how we can retrieve and reference them. We will then delve under the surface and investigate the lifecycle of a component and the stages it will go through during the lifetime of an application. (For more resources related to this topic, see here.) Anatomy of a UI widget Every UI element in Ext JS extends from the base component class Ext.Component. This class is responsible for rendering UI elements to the HTML document. They are generally sized and positioned by layouts used by their parent components and participate in the automatic component lifecycle process. You can imagine an instance of Ext.Component as a single section of the user interface in a similar way that you might think of a DOM element when building traditional web interfaces. Each subclass of Ext.Component builds upon this simple fact and is responsible for generating more complex HTML structures or combining multiple Ext.Components to create a more complex interface. Ext.Component classes, however, can't contain other Ext.Components. To combine components, one must use the Ext.container.Container class, which itself extends from Ext.Component. This class allows multiple components to be rendered inside it and have their size and positioning managed by the framework's layout classes. Components and HTML Creating and manipulating UIs using components requires a slightly different way of thinking than you may be used to when creating interactive websites with libraries such as jQuery. The Ext.Component class provides a layer of abstraction from the underlying HTML and allows us to encapsulate additional logic to build and manipulate this HTML. This concept is different from the way other libraries allow you to manipulate UI elements and provides a hurdle for new developers to get over. The Ext.Component class generates HTML for us, which we rarely need to interact with directly; instead, we manipulate the configuration and properties of the component. The following code and screenshot show the HTML generated by a simple Ext.Component instance: var simpleComponent = Ext.create('Ext.Component', { html   : 'Ext JS Essentials!', renderTo: Ext.getBody() }); As you can see, a simple <DIV> tag is created, which is given some CSS classes and an autogenerated ID, and has the HTML config displayed inside it. This generated HTML is created and managed by the Ext.dom.Element class, which wraps a DOM element and its children, offering us numerous helper methods to interrogate and manipulate it. After it is rendered, each Ext.Component instance has the element instance stored in its el property. You can then use this property to manipulate the underlying HTML that represents the component. As mentioned earlier, the el property won't be populated until the component has been rendered to the DOM. You should put logic dependent on altering the raw HTML of the component in an afterrender event listener or override the afterRender method. 
The following example shows how you can manipulate the underlying HTML once the component has been rendered. It will set the background color of the element to red: Ext.create('Ext.Component', { html     : 'Ext JS Essentials!', renderTo : Ext.getBody(), listeners: {    afterrender: function(comp) {      comp.el.setStyle('background-color', 'red');    } } }); It is important to understand that digging into and updating the HTML and CSS that Ext JS creates for you is a dangerous game to play and can result in unexpected results when the framework tries to update things itself. There is usually a framework way to achieve the manipulations you want to include, which we recommend you use first. We always advise new developers to try not to fight the framework too much when starting out. Instead, we encourage them to follow its conventions and patterns, rather than having to wrestle it to do things in the way they may have previously done when developing traditional websites and web apps. The component lifecycle When a component is created, it follows a lifecycle process that is important to understand, so as to have an awareness of the order in which things happen. By understanding this sequence of events, you will have a much better idea of where your logic will fit and ensure you have control over your components at the right points. The creation lifecycle The following process is followed when a new component is instantiated and rendered to the document by adding it to an existing container. When a component is shown explicitly (for example, without adding to a parent, such as a floating component) some additional steps are included. These have been denoted with a * in the following process. constructor First, the class' constructor function is executed, which triggers all of the other steps in turn. By overriding this function, we can add any setup code required for the component. Config options processed The next thing to be handled is the config options that are present in the class. This involves each option's apply and update methods being called, if they exist, meaning the values are available via the getter from now onwards. initComponent The initComponent method is now called and is generally used to apply configurations to the class and perform any initialization logic. render Once added to a container, or when the show method is called, the component is rendered to the document. boxready At this stage, the component is rendered and has been laid out by its parent's layout class, and is ready at its initial size. This event will only happen once on the component's first layout. activate (*) If the component is a floating item, then the activate event will fire, showing that the component is the active one on the screen. This will also fire when the component is brought back to focus, for example, in a Tab panel when a tab is selected. show (*) Similar to the previous step, the show event will fire when the component is finally visible on screen. The destruction process When we are removing a component from the Viewport and want to destroy it, it will follow a destruction sequence that we can use to ensure things are cleaned up sufficiently, so as to avoid memory leaks and so on. The framework takes care of the majority of this cleanup for us, but it is important that we tidy up any additional things we instantiate. hide (*) When a component is manually hidden (using the hide method), this event will fire and any additional hide logic can be included here. 
deactivate (*)

Similar to the activate step, this is fired when the component becomes inactive. As with the activate step, this will happen when floating and nested components are hidden and are no longer the items under focus.

destroy

This is the final step in the teardown process, and it is where the component and its internal properties and objects are cleaned up. At this stage, it is best to remove event handlers, destroy subcomponents, and ensure any other references are released.

Component Queries

Ext JS boasts a powerful system to retrieve references to components, called Component Queries. This is a CSS/XPath-style query syntax that lets us target broad sets of components or specific components within our application. For example, within our controller, we may want to find a button with the text "Save" within a component of type MyForm. In this section, we will demonstrate the Component Query syntax and how it can be used to select components. We will also go into detail about how it can be used within Ext.container.Container classes to scope selections.

xtypes

Before we dive in, it is important to understand the concept of xtypes in Ext JS. An xtype is a shorthand name for an Ext.Component that allows us to identify its declarative component configuration objects. For example, we can create a new Ext.Component as a child of an Ext.container.Container using an xtype with the following code:

Ext.create('Ext.Container', {
  items: [
    {
      xtype: 'component',
      html : 'My Component!'
    }
  ]
});

Using xtypes allows you to lazily instantiate components when required, rather than having them all created upfront. Common component xtypes include:

Classes                    xtypes
Ext.tab.Panel              tabpanel
Ext.container.Container    container
Ext.grid.Panel             gridpanel
Ext.Button                 button

xtypes form the basis of our Component Query syntax in the same way that element types (for example, div, p, span, and so on) do for CSS selectors. We will use these heavily in the following examples.
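Custom classes register their own xtype through the alias config, which is what makes them available for lazy instantiation and addressable by Component Queries in the same way as the built-in widgets listed above. The following is a minimal sketch and is not taken from the book's project; the class name, xtype, and field are hypothetical.

Ext.define('MyApp.view.UserForm', {
    extend: 'Ext.form.Panel',
    // The 'widget.' prefix registers the xtype 'userform' for this class
    alias : 'widget.userform',

    items : [
        {
            xtype     : 'textfield',
            fieldLabel: 'Name'
        }
    ]
});

// The custom xtype can now be used declaratively, and lazily, inside a container...
Ext.create('Ext.container.Container', {
    renderTo: Ext.getBody(),
    items   : [
        { xtype: 'userform' }
    ]
});

// ...and targeted with a Component Query, just like any built-in xtype.
var forms = Ext.ComponentQuery.query('userform');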
Sample component structure We will use the following sample component structure—a panel with a child tab panel, form, and buttons—to perform our example queries on: var panel = Ext.create('Ext.panel.Panel', { height : 500, width : 500, renderTo: Ext.getBody(), layout: {    type : 'vbox',    align: 'stretch' }, items : [    {      xtype : 'tabpanel',      itemId: 'mainTabPanel',      flex : 1,      items : [        {          xtype : 'panel',          title : 'Users',          itemId: 'usersPanel',          layout: {            type : 'vbox',            align: 'stretch'            },            tbar : [              {                xtype : 'button',                text : 'Edit',                itemId: 'editButton'                }              ],              items : [                {                  xtype : 'form',                  border : 0,                  items : [                  {                      xtype : 'textfield',                      fieldLabel: 'Name',                      allowBlank: false                    },                    {                      xtype : 'textfield',                      fieldLabel: 'Email',                      allowBlank: false                    }                  ],                  buttons: [                    {                      xtype : 'button',                      text : 'Save',                      action: 'saveUser'                    }                  ]                },                {                  xtype : 'grid',                  flex : 1,                  border : 0,                  columns: [                    {                     header : 'Name',                      dataIndex: 'Name',                      flex : 1                    },                    {                      header : 'Email',                      dataIndex: 'Email'                    }                   ],                  store : Ext.create('Ext.data.Store', {                    fields: [                      'Name',                      'Email'                    ],                    data : [                      {                        Name : 'Joe Bloggs',                        Email: 'joe@example.com'                      },                      {                        Name : 'Jane Doe',                        Email: 'jane@example.com'                      }                    ]                  })                }              ]            }          ]        },        {          xtype : 'component',          itemId : 'footerComponent',          html : 'Footer Information',          extraOptions: {            option1: 'test',            option2: 'test'          },          height : 40        }      ]    }); Queries with Ext.ComponentQuery The Ext.ComponentQuery class is used to perform Component Queries, with the query method primarily used. This method accepts two parameters: a query string and an optional Ext.container.Container instance to use as the root of the selection (that is, only components below this one in the hierarchy will be returned). The method will return an array of components or an empty array if none are found. We will work through a number of scenarios and use Component Queries to find a specific set of components. Finding components based on xtype As we have seen, we use xtypes like element types in CSS selectors. 
We can select all the Ext.panel.Panel instances using its xtype—panel: var panels = Ext.ComponentQuery.query('panel'); We can also add the concept of hierarchy by including a second xtype separated by a space. The following code will select all Ext.Button instances that are descendants (at any level) of an Ext.panel.Panel class: var buttons = Ext.ComponentQuery.query('panel buttons'); We could also use the > character to limit it to buttons that are direct descendants of a panel. var directDescendantButtons = Ext.ComponentQuery.query('panel > button'); Finding components based on attributes It is simple to select a component based on the value of a property. We use the XPath syntax to specify the attribute and the value. The following code will select buttons with an action attribute of saveUser: var saveButtons = Ext.ComponentQuery.query('button[action="saveUser"]); Finding components based on itemIds ItemIds are commonly used to retrieve components, and they are specially optimized for performance within the ComponentQuery class. They should be unique only within their parent container and not globally unique like the id config. To select a component based on itemId, we prefix the itemId with a # symbol: var usersPanel = Ext.ComponentQuery.query('#usersPanel'); Finding components based on member functions It is also possible to identify matching components based on the result of a function of that component. For example, we can select all text fields whose values are valid (that is, when a call to the isValid method returns true): var validFields = Ext.ComponentQuery.query('form > textfield{isValid()}'); Scoped Component Queries All of our previous examples will search the entire component tree to find matches, but often we may want to keep our searches local to a specific container and its descendants. This can help reduce the complexity of the query and improve the performance, as fewer components have to be processed. Ext.Containers have three handy methods to do this: up, down, and query. We will take each of these in turn and explain their features. up This method accepts a selector and will traverse up the hierarchy to find a single matching parent component. This can be useful to find the grid panel that a button belongs to, so an action can be taken on it: var grid = button.up('gridpanel'); down This returns the first descendant component that matches the given selector: var firstButton = grid.down('button'); query The query method performs much like Ext.ComponentQuery.query but is automatically scoped to the current container. This means that it will search all descendant components of the current container and return all matching ones as an array. var allButtons = grid.query('button'); Hierarchical data with trees Now that we know and understand components, their lifecycle, and how to retrieve references to them, we will move on to more specific UI widgets. The tree panel component allows us to display hierarchical data in a way that reflects the data's structure and relationships. In our application, we are going to use a tree panel to represent our navigation structure to allow users to see how the different areas of the app are linked and structured. Binding to a data source Like all other data-bound components, tree panels must be bound to a data store—in this particular case it must be an Ext.data.TreeStore instance or subclass, as it takes advantage of the extra features added to this specialist store class. 
We will make use of the BizDash.store.Navigation TreeStore to bind to our tree panel. Defining a tree panel The tree panel is defined in the Ext.tree.Panel class (which has an xtype of treepanel), which we will extend to create a custom class called BizDash.view.navigation.NavigationTree: Ext.define('BizDash.view.navigation.NavigationTree', { extend: 'Ext.tree.Panel', alias: 'widget.navigation-NavigationTree', store : 'Navigation', columns: [    {      xtype : 'treecolumn',      text : 'Navigation',      dataIndex: 'Label',      flex : 1    } ], rootVisible: false, useArrows : true }); We configure the tree to be bound to our TreeStore by using its storeId, in this case, Navigation. A tree panel is a subclass of the Ext.panel.Table class (similar to the Ext.grid.Panel class), which means it must have a columns configuration present. This tells the component what values to display as part of the tree. In a simple, traditional tree, we might only have one column showing the item and its children; however, we can define multiple columns and display additional fields in each row. This would be useful if we were displaying, for example, files and folders and wanted to have additional columns to display the file type and file size of each item. In our example, we are only going to have one column, displaying the Label field. We do this by using the treecolumn xtype, which is responsible for rendering the tree's navigation elements. Without defining treecolumn, the component won't display correctly. The treecolumn xtype's configuration allows us to define which of the attached data model's fields to use (dataIndex), the column's header text (text), and the fact that the column should fill the horizontal space. Additionally, we set the rootVisible to false, so the data's root is hidden, as it has no real meaning other than holding the rest of the data together. Finally, we set useArrows to true, so the items with children use an arrow instead of the +/- icon. Summary In this article, we have learnt how Ext JS' components fit together and the lifecycle that they follow when created and destroyed. We covered the component lifecycle and Component Queries. Resources for Article: Further resources on this subject: So, what is Ext JS? [article] Function passing [article] Static Data Management [article]