How-To Tutorials

The Software Task Management Tool - Rake

Packt
16 Apr 2014
5 min read
(For more resources related to this topic, see here.)

Installing Rake

As Rake is a Ruby library, you should first install Ruby on the system if you don't have it installed already. The installation process is different for each operating system, but we will only look at the installation example for the Debian operating system family. Just open the terminal and enter the following installation command:

$ sudo apt-get install ruby

If your operating system doesn't have the apt-get utility, or if you have problems with the Ruby installation, please refer to the official instructions at https://www.ruby-lang.org/en/installation. There are a lot of ways to install Ruby, so please choose your operating system from the list on this page and select your preferred installation method.

Rake has been included in the Ruby core since Ruby 1.9, so you don't have to install it as a separate gem. However, if you still use Ruby 1.8 or an older version, you will have to install Rake as a gem. Use the following command to install it:

$ gem install rake

The Ruby release cycle is slower than Rake's, and sometimes you need a newer Rake to work around specific issues. So you can still install Rake as a gem, and in some cases this is a requirement even for Ruby version 1.9 and higher.

To check that it is installed correctly, open your terminal and type the following command:

$ rake --version

This should return the installed Rake version. Another sign that Rake is installed and working correctly is the error that you see after typing the rake command in an empty directory:

$ mkdir ~/test-rake
$ cd ~/test-rake
$ rake
rake aborted!
No Rakefile found (looking for: rakefile, Rakefile, rakefile.rb, Rakefile.rb)
(See full trace by running task with --trace)

Downloading the example code
You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Introducing rake tasks

From the previous error message, it's clear that you first need a Rakefile. As you can see, there are four variants of its name: rakefile, Rakefile, rakefile.rb, and Rakefile.rb. The most commonly used variant is Rakefile, which is also what Rails uses; however, you can choose any variant for your project. There is no convention that prohibits the use of any of the four suggested variants.

A Rakefile is required for any Rake-based project. Although its content usually uses Rake's DSL, it is also an ordinary Ruby file, so you can write any Ruby code in it. Perform the following steps to get started:

Let's create a Rakefile in the current folder that just says Hello Rake, using the following commands:

$ echo "puts 'Hello Rake'" > Rakefile
$ cat Rakefile
puts 'Hello Rake'

Here, the first line creates a Rakefile with the content puts 'Hello Rake', and the second line just shows us its content to make sure that we've done everything correctly. Now, run rake as we tried it before, using the following command:

$ rake
Hello Rake
rake aborted!
Don't know how to build task 'default'
(See full trace by running task with --trace)

The message has changed: it now says Hello Rake before aborting with another error message. At this moment, we have made the first step in learning Rake.

Now, we have to define a default rake task that will be executed when you start Rake without any arguments. To do so, open your editor and change the created Rakefile to the following content:

task :default do
  puts 'Hello, Rake'
end

Now, run rake again:

$ rake
Hello, Rake

The output that says Hello, Rake demonstrates that the task works correctly.

The command-line arguments

The most commonly used rake command-line argument is -T. It shows a list of the available rake tasks that you have already defined. We have defined the default rake task, so if we list all rake tasks, it should be there. However, take a look at what happens in real life using the following command:

$ rake -T

The list is empty. Why? The answer lies within Rake. Run the rake command with the -h option to get the whole list of arguments, and pay attention to the description of the -T option, as shown in the following command-line output:

-T, --tasks [PATTERN]   Display the tasks (matching optional PATTERN) with descriptions, then exit.

You can get more information on Rake in its repository on GitHub at https://github.com/jimweirich/rake.

The word description is the cornerstone here. A description for a rake task is optional; however, it's recommended that you define one, because tasks without descriptions don't show up in the list we just tried to display, and it is inconvenient to read through your Rakefile every time you want to run some rake task. Just accept it as a rule: always leave a description for the defined rake tasks.

Now, add a description to your rake task with the desc method call, as shown in the following lines of code:

desc "Says 'Hello, Rake'"
task :default do
  puts 'Hello, Rake'
end

As you can see, it's rather easy. Run the rake -T command again and you will see an output as shown:

$ rake -T
rake default  # Says 'Hello, Rake'

If you want to list all the tasks, even those that don't have descriptions, you can pass the -A option along with -T to the rake command. The resulting command will look like this: rake -T -A.
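Putting the pieces together, a Rakefile along the lines of this walkthrough could look as follows; the second task is an illustrative addition, not part of the original example:

desc "Says 'Hello, Rake'"
task :default do
  puts 'Hello, Rake'
end

desc 'Prints the Ruby version Rake is running on'
task :ruby_version do
  # RUBY_VERSION is a built-in Ruby constant
  puts RUBY_VERSION
end

With both tasks described, rake -T would then list something like:

$ rake -T
rake default       # Says 'Hello, Rake'
rake ruby_version  # Prints the Ruby version Rake is running on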

Moodle for Online Communities

Packt
14 Apr 2014
9 min read
(For more resources related to this topic, see here.) Now that you're familiar with the ways to use Moodle for different types of courses, it is time to take a look at how groups of people can come together as an online community and use Moodle to achieve their goals. For example, individuals who have the same interests and want to discuss and share information in order to transfer knowledge can do so very easily in a Moodle course that has been set up for that purpose. There are many practical uses of Moodle for online communities. For example, members of an association or employees of a company can come together to achieve a goal and finish a task. In this case, Moodle provides a perfect place to interact, collaborate, and create a final project or achieve a task. Online communities can also be focused on learning and achievement, and Moodle can be a perfect vehicle for encouraging online communities to support each other to learn, take assessments, and display their certificates and badges. Moodle is also a good platform for a Massive Open Online Course (MOOC). In this article, we'll create flexible Moodle courses that are ideal for online communities and that can be modified easily to create opportunities to harness the power of individuals in many different locations to teach and learn new knowledge and skills. In this article, we'll show you the benefit of Moodle and how to use Moodle for the following online communities and purposes: Knowledge-transfer-focused communities Task-focused communities Communities focused on learning and achievement Moodle and online communities It is often easy to think of Moodle as a learning management system that is used primarily by organizations for their students or employees. The community tends to be well defined as it usually consists of students pursuing a common end, employees of a company, or members of an association or society. However, there are many informal groups and communities that come together because they share interests, the desire to gain knowledge and skills, the need to work together to accomplish tasks, and let people know that they've reached milestones and acquired marketable abilities. For example, an online community may form around the topic of climate change. The group, which may use social media to communicate with each other, would like to share information and get in touch with like-minded individuals. While it's true that they can connect via Facebook, Twitter, and other social media formats, they may lack a platform that gives a "one-stop shopping" solution. Moodle makes it easy to share documents, videos, maps, graphics, audio files, and presentations. It also allows the users to interact with each other via discussion forums. Because we can use but not control social networks, it's important to be mindful of security issues. For that reason, Moodle administrators may wish to consider ways to back up or duplicate key posts or insights within the Moodle installation that can be preserved and stored. In another example, individuals may come together to accomplish a specific task. For example, a group of volunteers may come together to organize a 5K run fundraiser for epilepsy awareness. For such a case, Moodle has an array of activities and resources that can make it possible to collaborate in the planning and publicity of the event and even in the creation of post event summary reports and press releases. 
Finally, let's consider a person who may wish to ensure that potential employers know the kinds of skills they possess. They can display the certificates they've earned by completing online courses as well as their badges, digital certificates, mentions in high achievers lists, and other gamified evidence of achievement. There are also the MOOCs, which bring together instructional materials, guided group discussions, and automated assessments. With its features and flexibility, Moodle is a perfect platform for MOOCs. Building a knowledge-based online community For our knowledge-based online community, let's consider a group of individuals who would like to know more about climate change and its impact. To build a knowledge-based online community, the following are the steps we need to perform: Choose a mobile-friendly theme. Customize the appearance of your site. Select resources and activities. Moodle makes it possible for people from all locations and affiliations to come together and share information in order to achieve a common objective. We will see how to do this in the following sections. Choosing the best theme for your knowledge-based Moodle online communities As many of the users in the community access Moodle using smartphones, tablets, laptops, and desktops, it is a good idea to select a theme that is responsive, which means that it will be automatically formatted in order to display properly on all devices. You can learn more about themes for Moodle, review them, find out about the developers, read comments, and then download them at https://moodle.org/plugins/browse.php?list=category&id=3. There are many good responsive themes, such as the popular Buckle theme and the Clean theme, that also allow you to customize them. These are the core and contributed themes, which is to say that they were created by developers and are either part of the Moodle installation or available for free download. If you have Moodle 2.5 or a later version installed, your installation of Moodle includes many responsive themes. If it does not, you will need to download and install a theme. In order to select an installed theme, perform the following steps: In the Site administration menu, click on the Appearance menu. Click on Themes. Click on Theme selector. Click on the Change theme button. Review all the themes. Click on the Use theme button next to the theme you want to choose and then click on Continue. Using the best settings for knowledge-based Moodle online communities There are a number of things you can do to customize the appearance of your site so that it is very functional for knowledge-transfer-based Moodle online communities. The following is a brief checklist of items: Select Topics format under the Course format section in the Course default settings window. By selecting topics, you'll be able to organize your content around subjects. Use the General section, which is included as the first topic in all courses. It has the News forum link. You can use this for announcements highlighting resources shared by the community. Include the name of the main contact along with his/her photograph and a brief biographical sketch in News forum. You'll create the sense that there is a real "go-to" person who is helping guide the endeavor. Incorporate social media to encourage sharing and dissemination of new information. Brief updates are very effective, so you may consider including a Twitter feed by adding your Twitter account as one of your social media sites. 
Even though your main topic of discussion may contain hundreds of subtopics that are of great interest, when you create your Moodle course, it's best to limit the number of subtopics to four or five. If you have too many choices, your users will be too scattered and will not have a chance to connect with each other. Think of your Moodle site as a meeting point. Do you want to have too many breakout sessions and rooms or do you want to have a main networking site? Think of how you would like to encourage users to mingle and interact. Selecting resources and activities for a knowledge-based Moodle online community The following are the items to include if you want to configure Moodle such that it is ideal for individuals who have come together to gain knowledge on a specific topic or problem: Resources: Be sure to include multiple types of files: documents, videos, audio files, and presentations. Activities: Include Quiz and other such activities that allow individuals to test their knowledge. Communication-focused activities: Set up a discussion forum to enable community members to post their thoughts and respond to each other. The key to creating an effective Moodle course for knowledge-transfer-based communities is to give the individual members a chance to post critical and useful information, no matter what the format or the size, and to accommodate social networks. Building a task-based online community Let's consider a group of individuals who are getting together to plan a fundraising event. They need to plan activities, develop materials, and prepare a final report. Moodle can make it fairly easy for people to work together to plan events, collaborate on the development of materials, and share information for a final report. Choosing the best theme for your task-based Moodle online communities If you're using volunteers or people who are using Moodle just for the tasks or completion of tasks, you may have quite a few Moodle "newbies". Since people will be unfamiliar with navigating Moodle and finding the places they need to go, you'll need a theme that is clear, attention-grabbing, and that includes easy-to-follow directions. There are a few themes that are ideal for collaborations and multiple functional groups. We highly recommend the Formal white theme because it is highly customizable from the Theme settings page. You can easily customize the background, text colors, logos, font size, font weight, block size, and more, enabling you to create a clear, friendly, and brand-recognizable site. Formal white is a standard theme, kept up to date, and can be used on many versions of Moodle. You can learn more about the Formal white theme and download it by visiting http://hub.packtpub.com/wp-content/uploads/2014/04/Filetheme_formalwhite.png. In order to customize the appearance of your entire site, perform the following steps: In the Site administration menu, click on Appearance. Click on Themes. Click on Theme settings. Review all the themes settings. Enter the custom information in each box.

Understanding Data Reduction Patterns

Packt
14 Apr 2014
15 min read
(For more resources related to this topic, see here.)

Data reduction – a quick introduction

Data reduction aims to obtain a reduced representation of the data: a dataset that is much smaller in volume than the original, yet preserves its integrity. Data reduction techniques are classified into the following three groups:

Dimensionality reduction: This group of data reduction techniques deals with reducing the number of attributes that are considered for an analytics problem. They do this by detecting and eliminating irrelevant attributes, relevant yet weak attributes, or redundant attributes. Principal component analysis and wavelet transforms are examples of dimensionality reduction techniques.

Numerosity reduction: This group of data reduction techniques reduces the data by replacing the original dataset with a sparser representation of the data. The sparse subset of the data is computed by parametric methods such as regression, where a model is used to estimate the data so that only a subset is enough instead of the entire dataset. There are also nonparametric methods, such as clustering, sampling, and histograms, which work without the need for a model to be built.

Compression: This group of data reduction techniques uses algorithms to reduce the size of the physical storage that the data consumes. Typically, compression is performed at a higher level of granularity than at the attribute or record level. If you need to retrieve the original data from the compressed data without any loss of information, which is required while storing string or numerical data, a lossless compression scheme is used. If, instead, you need to compress video and sound files that can accommodate an imperceptible loss of clarity, lossy compression techniques are used.

The following diagram illustrates the different techniques that are used in each of the aforementioned groups (figure: Data reduction techniques – overview).

Data reduction considerations for Big Data

In Big Data problems, data reduction techniques have to be considered as part of the analytics process rather than as a separate process. This will enable you to understand what type of data has to be retained or eliminated due to its irrelevance to the analytics-related questions that are asked. In a typical Big Data analytical environment, data is often acquired and integrated from multiple sources. Even though there is the promise of a hidden reward for using the entire dataset for the analytics, which in all probability may yield richer and better insights, the cost of doing so sometimes outweighs the results. It is at this juncture that you may have to consider reducing the amount of data without drastically compromising the effectiveness of the analytical insights, in essence safeguarding the integrity of the data.

Performing any type of analysis on Big Data often leads to high storage and retrieval costs owing to the massive amount of data. The benefits of data reduction processes are sometimes not evident when the data is small; they begin to become obvious when the datasets start growing in size. These data reduction processes are one of the first steps taken to optimize data from the storage and retrieval perspective. It is important to consider the ramifications of data reduction so that the computational time spent on it does not outweigh or erase the time saved by data mining on a reduced dataset.
Now that we have understood data reduction concepts, we will explore a few concrete design patterns in the following sections. Dimensionality reduction – the Principal Component Analysis design pattern In this design pattern, we will consider one way of implementing the dimensionality reduction through the usage of Principal Component Analysis (PCA) and Singular value decomposition (SVD), which are versatile techniques that are widely used for exploratory data analysis, creating predictive models, and for dimensionality reduction. Background Dimensions in a given data can be intuitively understood as a set of all attributes that are used to account for the observed properties of data. Reducing the dimensionality implies the transformation of a high dimensional data into a reduced dimension's set that is proportional to the intrinsic or latent dimensions of the data. These latent dimensions are the minimum number of attributes that are needed to describe the dataset. Thus, dimensionality reduction is a method to understand the hidden structure of data that is used to mitigate the curse of high dimensionality and other unwanted properties of high dimensional spaces. Broadly, there are two ways to perform dimensionality reduction; one is linear dimensionality reduction for which PCA and SVD are examples. The other is nonlinear dimensionality reduction for which kernel PCA and Multidimensional Scaling are examples. In this design pattern, we explore linear dimensionality reduction by implementing PCA in R and SVD in Mahout and integrating them with Pig. Motivation Let's first have an overview of PCA. PCA is a linear dimensionality reduction technique that works unsupervised on a given dataset by implanting the dataset into a subspace of lower dimensions, which is done by constructing a variance-based representation of the original data. The underlying principle of PCA is to identify the hidden structure of the data by analyzing the direction where the variation of data is the most or where the data is most spread out. Intuitively, a principal component can be considered as a line, which passes through a set of data points that vary to a greater degree. If you pass the same line through data points with no variance, it implies that the data is the same and does not carry much information. In cases where there is no variance, data points are not considered as representatives of the properties of the entire dataset, and these attributes can be omitted. PCA involves finding pairs of eigenvalues and eigenvectors for a dataset. A given dataset is decomposed into pairs of eigenvectors and eigenvalues. An eigenvector defines the unit vector or the direction of the data perpendicular to the others. An eigenvalue is the value of how spread out the data is in that direction. In multidimensional data, the number of eigenvalues and eigenvectors that can exist are equal to the dimensions of the data. An eigenvector with the biggest eigenvalue is the principal component. After finding out the principal component, they are sorted in the decreasing order of eigenvalues so that the first vector shows the highest variance, the second shows the next highest, and so on. This information helps uncover the hidden patterns that were not previously suspected and thereby allows interpretations that would not result ordinarily. 
As the data is now sorted in the decreasing order of significance, the data size can be reduced by eliminating the attributes with a weak component, or low significance where the variance of data is less. Using the highly valued principal components, the original dataset can be constructed with a good approximation. As an example, consider a sample election survey conducted on a hundred million people who have been asked 150 questions about their opinions on issues related to elections. Analyzing a hundred million answers over 150 attributes is a tedious task. We have a high dimensional space of 150 dimensions, resulting in 150 eigenvalues/vectors from this space. We order the eigenvalues in descending order of significance (for example, 230, 160, 130, 97, 62, 8, 6, 4, 2,1… up to 150 dimensions). As we can decipher from these values, there can be 150 dimensions, but only the top five dimensions possess the data that is varying considerably. Using this, we were able to reduce a high dimensional space of 150 and could consider the top five eigenvalues for the next step in the analytics process. Next, let's look into SVD. SVD is closely related to PCA, and sometimes both terms are used as SVD, which is a more general method of implementing PCA. SVD is a form of matrix analysis that produces a low-dimensional representation of a high-dimensional matrix. It achieves data reduction by removing linearly dependent data. Just like PCA, SVD also uses eigenvalues to reduce the dimensionality by combining information from several correlated vectors to form basis vectors that are orthogonal and explains most of the variance in the data. For example, if you have two attributes, one is sale of ice creams and the other is temperature, then their correlation is so high that the second attribute, temperature, does not contribute any extra information useful for a classification task. The eigenvalues derived from SVD determines which attributes are most informative and which ones you can do without. Mahout's Stochastic SVD (SSVD) is based on computing mathematical SVD in a distributed fashion. SSVD runs in the PCA mode if the pca argument is set to true; the algorithm computes the column-wise mean over the input and then uses it to compute the PCA space. Use cases You can consider using this pattern to perform data reduction, data exploration, and as an input to clustering and multiple regression. The design pattern can be applied on ordered and unordered attributes with sparse and skewed data. It can also be used on images. This design pattern cannot be applied on complex nonlinear data. Pattern implementation The following steps describe the implementation of PCA using R: The script applies the PCA technique to reduce dimensions. PCA involves finding pairs of eigenvalues and eigenvectors for a dataset. An eigenvector with the biggest eigenvalue is the principal component. The components are sorted in the decreasing order of eigenvalues. The script loads the data and uses streaming to call the R script. The R script performs PCA on the data and returns the principal components. Only the first few principal components that can explain most of the variation can be selected so that the dimensionality of the data is reduced. Limitations of PCA implementation While streaming allows you to call the executable of your choice, it has performance implications, and the solution is not scalable in situations where your input dataset is huge. 
To overcome this, we have shown a better way of performing dimensionality reduction by using Mahout; it contains a set of highly scalable machine learning libraries. The following steps describe the implementation of SSVD on Mahout:

Read the input dataset in the CSV format and prepare a set of data points in the form of key/value pairs; the key should be unique and the value should comprise n vector tuples.

Write the previous data into a sequence file. The key can be of a type adapted into WritableComparable, Long, or String, and the value should be of the VectorWritable type.

Decide on the number of dimensions in the reduced space.

Execute SSVD on Mahout with the rank argument (this specifies the number of dimensions), setting pca, us, and V to true. When the pca argument is set to true, the algorithm runs in the PCA mode by computing the column-wise mean over the input and then using it to compute the PCA space.

The USigma folder contains the output with reduced dimensions.

Generally, dimensionality reduction is applied on very high dimensional datasets; however, in our example, we have demonstrated this on a dataset with fewer dimensions for better explainability.

Code snippets

To illustrate the working of this pattern, we have considered the retail transactions dataset that is stored on the Hadoop File System (HDFS). It contains 20 attributes, such as Transaction ID, Transaction date, Customer ID, Product subclass, Phone No, Product ID, age, quantity, asset, Transaction Amount, Service Rating, Product Rating, and Current Stock. For this pattern, we will be using PCA to reduce the dimensions. The following code snippet is the Pig script that illustrates the implementation of this pattern via Pig streaming:

/*
Assign an alias pcar to the streaming command
Use ship to send streaming binary files (R script in this use case)
from the client node to the compute node
*/
DEFINE pcar '/home/cloudera/pdp/data_reduction/compute_pca.R'
  ship('/home/cloudera/pdp/data_reduction/compute_pca.R');

/* Load the data set into the relation transactions */
transactions = LOAD '/user/cloudera/pdp/datasets/data_reduction/transactions_multi_dims.csv'
  USING PigStorage(',') AS (transaction_id:long, transaction_date:chararray,
  customer_id:chararray, prod_subclass:chararray, phone_no:chararray,
  country_code:chararray, area:chararray, product_id:chararray, age:int,
  amt:int, asset:int, transaction_amount:double, service_rating:int,
  product_rating:int, curr_stock:int, payment_mode:int, reward_points:int,
  distance_to_store:int, prod_bin_age:int, cust_height:int);

/*
Extract the columns on which PCA has to be performed.
STREAM is used to send the data to the external script.
The result is stored in the relation princ_components
*/
selected_cols = FOREACH transactions GENERATE age AS age, amt AS amount,
  asset AS asset, transaction_amount AS transaction_amount,
  service_rating AS service_rating, product_rating AS product_rating,
  curr_stock AS current_stock, payment_mode AS payment_mode,
  reward_points AS reward_points, distance_to_store AS distance_to_store,
  prod_bin_age AS prod_bin_age, cust_height AS cust_height;

princ_components = STREAM selected_cols THROUGH pcar;

/* The results are stored on the HDFS in the directory pca */
STORE princ_components INTO '/user/cloudera/pdp/output/data_reduction/pca';

Following is the R code illustrating the implementation of this pattern:

#!/usr/bin/env Rscript
options(warn=-1)

#Establish connection to stdin for reading the data
con <- file("stdin","r")

#Read the data as a data frame
data <- read.table(con, header=FALSE,
  col.names=c("age", "amt", "asset", "transaction_amount", "service_rating",
  "product_rating", "current_stock", "payment_mode", "reward_points",
  "distance_to_store", "prod_bin_age", "cust_height"))
attach(data)

#Calculate covariance and correlation to understand
#the variation between the independent variables
covariance=cov(data, method=c("pearson"))
correlation=cor(data, method=c("pearson"))

#Calculate the principal components
pcdat=princomp(data)
summary(pcdat)
pcadata=prcomp(data, scale = TRUE)
pcadata

The ensuing code snippets illustrate the implementation of this pattern using Mahout's SSVD. The following is a snippet of a shell script with the commands for executing the CSV to sequence file converter:

#All the mahout jars have to be included in HADOOP_CLASSPATH before execution of this script.
#Execute csvtosequenceconverter jar to convert the CSV file to sequence file.
hadoop jar csvtosequenceconverter.jar com.datareduction.CsvToSequenceConverter /user/cloudera/pdp/datasets/data_reduction/transactions_multi_dims_ssvd.csv /user/cloudera/pdp/output/data_reduction/ssvd/transactions.seq

The following is the code snippet of the Pig script with commands for executing SSVD on Mahout:

/* Register piggybank jar file */
REGISTER '/home/cloudera/pig-0.11.0/contrib/piggybank/java/piggybank.jar';

/*
*Ideally the following data pre-processing steps have to be generally
 performed on the actual data; we have deliberately omitted the
 implementation as these steps were covered in the respective chapters
*Data Ingestion to ingest data from the required sources
*Data Profiling by applying statistical techniques to profile data and find data quality issues
*Data Validation to validate the correctness of the data and cleanse it accordingly
*Data Transformation to apply transformations on the data.
*/

/*
Use sh command to execute shell commands.
Convert the files in a directory to sequence files
-i specifies the input path of the sequence file on HDFS
-o specifies the output directory on HDFS
-k specifies the rank, i.e. the number of dimensions in the reduced space
-us set to true computes the product USigma
-V set to true computes V matrix
-pca set to true runs SSVD in pca mode
*/
sh /home/cloudera/mahout-distribution-0.8/bin/mahout ssvd -i /user/cloudera/pdp/output/data_reduction/ssvd/transactions.seq -o /user/cloudera/pdp/output/data_reduction/ssvd/reduced_dimensions -k 7 -us true -V true -U false -pca true -ow -t 1

/*
Use seqdumper to dump the output in text format.
-i specifies the HDFS path of the input file
*/
sh /home/cloudera/mahout-distribution-0.8/bin/mahout seqdumper -i /user/cloudera/pdp/output/data_reduction/ssvd/reduced_dimensions/V/v-m-00000

Results

The following is a snippet of the result of executing the R script through Pig streaming. Only the important components in the results are shown to improve readability.

Importance of components:
                             Comp.1       Comp.2        Comp.3
Standard deviation     1415.7219657  548.8220571  463.15903326
Proportion of Variance    0.7895595    0.1186566    0.08450632
Cumulative Proportion     0.7895595    0.9082161    0.99272241

The following diagram shows a graphical representation of the results (figure: PCA output).

From the cumulative results, we can explain most of the variation with the first three components. Hence, we can drop the other components and still explain most of the data, thereby achieving data reduction.
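The script above stops at printing the prcomp output. As a small follow-on sketch (an addition, not from the original text), the reduced dataset itself could be obtained in R by keeping only the first few components; k and reduced are illustrative names:

#Hedged sketch: keep only the first k principal components
#(continues from the pcadata=prcomp(data, scale = TRUE) call above)
k <- 3
reduced <- pcadata$x[, 1:k]             #data projected onto the first k components
summary(pcadata)$importance[3, 1:k]     #cumulative proportion of variance retained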
The following is a code snippet of the result attained after applying SSVD on Mahout: Key: 0: Value: {0:6.78114976729216E-5,1:-2.1865954292525495E-4,2:-3.857078959222571E-5,3:9.172780131217343E-4,4:-0.0011674781643860148,5:-0.5403803571549012,6:0.38822546035077155} Key: 1: Value: {0:4.514870142377153E-6,1:-1.2753047299542729E-5,2:0.002010945408634006,3:2.6983823401328314E-5,4:-9.598021198119562E-5,5:-0.015661212194480658,6:-0.00577713052974214} Key: 2: Value: {0:0.0013835831436886054,1:3.643672803676861E-4,2:0.9999962672043754,3:-8.597640675661196E-4,4:-7.575051881399296E-4,5:2.058878196540628E-4,6:1.5620427291943194E-5} . . Key: 11: Value: {0:5.861358116239576E-4,1:-0.001589570485260711,2:-2.451436184622473E-4,3:0.007553283166922416,4:-0.011038688645296836,5:0.822710349440101,6:0.060441819443160294} The contents of the V folder show the contribution of the original variables to every principal component. The result is a 12 x 7 matrix as we have 12 dimensions in our original dataset, which were reduced to 7, as specified in the rank argument to SSVD. The USigma folder contains the output with reduced dimensions.

The Fabric library – the deployment and development task manager

Packt
14 Apr 2014
4 min read
(For more resources related to this topic, see here.)

Essentially, Fabric is a tool that allows the developer to execute arbitrary Python functions via the command line, and it also provides a set of functions for executing shell commands on remote servers via SSH. Combining these two things offers developers a powerful way to administer the application workflow without having to remember the series of commands that need to be executed on the command line. The library documentation can be found at http://fabric.readthedocs.org/.

Installing the library in PTVS is straightforward. Like all other libraries, to add this library to a Django project, right-click on the Python 2.7 node in Python Environments of the Solution Explorer window. Then, select the Install Python Package entry (figure: The Python environment contextual menu). Clicking on it brings up the Install Python Package modal window, as shown in the following screenshot. It's important to use easy_install to download from the Python Package Index. This will bring the precompiled versions of the library into the system instead of the plain Python C libraries that would have to be compiled on the system.

Once the package is installed in the system, you can start creating tasks that can be executed outside your application from the command line. First, create a configuration file, fabfile.py, for Fabric. This file contains the tasks that Fabric will execute. The previous screenshot shows a really simple task: it prints out the string hello world once it's executed. You can execute it from the command prompt by using the Fabric command fab, as shown in the following screenshot.

Now that you know that the system is working fine, you can move on to the juicy part, where you can make some tasks that interact with a remote server through ssh. Create a task that connects to a remote machine and finds out the type of OS that runs on it. The env object provides a way to add credentials to Fabric in a programmatic way. We have defined a Python function, host_type, that runs a POSIX command, uname -s, on the remote machine. We also set up a couple of variables to tell Fabric which remote machine we are connecting to, i.e. env.hosts, and the password that has to be used to access that machine, i.e. env.password. It's never a good idea to put plain passwords into the source code, as is shown in the preceding screenshot example.

Now, we can execute the host_type task from the command line. The Fabric library connects to the remote machine with the information provided and executes the command on the server. Then, it brings back the result of the command itself in the output part of the response.

We can also create tasks that accept parameters from the command line. Create a task that echoes a message on the remote machine, starting with a parameter, as shown in the following screenshot; there are two ways the task can then be executed. We can also create a helper function that executes an arbitrary command on the remote machine as follows:

def execute(cmd):
    run(cmd)

We are also able to upload a file to the remote server by using put: the first argument of put is the local file you want to upload, and the second one is the destination folder's filename (figure: Deploying process with Fabric).

The possibilities of using Fabric are really endless, since the tasks can be written in plain Python language.
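Since the code in this excerpt lives mostly in screenshots that are not reproduced here, the following is a minimal sketch of what such a fabfile.py could look like with Fabric 1.x; the host address, user, password, and file paths are placeholders rather than values from the original article:

from fabric.api import env, run, put

env.hosts = ['192.168.1.100']   # placeholder remote host
env.user = 'deploy'             # placeholder user
env.password = 'secret'         # for illustration only; avoid plain passwords in real code

def hello():
    print('hello world')

def host_type():
    # Runs a POSIX command on the remote machine over SSH
    run('uname -s')

def echo_message(message):
    # Accepts a parameter from the command line, e.g.: fab echo_message:"hi there"
    run('echo %s' % message)

def execute(cmd):
    # Helper that executes an arbitrary command on the remote machine
    run(cmd)

def upload():
    # put(local_path, remote_path)
    put('requirements.txt', '/tmp/requirements.txt')

These tasks would then be invoked with commands such as fab hello or fab host_type.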
This provides the opportunity to automate many operations and focus more on the development instead of focusing on how to deploy your code to servers to maintain them. Summary This article provided you with an in-depth look at remote task management and schema migrations using the third-party Python library Fabric. Resources for Article: Further resources on this subject: Through the Web Theming using Python [Article] Web Scraping with Python [Article] Python Data Persistence using MySQL [Article]

Creating a 3D world to roam in

Packt
11 Apr 2014
5 min read
(For more resources related to this topic, see here.)

We may be able to create models and objects within our 3D space, as well as generate backgrounds, but we may also want to create a more interesting environment within which to place them. 3D terrain maps provide an elegant way to define very complex landscapes. The terrain is defined using a grayscale image to set the elevation of the land. The following example shows how we can define our own landscape and simulate flying over it, or even walk on its surface (figure: A 3D landscape generated from a terrain map).

Getting ready

You will need to place the Map.png file in the pi3d/textures directory of the Pi3D library. Alternatively, you can use one of the elevation maps already present—replace the reference to Map.png for another one of the elevation maps, such as testislands.jpg.

How to do it…

Create the following 3dWorld.py script:

#!/usr/bin/python3
from __future__ import absolute_import, division
from __future__ import print_function, unicode_literals
""" An example of generating a 3D environment using a elevation map
"""
from math import sin, cos, radians
import demo
import pi3d

DISPLAY = pi3d.Display.create(x=50, y=50)
#capture mouse and key presses
inputs = pi3d.InputEvents()

def limit(value, min, max):
    if (value < min):
        value = min
    elif (value > max):
        value = max
    return value

def main():
    CAMERA = pi3d.Camera.instance()
    tex = pi3d.Texture("textures/grass.jpg")
    flatsh = pi3d.Shader("uv_flat")
    # Create elevation map
    mapwidth, mapdepth, mapheight = 200.0, 200.0, 50.0
    mymap = pi3d.ElevationMap("textures/Map.png",
                              width=mapwidth, depth=mapdepth,
                              height=mapheight,
                              divx=128, divy=128, ntiles=20)
    mymap.set_draw_details(flatsh, [tex], 1.0, 1.0)
    rot = 0.0       # rotation of camera
    tilt = 0.0      # tilt of camera
    height = 20
    viewhight = 4
    sky = 200
    xm, ym, zm = 0.0, height, 0.0
    onGround = False
    # main display loop
    while DISPLAY.loop_running() and not inputs.key_state("KEY_ESC"):
        inputs.do_input_events()
        #Note: Some mice devices will be located on
        #get_mouse_movement(1) or (2) etc.
        mx, my, mv, mh, md = inputs.get_mouse_movement()
        rot -= (mx)*0.2
        tilt -= (my)*0.2
        CAMERA.reset()
        CAMERA.rotate(-tilt, rot, 0)
        CAMERA.position((xm, ym, zm))
        mymap.draw()
        if inputs.key_state("KEY_W"):
            xm -= sin(radians(rot))
            zm += cos(radians(rot))
        elif inputs.key_state("KEY_S"):
            xm += sin(radians(rot))
            zm -= cos(radians(rot))
        elif inputs.key_state("KEY_R"):
            ym += 2
            onGround = False
        elif inputs.key_state("KEY_T"):
            ym -= 2
        ym -= 0.1   #Float down!
        #Limit the movement
        xm = limit(xm, -(mapwidth/2), mapwidth/2)
        zm = limit(zm, -(mapdepth/2), mapdepth/2)
        if ym >= sky:
            ym = sky
        #Check onGround
        ground = mymap.calcHeight(xm, zm) + viewhight
        if (onGround == True) or (ym <= ground):
            ym = mymap.calcHeight(xm, zm) + viewhight
            onGround = True

try:
    main()
finally:
    inputs.release()
    DISPLAY.destroy()
    print("Closed Everything. END")
#End

How it works…

Once we have defined the display, camera, textures, and shaders that we are going to use, we can define the ElevationMap object. It works by assigning a height to the terrain image based on the pixel value of selected points of the image. For example, a single line of an image will provide a slice of the ElevationMap object and a row of elevation points on the 3D surface.
(figure: Mapping the Map.png pixel shade to the terrain height)

We create an ElevationMap object by providing the filename of the image we will use for the gradient information (textures/Map.png), and we also specify the dimensions of the map (width, depth, and height—which is how high the white spaces will be compared to the black spaces). The light parts of the map will create high points and the dark ones will create low points. The Map.png texture provides an example terrain map, which is converted into a three-dimensional surface.

We also specify divx and divy, which determine how much detail of the terrain map is used (how many points from the terrain map are used to create the elevation surface). Finally, ntiles specifies that the texture used will be scaled to fit 20 times across the surface.

Within the main DISPLAY.loop_running() section, we control the camera, draw the ElevationMap, respond to inputs, and limit movements in our space. As before, we use an InputEvents object to capture mouse movements and translate them to control the camera. We also use inputs.key_state() to determine whether W, S, R, or T has been pressed, which allows us to move forward and backwards, as well as rise up and down.

To ensure that we do not fall through the ElevationMap object when we move over it, we can use mymap.calcHeight() to provide us with the height of the terrain at a specific location (x, y, z). We can either follow the ground by setting the camera height to equal this, or fly through the air by just ensuring that we never go below it. When we detect that we are on the ground, we remain on the ground until we press R to rise again.

Summary

In this article, we created a 3D world by covering how to define landscapes, the use of elevation maps, and the script required to respond to particular inputs and control movements in a space.

Resources for Article: Further resources on this subject: Testing Your Speed [Article], Pulse width modulator [Article], Web Scraping with Python [Article]

A Quick Start Guide to Scratch 2.0

Packt
10 Apr 2014
6 min read
(For more resources related to this topic, see here.) The anticipation of learning a new programming language can sometimes leave us frozen on the starting line, not knowing what to expect or where to start. Together, we'll take our first steps into programming with Scratch, and block-by-block, we'll create our first animation. Our work in this article will focus on getting ourselves comfortable with some fundamental concepts before we create projects in the rest of the book. Joining the Scratch community If you're planning to work with the online project editor on the Scratch website, I highly recommend you set up an account on scratch.mit.edu so that you can save your projects. If you're going to be working with the offline editor, then there is no need to create an account on the Scratch website to save your work; however, you will be required to create an account to share a project or participate in the community forums. Let's take a moment to set up an account and point out some features of the main account. That way, you can decide if creating an online account is right for you or your children at this time. Time for action – creating an account on the Scratch website Let's walk through the account creation process, so we can see what information is generally required to create a Scratch account. Open a web browser and go to http://scratch.mit.edu, and click on the link titled Join Scratch. At the time of writing this book, you will be prompted to pick a username and a password, as shown in the following screenshot. Select a username and password. If the name is taken, you'll be prompted to enter a new username. Make sure you don't use your real name. This is shown in the following screenshot: After you enter a username and password, click on Next. Then, you'll be prompted for some general demographic information, including the date of birth, gender, country, and e-mail address, as shown in the following screenshot. All fields need to be filled in. After entering all the information, click on Next. The account is now created, and you receive a confirmation screen as shown in the following screenshot: Click on the OK Let's Go! button to log in to Scratch and go to your home page. What just happened? Creating an account on the Scratch website generally does not require a lot of detailed information. The Scratch team has made an effort to maximize privacy. They strongly discourage the use of real names in user names, and for children, this is probably a wise decision. The birthday information is not publicized and is used as an account verification step while resetting passwords. The e-mail address is also not publicized and is used to reset passwords. The country and gender information is also not publically displayed and is generally just used by Scratch to identify the users of Scratch. For more information on Scratch and privacy, visit: http://scratch.mit.edu/help/faq/#privacy. Time for action – understanding the key features of your account When we log in to the Scratch website, we see our home page, as shown in the following screenshot: All the projects we create online will be saved to My Stuff. You can go to this location by clicking on the folder icon with the S on it, next to the account avatar, at the top of the page. The following screenshot shows my projects: Next to the My Stuff icon in the navigation pane is Messages, which is represented by a letter icon. This is where you'll find notifications of comments and activity on your shared projects. 
Clicking on this icon displays a list of messages. The next primary community feature available to the subscribed users is the Discuss page. The Discuss page shows a list of forums and topics that can be viewed by anyone; however, an account is required to be able to post on the forums or topics. What just happened? A Scratch account provides users with four primary features when they view the website: saving projects, sharing projects, receiving notifications, and participating in community discussions. When we view our saved projects in the My Stuff page, as we can see in the previous screenshot, we have the ability to See inside the project to edit it, share it, or delete it. Abiding by the terms of use It's important that we take a few moments to read the terms of use policy so that we know what the community expects from us. Taken directly from Scratch's terms of use, the major points are: Be respectful Offer constructive comments Share and give credit Keep your personal information private Help keep the site friendly Creating projects under Creative Commons licenses Every work published on the Scratch website is shared under the Attribution-ShareAlike license. That doesn't mean you can surf the web and use copyrighted images in your work. Rather, the Creative Commons licensing ensures the collaboration objective of Scratch by making it easy for anyone to build upon what you do. When you look inside an existing project and begin to change it, the project keeps a remix tree, crediting the original sources of the work. A shout out to the original author in your projects would also be a nice way to give credit. For more information about the Creative Commons Attribution-ShareAlike license, visit http://creativecommons.org/licenses/by-sa/3.0/. Closely related to the licensing of Scratch projects is the understanding that you as a web user can not inherently browse the web, find media files, incorporate them into your project, and then share the project for everyone. Respect the copyrights of other people. To this end, the Scratch team enforces the Digital Millennium Copyright Act (DMCA), which protects the intellectual rights and copyrights of others. More information on this is available at http://scratch.mit.edu/DMCA. Finding free media online As we'll see throughout the book, Scratch provides libraries of media, including sounds and images that are freely available for use in our Scratch projects. However, we may find instances where we want to incorporate a broader range of media into our projects. A great search page to find free media files is http://search.creativecommons.org. Taking our first steps in Scratch From this point forward, we're going to be project editor agnostic, meaning you may choose to use the online project editor or the offline editor to work through the projects. When we encounter software that's unfamiliar to us, it's common to wonder, "Where do I begin?". The Scratch interface looks friendly enough, but the blank page can be a daunting thing to overcome. The rest of this article will be spent on building some introductory projects to get us comfortable with the project editor. If you're not already on the Scratch site, go to http://scratch.mit.edu and let's get started.

Advanced SOQL Statements

Packt
10 Apr 2014
4 min read
(For more resources related to this topic, see here.)

Relationship queries

Relationship queries are mainly used to query records from one or more objects in a single SOQL statement in Salesforce.com. We cannot query the records from more than one object without having a relationship between the objects.

Filtering multiselect picklist values

The INCLUDES and EXCLUDES operators are used to filter multiselect picklist fields. A multiselect picklist field in Salesforce allows the user to select more than one value from the list of values provided.

Sorting in both the ascending and descending orders

Sometimes we may need to sort the records we fetch using a SOQL statement on two fields: one field in the ascending order and another field in the descending order. The following sample query will help us achieve this easily:

SELECT Name, Industry FROM Account ORDER BY Name ASC, Industry DESC

Using the preceding SOQL query, the accounts will first be sorted by Name in the ascending order and then by Industry in the descending order. The following screenshot shows the output of the SOQL execution: first, the records are arranged in the ascending order of the account's Name, and then they are sorted by Industry in the descending order.

Using the GROUP BY ROLLUP clause

The GROUP BY ROLLUP clause is used to add subtotals for aggregated data in query results. A query with a GROUP BY ROLLUP clause returns the same aggregated data as an equivalent query with a GROUP BY clause, but it also returns multiple levels of subtotal rows. You can include up to three fields in a comma-separated list in a GROUP BY ROLLUP clause.

Using the FOR REFERENCE clause

The FOR REFERENCE clause is used to track the date/time when a record was last referenced while executing a SOQL query. The LastReferencedDate field is updated for any retrieved records.

Using the FOR VIEW clause

The FOR VIEW clause is used to track the date when a record was last viewed while executing a SOQL query. The LastViewedDate field is updated for any retrieved records.

Using the GROUP BY CUBE clause

The GROUP BY CUBE clause is used to add subtotals for every possible combination of the grouped fields in the query results. It can be used with aggregate functions such as SUM() and COUNT(fieldName). A SOQL query with a GROUP BY CUBE clause retrieves the same aggregated records as an equivalent query with a GROUP BY clause, but it also retrieves additional subtotal rows for each combination of fields specified in the comma-separated grouping list, as well as the grand total.

Using the OFFSET clause

The OFFSET clause is used to specify the starting row number from which the records will be fetched. It is very useful when implementing pagination in a Visualforce page; the OFFSET clause, along with LIMIT, helps retrieve a subset of the records. OFFSET usage in SOQL has many limitations and restrictions.

Summary

In this article, we saw how to query records from more than one object using relationship queries. The steps to get the relationship name among objects were also provided. Querying records using both standard and custom relationships was also discussed.
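For quick reference, the clauses discussed above look roughly like the following in practice; Account, Lead, and Contact are standard objects, while Languages__c is a hypothetical custom multiselect picklist field used purely for illustration:

SELECT Id, Name FROM Contact WHERE Languages__c INCLUDES ('English;French', 'Spanish')

SELECT LeadSource, COUNT(Name) cnt FROM Lead GROUP BY ROLLUP(LeadSource)

SELECT Id, Name FROM Account ORDER BY Name LIMIT 10 FOR REFERENCE

SELECT Id, Name FROM Account ORDER BY Name LIMIT 10 FOR VIEW

SELECT Type, BillingCountry, COUNT(Id) cnt FROM Account GROUP BY CUBE(Type, BillingCountry)

SELECT Name FROM Account ORDER BY Name LIMIT 10 OFFSET 20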
Resources for Article: Further resources on this subject: Learning to Fly with Force.com [Article] Working with Home Page Components and Custom Links [Article] Salesforce CRM Functions [Article]

Important Features of Gitolite

Packt
08 Apr 2014
6 min read
(For more resources related to this topic, see here.) Access Control example with Gitolite We will see how simple Access Control can be with Gitolite. First, here's an example where the junior developers (let's call them Alice and Bob here) should be prevented from rewinding or deleting any branches, while the senior developers (Carol and David) are allowed to do so: Gitolite uses a plain text file to specify the configuration, and these access rules are placed in that file. repo foo   RW    =  alice bob   RW+   =  carol david You probably guessed that the RW stands for read and write. The + in the second rule stands for force, just as it does in the push command, and allows you to rewind or delete a branch. Now, suppose we want the junior developers to have some specific set of branches that they should be allowed to rewind or delete, a sort of "sandbox", if you will. The following command will help you to implement that: RW+  sandbox/  =  alice bob Alice and Bob can now push, rewind, or delete any branches whose names start with sandbox/. Access Control at the repository level is even easier, and you may even have guessed what that looks like: repo foo     RW+     =   alice     R       =   bob repo bar     RW+     =   bob     R       =   alice repo baz     RW+     =   carol     R       =   alice bob As you can see, you have three users with different access permissions for each of the three repositories. Doing this using the file systems' permissions mechanisms or POSIX ACLs would be doable, but quite cumbersome to set up and to audit/review. Sampling of Gitolite's power features The access control examples show the most commonly used feature of Gitolite, the repository and branch level access control, but of course Gitolite has many more features. In this article, we will briefly look at a few of them. Creating groups Gitolite allows you to create groups of users or repositories for convenience. Think back to Alice and Bob, our junior developers. Let's say you had several rules that Alice and Bob needed to be mentioned in. Clearly, this is too cumbersome; every time a new developer joined the team, you'd have to change all the rules to add him or her. Gitolite lets you do this by using the following command: @junior-devs    =  alice bob Later, it lets you do this by using the following command: repo foo   RW                       =  @junior-devs   RW+                      =  carol david   RW+  sandbox/            =  @junior-devs This allows you to add the junior developer in just one place at the top of the configuration file instead of potentially several places all over. More importantly, from the administrator's point of view, it serves as excellent documentation for the rules themselves; isn't it easier to reason about the rules when a descriptive group name is used rather than actual usernames? Personal branches Gitolite allows the administrator to give each developer a unique set of branches, called personal branches, that only he or she can create, push, or delete. This is a very convenient way to allow quick backups of work-in-progress branches, or share code for preliminary review. We saw how the sandbox area was defined:   RW+  sandbox/  =  alice bob However, this does nothing to prevent one junior developer from accidentally wiping out another's branches. For example, Alice could delete a branch called sandbox/bob/work that Bob may have pushed. 
You can use the special word USER as a directory name to solve this problem:   RW+  sandbox/USER/  =  alice bob This works as if you had specified each user individually, like this:   RW+  sandbox/alice/   =  alice   RW+  sandbox/bob/     =  bob Now, the set of branches that Alice is allowed to push is limited to those starting with sandbox/alice/, and she can no longer push or delete a branch called, say, sandbox/bob/work. Personal repositories With Gitolite, the administrator can choose to let the user create their own repositories, in addition to the ones that the administrator themselves creates. For this example, ignore the syntax and just focus on the functionality: repo dev/CREATOR/[a-z].*   C       =  @staff   RW+     =  CREATOR This allows members of the @staff group to create repositories whose names match the pattern supplied, which just means dev/<username>/<anything starting with a lowercase alphabetic character>. For example, a user called alice will be able to create repositories such as dev/alice/foo and dev/alice/bar. Gitolite and the Git control flow Conceptually, Gitolite is a very simple program. To see how it controls access to a Git repository, let us first look at how control flows from the client to the server in a normal git operation (say git fetch) when using plain ssh : When the user executes a git clone, fetch, or push, the Git client invokes ssh, passing it a command (either git-upload-pack or git-receive-pack, depending on whether the user is reading or writing). The local ssh client passes this to the server, and assuming authentication succeeds, that command gets executed on the server. With Gitolite installed, the ssh daemon does not invoke the git-upload-pack or git-receive-pack directly. Instead, it calls a program called gitolite-shell, which changes the control flow as follows: First, notice that nothing changes on the Git client side in anyway; the changes are only on the server side. In fact, unless an access violation happens and an error message needs to be sent to the user, the user may not even know that Gitolite is installed! Second, notice the red link from Gitolite's shell program to the git-upload-pack program. This call does not happen if Gitolite determines that the user does not have the appropriate access to the repo concerned. This access check happens for both read (that is, git fetch and git clone commands) and write (git push) operations; although for writes, there are more checks that happen later. Summary In this article, we learned about Access control with Gitolite. We also went through sampling of Gitolite's power features. We also covered the Git control flow. Resources for Article: Further resources on this subject: Parallel Dimensions – Branching with Git [Article] Using Gerrit with GitHub [Article] Issues and Wikis in GitLab [Article]


Building a Customizable Content Management System

Packt
07 Apr 2014
15 min read
(For more resources related to this topic, see here.) Mission briefing This article deals with the creation of a Content Management System. This system will consist of two parts: A backend that helps to manage content, page parts, and page structure A frontend that displays the settings and content we just entered We will start this by creating an admin area and then create page parts with types. Page parts, which are like widgets, are fragments of content that can be moved around the page. Page parts also have types; for example, we can display videos in our left column or display news. So, the same content can be represented in multiple ways. For example, news can be a separate page as well as a page part if it needs to be displayed on the front page. These parts need to be enabled for the frontend. If enabled, then the frontend makes a call on the page part ID and renders it in the part where it is supposed to be displayed. We will do a frontend markup in Haml and Sass. The following screenshot shows what we aim to do in this article: Why is it awesome? Everyone loves to get a CMS built from scratch that is meant to suit their needs really closely. We will try to build a system that is extremely simple as well as covers several different types of content. This system is also meant to be extensible, and we will lay the foundation stone for a highly configurable CMS. We will also spice up our proceedings in this article by using MongoDB instead of a relational database such as MySQL. At the end of this article, we will be able to build a skeleton for a very dynamic CMS. Your Hotshot objectives While building this application, we will have to go through the following tasks: Creating a separate admin area Creating a CMS with the ability of handling different types of content pages Managing page parts Creating a Haml- and Sass-based template Generating the content and pages Implementing asset caching Mission checklist We need to install the following software on the system before we start with our mission: Ruby 1.9.3 / Ruby 2.0.0 Rails 4.0.0 MongoDB Bootstrap 3.0 Haml Sass Devise Git A tool for mockups jQuery ImageMagick and RMagick Memcached Creating a separate admin area We have used devise for all our projects and we will be using the same strategy in this article. The only difference is that we will use it to log in to the admin account and manage the site's data. This needs to be done when we navigate to the URL/admin. We will do this by creating a namespace and routing our controller through the namespace. We will use our default application layout and assets for the admin area, whereas we will create a different set of layout and assets altogether for our frontend. Also, before starting with this first step, create an admin role using CanCan and rolify and associate it with the user model. We are going to use memcached for caching, hence we need to add it to our development stack. 
We will do this by installing it through our favorite package manager, for example, apt on Ubuntu:

sudo apt-get install memcached

Prepare for lift off

In order to start working on this article, we will first have to add the mongoid gem to Gemfile:

Gemfile
gem 'mongoid', '~> 4', github: 'mongoid/mongoid'

Bundle the application and run the mongoid generator:

rails g mongoid:config

You can edit config/mongoid.yml to suit your local system's settings as shown in the following code:

config/mongoid.yml
development:
  sessions:
    default:
      database: helioscms_development
      hosts:
        - localhost:27017
      options:
test:
  sessions:
    default:
      database: helioscms_test
      hosts:
        - localhost:27017
      options:
        read: primary
        max_retries: 1
        retry_interval: 0

We did this because ActiveRecord is the default Object Relational Mapper (ORM). We will override it with the mongoid Object Document Mapper (ODM) in our application. Mongoid's configuration file is slightly different from the database.yml file for ActiveRecord. The sessions rule in mongoid.yml opens a session from the Rails application to MongoDB. It will keep the session open as long as the server is up, and it will also reopen the connection automatically if the server goes down and restarts after some time. Also, as a part of the installation, we need to add Haml to Gemfile and bundle it:

Gemfile
gem 'haml'
gem "haml-rails"

Engage thrusters

Let's get cracking to create our admin area now:

We will first generate our dashboard controller:

rails g controller dashboard index
      create  app/controllers/dashboard_controller.rb
       route  get "dashboard/index"
      invoke  erb
      create  app/views/dashboard
      create  app/views/dashboard/index.html.erb
      invoke  test_unit
      create  test/controllers/dashboard_controller_test.rb
      invoke  helper
      create  app/helpers/dashboard_helper.rb
      invoke  test_unit
      create  test/helpers/dashboard_helper_test.rb
      invoke  assets
      invoke  coffee
      create  app/assets/javascripts/dashboard.js.coffee
      invoke  scss
      create  app/assets/stylesheets/dashboard.css.scss

We will then create a namespace called admin in our routes.rb file:

config/routes.rb
namespace :admin do
  get '', to: 'dashboard#index', as: '/'
end

We have also modified our dashboard route so that it is set as the root page in the admin namespace.

Our dashboard controller will not work anymore. In order for it to work, we will have to create a folder called admin inside our controllers and change DashboardController to Admin::DashboardController. This is to match the admin namespace we created in the routes.rb file:

app/controllers/admin/dashboard_controller.rb
class Admin::DashboardController < ApplicationController
  before_filter :authenticate_user!

  def index
  end
end

In order to make the login specific to the admin dashboard, we will copy our devise/sessions_controller.rb file to the controllers/admin path and edit it. We will add the admin namespace and allow only the admin role to log in:

app/controllers/admin/sessions_controller.rb
class Admin::SessionsController < ::Devise::SessionsController
  def create
    user = User.find_by_email(params[:email])
    if user && user.authenticate(params[:password]) && user.has_role? "admin"
      session[:user_id] = user.id
      redirect_to admin_url, notice: "Logged in!"
    else
      flash.now.alert = "Email or password is invalid / Only Admin is allowed"
    end
  end
end

Objective complete – mini debriefing

In the preceding task, after setting up devise and CanCan in our application, we went ahead and created a namespace for the admin.
In Rails, a namespace is a concept used to separate a set of controllers into a completely different functionality. In our case, we used this to separate out the login for the admin dashboard and a dashboard page as soon as the login happens. We did this by first creating the admin folder in our controllers. We then copied our Devise sessions controller into the admin folder. For Rails to identify the namespace, we need to add it before the controller name as follows:

class Admin::SessionsController < ::Devise::SessionsController

In our route, we defined a namespace to read the controllers under the admin folder:

namespace :admin do
end

We then created a controller to handle dashboards and placed it within the admin namespace:

namespace :admin do
  get '', to: 'dashboard#index', as: '/'
end

We made the dashboard the root page after login. The route generated from the preceding definition is localhost:3000/admin. We ensured that if someone tries to log in by clicking on the admin dashboard URL, our application checks whether the user has a role of admin or not. In order to do so, we used has_role from rolify along with user.authenticate from devise:

if user && user.authenticate(params[:password]) && user.has_role? "admin"

This will make devise function as part of the admin dashboard. If a user tries to log in, they will be presented with the devise login page as shown in the following screenshot. After logging in successfully, the user is redirected to the link for the admin dashboard.

Creating a CMS with the ability to create different types of pages

A website has a variety of types of pages, and each page serves a different purpose. Some are limited to contact details, while some contain detailed information about the team. Each of these pages has a title and a body. Also, there will be subpages within each navigation; for example, the About page can have Team, Company, and Careers as subpages. Hence, we need to create a parent-child self-referential association, so that pages can be associated with themselves and be treated as parent and child.

Engage thrusters

In the following steps, we will create page management for our application. This will be the backbone of our application.

Create a model, view, and controller for page. We will have a very simple page structure for now. We will create a page with title, body, and page type:

app/models/page.rb
class Page
  include Mongoid::Document
  field :title, type: String
  field :body, type: String
  field :page_type, type: String

  validates :title, :presence => true
  validates :body, :presence => true

  PAGE_TYPE = %w(Home News Video Contact Team Careers)
end

We need a home page for our main site. So, in order to set a home page, we will have to assign it the type home. However, we need two things from the home page: it should be the root of our main site and the layout should be different from the admin. In order to do this, we will start by creating an action called home_page in pages_controller:

app/models/page.rb
scope :home, -> { where(page_type: "Home") }

app/controllers/pages_controller.rb
def home_page
  @page = Page.home.first rescue nil
  render :layout => 'page_layout'
end

We will find a page with the home type and render a custom layout called page_layout, which is different from our application layout. We will do the same for the show action as well, as we are only going to use show to display the pages in the frontend:

app/controllers/pages_controller.rb
def show
  render :layout => 'page_layout'
end

Now, in order to effectively manage the content, we need an editor.
This will make things easier as the user will be able to style the content easily using it. We will use ckeditor in order to style the content in our application:

Gemfile
gem "ckeditor", :github => "galetahub/ckeditor"
gem 'carrierwave', :github => "jnicklas/carrierwave"
gem 'carrierwave-mongoid', :require => 'carrierwave/mongoid'
gem 'mongoid-grid_fs', github: 'ahoward/mongoid-grid_fs'

Add the ckeditor gem to Gemfile, run bundle install, and then run the CKEditor install generator:

helioscms$ rails generate ckeditor:install --orm=mongoid --backend=carrierwave
      create  config/initializers/ckeditor.rb
       route  mount Ckeditor::Engine => '/ckeditor'
      create  app/models/ckeditor/asset.rb
      create  app/models/ckeditor/picture.rb
      create  app/models/ckeditor/attachment_file.rb
      create  app/uploaders/ckeditor_attachment_file_uploader.rb

This will generate a carrierwave uploader for CKEditor, which is compatible with mongoid. In order to finish the configuration, we need to add a line to application.js to load the ckeditor JavaScript:

app/assets/application.js
//= require ckeditor/init

We will display the editor in the body as that's what we need to style:

views/pages/_form.html.haml
.field
  = f.label :body
  %br/
  = f.cktext_area :body, :rows => 20, :ckeditor => {:uiColor => "#AADC6E", :toolbar => "mini"}

We also need to mount the ckeditor in our routes.rb file:

config/routes.rb
mount Ckeditor::Engine => '/ckeditor'

The editor toolbar and text area will be generated as seen in the following screenshot.

In order to display the content on the index page in a formatted manner, we will add the html_safe escape method to our body:

views/pages/index.html.haml
%td= page.body.html_safe

The following screenshot shows the index page after the preceding step.

At this point, we can manage the content using pages. However, in order to add nesting, we will have to create a parent-child structure for our pages. In order to do so, we will have to first generate a model to define this relationship:

helioscms$ rails g model page_relationship

Inside the page_relationship model, we will define a two-way association with the page model:

app/models/page_relationship.rb
class PageRelationship
  include Mongoid::Document
  field :parent_id, type: Integer
  field :child_id, type: Integer
  belongs_to :parent, :class_name => "Page"
  belongs_to :child, :class_name => "Page"
end

In our page model, we will add the inverse association. This is to check for both parent and child and span the tree both ways:

has_many :child_page, :class_name => 'Page', :inverse_of => :parent_page
belongs_to :parent_page, :class_name => 'Page', :inverse_of => :child_page

We can now add a page to the form as a parent.
Also, this method will create a tree structure and a parent-child relationship between the two pages: app/views/pages/_form.html.haml.field= f.label "Parent"%br/= f.collection_select(:parent_page_id, Page.all, :id,:title, :class => "form-control").field= f.label :body%br/= f.cktext_area :body, :rows => 20, :ckeditor =>{:uiColor => "#AADC6E", :toolbar => "mini"}%br/.actions= f.submit :class=>"btn btn-default"=link_to 'Cancel', pages_path, :class=>"btn btn-danger" We can see the the drop-down list with names of existing pages, as shown in the following screenshot: Finally, we will display the parent page: views/pages/_form.html.haml.field= f.label "Parent"%br/= f.collection_select(:parent_page_id, Page.all, :id,:title, :class => "form-control") In order to display the parent, we will call it using the association we created: app/views/pages/index.html.haml- @pages.each do |page|%tr%td= page.title%td= page.body.html_safe%td= page.parent_page.title if page.parent_page Objective complete – mini debriefing Mongoid is an ODM that provides an ActiveRecord type interface to access and use MongoDB. MongoDB is a document-oriented database, which follows a no-schema and dynamic-querying approach. In order to include Mongoid, we need to make sure we have the following module included in our model: include Mongoid::Document Mongoid does not rely on migrations such as ActiveRecord because we do not need to create tables but documents. It also comes with a very different set of datatypes. It does not have a datatype called text; it relies on the string datatype for all such interactions. Some of the different datatypes are as follows: Regular expressions: This can be used as a query string, and matching strings are returned as a result Numbers: This includes integer, big integer, and float Arrays: MongoDB allows the storage of arrays and hashes in a document field Embedded documents: This has the same datatype as the parent document We also used Haml as our markup language for our views. The main goal of Haml is to provide a clean and readable markup. Not only that, Haml significantly reduces the effort of templating due to its approach. In this task, we created a page model and a controller. We added a field called page_type to our page. In order to set a home page, we created a scope to find the documents with the page type home: scope :home, ->where(page_type: "Home")} We then called this scope in our controller, and we also set a specific layout to our show page and home page. This is to separate the layout of our admin and pages. The website structure can contain multiple levels of nesting, which means we could have a page structure like the following: About Us | Team | Careers | Work Culture | Job Openings In the preceding structure, we were dealing with a page model to generate different pages. However, our CMS should know that About Us has a child page called Careers and in turn has another child page called Work Culture. In order to create a parent-child structure, we need to create a self-referential association. In order to achieve this, we created a new model that holds a reference on the same model page. We first created an association in the page model with itself. 
The line inverse_of allows us to trace back in case we need to span our tree according to the parent or child: has_many :child_page, :class_name => 'Page', :inverse_of => :parent_pagebelongs_to :parent_page, :class_name => 'Page', :inverse_of =>:child_page We created a page relationship to handle this relationship in order to map the parent ID and child ID. Again, we mapped it to the class page: belongs_to :parent, :class_name => "Page"belongs_to :child, :class_name => "Page" This allowed us to directly find parent and child pages using associations. In order to manage the content of the page, we added CKEditor, which provides a feature rich toolbar to format the content of the page. We used the CKEditor gem and generated the configuration, including carrierwave. For carrierwave to work with mongoid, we need to add dependencies to Gemfile: gem 'carrierwave', :github => "jnicklas/carrierwave" gem 'carrierwave-mongoid', :require => 'carrierwave/mongoid' gem 'mongoid-grid_fs', github: 'ahoward/mongoid-grid_fs' MongoDB comes with its own filesystem called GridFs. When we extend carrierwave, we have an option of using a filesystem and GridFs, but the gem is required nonetheless. carrierwave and CKEditor are used to insert and manage pictures in the content wherever required. We then added a route to mount the CKEditor as an engine in our routes file. Finally, we called it in a form: = f.cktext_area :body, :rows => 20, :ckeditor => {:uiColor =>"#AADC6E", :toolbar => "mini"} CKEditor generates and saves the content as HTML. Rails sanitizes HTML by default and hence our HTML is safe to be saved. The admin page to manage the content of pages looks like the following screenshot:
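Beyond the admin screens, you can sanity-check the parent-child wiring from this task in the Rails console. The following is only a rough sketch against the Page model and associations defined above; the titles and bodies are made-up sample data:

# rails console
about   = Page.create(title: "About Us", body: "Who we are", page_type: "Team")
careers = Page.create(title: "Careers", body: "Open positions", page_type: "Careers", parent_page: about)

about.child_page.map(&:title)    # => ["Careers"]
careers.parent_page.title        # => "About Us"

# The scope added earlier returns the page marked as the home page
Page.home.first

If the associations return the expected pages, the parent-child structure is wired correctly.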


Making an entity multiplayer-ready

Packt
07 Apr 2014
8 min read
(For more resources related to this topic, see here.) Understanding the dataflow of Lua entities in a multiplayer environment When using your own Lua entities in a multiplayer environment, you need to make sure everything your entity does on one of the clients is also triggered on all other clients. Let's take a light switch as an example. If one of the players turned on the light switch, the switch should also be flipped on all other clients. Each client connected to the game has an instance of that light switch in their level. The CryENGINE network implementation already handles all the work involved in linking these individual instances together using network entity IDs. Each light switch can contact its own instances on all connected clients and call its functions over the network. All you need to do is use the functionality that is already there. One way of implementing the light switch functionality is to turn on the switch in the entity as soon as the OnUsed() event is triggered and then send a message to all other clients in the network to also turn on their lights. This might work for something as simple as a switch, but can soon get messy when the entity becomes more complex. Ping times and message orders can lead to inconsistencies if two players try to flip the light switch at the same time. The representation of the process would look like the following diagram: Not so good – the light switch entity could trigger its own switch on all network instances of itself Doing it this way, with the clients notifying each other, can cause many problems. In a more stable solution, these kinds of events are usually run through the server. The server entity—let's call it the master entity—determines the state of the entities across the network at all times and distributes the entities throughout the network. This could be visualized as shown in the following diagram: Better – the light switch entity calls the server that will distribute the event to all clients In the light switch scenario mentioned earlier, the light switch entity would send an event to the server light switch entity first. Then, the server entity would call each light switch entity, including the original sender, to turn on their lights. It is important to understand that the entity that received the event originally does nothing else but inform the server about the event. The actual light is not turned on until the server calls back to all entities with the request to do so. The aforementioned dataflow works in single player as well, as CryENGINE will just pretend that the local machine is both the client and the server. This way, you will not have to make adjustments or add extra code to your entity to check whether it is single player or multiplayer. In a multiplayer environment with a server and multiple clients, it is important to set the script up so that it acts properly and the correct functions are called on either the client or the server. The first step to achieve this is to add a client and server table to the entity script using the following code: Client = {}, Server = {}, With this addition, our script table looks like the following code snippet: Testy = {Properties={ fileModel = "",Physics = { bRigidBody=1, = 1, Density = -1, Mass = -1, }, Client = {}, Server = {}, Editor={ Icon="User.bmp", }, } Now, we can go ahead and modify the functions so that they work properly in multiplayer. We do this by adding the Client and Server subtables to our script. 
This way, the network system will be able to identify the Client/Server functions on the entity. The Client/Server functions The Client/Server functions are defined within your entity script by using the respective subtables that we previously defined in the entity table. Let's update our script and add a simple function that outputs a debug text into the console on each client. In order for everything to work properly, we first need to update our OnInit() function and make sure it gets called on the server properly. Simply add a server subtable to the function so that it looks like the following code snippet: functionTesty.Server:OnInit() self:OnReset(); end; This way, our OnReset() function will still be called properly. Now, we can add a new function that outputs a debug text for us. Let's keep it simple and just make it output a console log using the CryENGINE Log function, as shown in the following code snippet: functionTesty.Client:PrintLogOutput(text) Log(text); end; This function will simply print some text into the CryENGINE console. Of course, you can add more sophisticated code at this point to be executed on the client. Please also note the Client subtable in the function definition that tells the engine that this is a client function. In the next step, we have to add a way to trigger this function so that we can test the behavior properly. There are many ways of doing this, but to keep things simple, we will simply use the OnHit() callback function that will be automatically triggered when the entity is hit by something; for example, a bullet. This way, we can test our script easily by just shooting at our entity. The OnHit() callback function is quite simple. All it needs to do in our case is to call our PrintLogOutput function, or rather request the server to call it. For this purpose, we add another function to be called on the server that calls our PrintLogOutput() function. Again, please note that we are using the Client subtable of the entity to catch the hit that happens on the client. Our two new functions should look as shown in the following code snippet: functionTesty.Client:OnHit(user) self.server:SvRequestLogOutput("My Text!"); end functionTesty.Server:SvRequestLogOutput(text) self.allClients:PrintLogOutput(text); end We now have two new functions: one is a client function calling a server function and the other one is a server function calling the actual function on all the clients. The Remote Method Invocation definitions As a last step, before we are finished, we need to expose our entity and its functions to the network. We can do this by adding a table within the root of our entity script that defines the necessary Remote Method Invocation (RMI). The Net.Expose table will expose our entity and its functions to the network so that they can be called remotely, as shown in the following code snippet: Net.Expose { Class = Testy, ClientMethods = { PrintLogOutput = { RELIABLE_UNORDERED, POST_ATTACH, STRING }, }, ServerMethods = { SvRequestLogOutput = { RELIABLE_UNORDERED, POST_ATTACH, STRING}, }, ServerProperties = { }, }; Each RMI is defined by providing a function name, a set of RMI flags, and additional parameters. The first RMI flag is an order flag and defines the order of the network packets. You can choose between the following options: UNRELIABLE_ORDERED RELIABLE_ORDERED RELIABLE_UNORDERED These flags tell the engine whether the order of the packets is important or not. 
The attachment flag will define at what time the RMI is attached during the serialization process of the network. This parameter can be either of the following flags: PREATTACH: This flag attaches the RMI before game data serialization. POSTATTACH: This flag attaches the RMI after game data serialization. NOATTACH: This flag is used when it is not important if the RMI is attached before or after the game data serialization. FAST: This flag performs an immediate transfer of the RMI without waiting for a frame update. This flag is very CPU intensive and should be avoided if possible. The Net.Expose table we just added defines which functions will be exposed on the client and the server and will give us access to the following three subtables: allClients otherClients server With these functions, we can now call functions either on the server or the clients. You can use the allClients subtable to call a function on all clients or the otherClients subtable to call it on all clients except the own client. At this point, the entity table of our script should look as follows: Testy = { Properties={ fileModel = "", Physics = { bRigidBody=1, bRigidBodyActive = 1, Density = -1, Mass = -1, }, Client = {}, Server = {}, Editor={ Icon="User.bmp", ShowBounds = 1, }, } Net.Expose { Class = Testy, ClientMethods = { PrintLogOutput = { RELIABLE_UNORDERED, POST_ATTACH, STRING }, }, ServerMethods = { SvRequestLogOutput = { RELIABLE_UNORDERED, POST_ATTACH, STRING}, }, ServerProperties = { }, }; This defines our entity and its network exposure. With our latest updates, the rest of our script with all its functions should look as follows: functionTesty.Server:OnInit() self:OnReset(); end; functionTesty:OnReset() local props=self.Properties; if(not EmptyString(props.fileModel))then self:LoadObject(0,props.fileModel); end; EntityCommon.PhysicalizeRigid(self,0,props.Physics,0); self:DrawSlot(0, 1); end; functionTesty:OnPropertyChange() self:OnReset(); end; functionTesty.Client:PrintLogOutput(text) Log(text); end; functionTesty.Client:OnHit(user) self.server:SvRequestLogOutput("My Text!"); end functionTesty.Server:SvRequestLogOutput(text) self.allClients:PrintLogOutput(text); end With these functions added to our entity, everything should be ready to go and you can test the behavior in game mode. When the entity is being shot at, the OnHit() function will request the log output to be printed from the server. The server calls the actual function on all clients. Summary In this article we learned about making our entity ready for a multiplayer environment by understanding the dataflow of Lua entities, understanding the Client/Server functions, and by exposing our entities to the network using the Remote Method Invocation definitions. Resources for Article: Further resources on this subject: CryENGINE 3: Terrain Sculpting [Article] CryENGINE 3: Breaking Ground with Sandbox [Article] CryENGINE 3: Fun Physics [Article]

Upgrading from Samba Server Version 3

Packt
02 Apr 2014
3 min read
(For more resources related to this topic, see here.) Distinguishing between Samba Versions 3 and 4 From the Samba Version 4 release notes made by the Samba project, we got information on the addition of the DNS server and NTP protocols that are integrated in the new Samba 4 code, LDAP server and Kerberos Key Distribution Center (KDC)—both accounted within the Active Directory Services, support for SMB Version 2.1 with preliminary support for Version 3.0, and the Python scripting interface—all of which we will highlight as great and bold, new capabilities. These new features can make the Samba Server Version 4 look appealing from an upgrade perspective for the Samba Server Version 3 users. It can also stimulate new installations, as it can be a strong choice to provide full network services as open source and as a free alternative in comparison to Microsoft Windows Servers. The classic model (NT4 Domains) is still supported and present in the Samba Server Version 4, but the new version's real gain for users and system administrators is the ability to use all the new features introduced by Microsoft with the development of the Active Directory Domain Controller services. All these are associated with the concepts of delegations, group policies, new security model, and so on. The fact is that the Samba Server Version 3 is a rock-solid software. The file and print's server code is very stable and has been working for many, many years. Besides this, the Samba Server Version 4 has implemented a new approach and daemons for these services; the new project's software version still has support for the old and bullet proof file/print server daemons and those are the ones that are recommended for production purposes at the time of this writing. Many users are really happy with the file and print services from Samba Server Version 3. As a great portion of the use cases and base installations of the Samba Server is for the purpose of these services, many users remain with the Version 3 in production, where the scary problem is to support the new Microsoft Windows Operating System versions. So, for the users who are looking at and exploring the upgrade process, the real difference and the main feature that encourages them to take the upgrade path most of the time, which are the Active Directory services present on Samba 4. The new code has integrated the DNS and LDAP server and KDC. So, many users from the previous versions that could be intimidated by the need to deal with external and complex software combinations (for example, DNS/Bind or OpenLDAP) for small and medium installations can now have a really robust and complete solution for the Samba project's new release.


Drawing in Anime Studio

Packt
31 Mar 2014
12 min read
(For more resources related to this topic, see here.) Mouse versus tablet drawing If you're accustomed to drawing traditionally with a pen or pencil, you will discover quite quickly that drawing with a mouse requires a different skillset. The way a mouse moves, the difference in control, and the lack of intimacy can really take some time getting used to. While initially overwhelming, it is possible to map your mind towards mouse drawing. A graphic tablet is like a digital drawing pad that allows you to sketch on screen using a utensil that resembles a pen or pencil. What's nice is that Anime Studio was built to work with certain graphic tablets, thus making Plug and Play easy. We will be creating cartoon assets with a mouse. This is the most universal way as most users have this accessory for their computer. In addition, we cover both freehand and point drawing styles. We will be majorly using point drawing. Learning about Wacom tablets Wacom is a very well-known brand of graphic tablets which work well with Anime Studio. This is because Smith Micro Software teamed up with Wacom while building Anime Studio to deliver seamless compatibility. What's great about Wacom tablets is that they correspond to the amount of pressure you apply to your lines. For instance, if you apply a lot of pressure at the start of a line and then end the line with light pressure, you will see a difference in width just as you would with a real pen or pencil. This option can be turned off in Anime Studio, but most artists welcome it. If you're interested in tablet drawing, Wacom has many different tablets varying in size and features. You can visit www.wacom.com for more details. The following is the image of a Wacom tablet: Understanding the basics of vector and raster graphics Before we begin drawing in Anime Studio, it's important to understand the differences between vector and raster graphics. Anime Studio allows you to output both types of graphics, and each has its strengths and weaknesses. Vector drawings are created whenever you use a drawing tool in Anime Studio. This is also the main format for Adobe Flash, Toon Boom, and Adobe Illustrator. Vector format is a popular choice and has been dominating the Internet cartoon scene for several years. The following image is an example of a vector image. Notice how all the lines retain a sharp quality. Vector graphics tend to have smaller file sizes compared to equivalent raster graphics. This not only makes streaming embedded Shockwave Flash (SWF) easier, but also keeps your project files lower in size, thus freeing up more space on your hard drive and cutting down on load times. Raster or bitmap images are made up of pixels. Common file types include JPEG, BMP, PNG, and GIF. Basically, images you take with your camera, found on the Internet (at least the majority of them), or created in Adobe Photoshop are raster graphics. Raster graphics can be imported into Anime Studio and used for different functions. While they can contain great detail, raster graphics have many disadvantages when it comes to animation. As they are pixel-based, if you enlarge or zoom into a raster graphic past its original size, you will lose the image's quality. They also tend to bloat project file sizes up; this is due to the pixels needing more information to display the image. Many artists do find raster images worthwhile; additionally, you have the ability to convert raster images into vector graphics if desired. 
This method is called tracing, and while it can be useful, it's definitely not 100 percent effective, especially when trying to make the image work with animation. The following image is an example of a raster graphic. Compare it to the previous vector image. Note how the raster graphic appears blurry or pixelated in comparison. Now, you must be wondering which image type is the best. There is really no right or wrong answer to this question. It all comes down to personal preference and what you plan to do with your cartoon. We will explore a few uses of bitmap images, but the primary focus will be on creating vector art through the drawing tools. Exploring the Draw and Fill tools As we start working with the drawing tools in this article, it would be best for you to have a new document loaded up so that we have room to play around. In order to do that, navigate to File | New. New documents always open with a vector layer on the right-hand side Layers Panel, labeled Layer 1. This is perfect for us as all of the drawing tools require a vector layer to be used. Some drawing tools have features that can be adjusted at the top of the Anime Studio window. We will refer to this area as the top bar. The drawing tools are located on the left-hand side of your screen by default. The tools we will be looking at are divided into two panels: Draw and Fill. If you go in order while learning these tools, it may make sense, but we're simply too free-spirited for that. We will be going back and forth between these tools as some of them directly benefit the usage of others. Drawing shapes and lines with the Add Point tool The Add Point tool allows us to create lines and shapes using a series of points. All of Anime Studio's tools work with a point system, but this tool arguably gives you the most control in this regard. Points can then be moved or deleted depending on your needs. The following screenshot shows the location of the Add Point tool on the toolbar. As you can see, it looks like a curved line with a point at the end. You can also press the A key on your keyboard to select the tool. To get started, perform the following steps: Go to the top of your toolbar and click on the Add Point tool. Next, you will find a few options just below your File menu at the top of the Anime Studio program window. This is your top bar area. Please make sure Auto-Weld and Auto-Fill are both selected (this will be indicated by a check mark next to the corresponding option). Autowelding ensures that the two points we are joining will snap or weld together. Autofilling ensures that once two points are joined together to complete an enclosed object, the drawing will fill in with the colors from your Style palette. Try deselecting these options and redoing this exercise later on, to see what happens! On the right-hand side of your screen is the Style palette. Right below the title, you will see two colors, each labeled with Fill and Stroke. Click on the Fill color swatch and select a color of your choice from the options given. With the Color Picker window, you have the ability to click on a color, adjust the color range, modify transparency, as well as adjust your colors numerically for precise control. Once you have selected your color, click on the OK button. Now, select the Stroke color swatch and repeat the preceding steps. Try to pick a different color than that of the fill. The following screenshot shows the Style palette and Color Picker: Move your cursor somewhere on the blank canvas. 
Click and hold down the left button of your mouse, drag in any direction, and release. You should now see two points connected with a link. Now, we are simply seeing an outline, or reference for this object. No physical line has been created yet. Place your cursor on one of the two points. When correctly placed, your Add Point drawing tool will be highlighted in green. Now, click and hold down the left button of the mouse and drag anywhere to add to your line. If you keep the left button of the mouse pressed and move the point around, you should notice that the placement of this point affects the line curvature from the other two points. If you don't like this effect, you can always select the Sharp Corners option on the top of your window to create perfectly straight lines from point to point. Release the left button of the mouse once you've found a spot for your point. By repeating the preceding steps, you can continue to add interconnecting points to create an object; complex or simple, the choice is yours. If you desire, you can add points in between other established points by simply clicking on the line that interconnects them. To complete your object, you must overlap one point over another. Click the left button of your mouse, hold it, and drag the mouse to your first point. Once the area is highlighted in green, release the mouse button and notice how the object fills in with the colors you have selected from the Style palette. Have a look at the image in the following screenshot for an example: The Add Point tool offers a lot of control and is popular with mouse users. It may take some time to get used to, but if you prefer precision, practice will definitely pay off. This tool will be used quite a bit when we start drawing our assets. However, there are other tools that can get the job done, which we will be exploring momentarily. Freestyle drawing with the Freehand tool The Freehand tool allows us to draw in Anime Studio as if we were using a pen or pencil. This tool is a favorite amongst tablet users as it allows for absolute freedom of movement. It offers benefits for mouse users as well, especially if they plan to create a sense of stroke width variation. Just keep in mind, even though you can draw freely with this tool, you will still be creating points to make up your lines and objects, just like the Add Point tool. Just note that since Version 10, points will be hidden when using freehand drawing tools, to make the workspace less cluttered. In order to view and edit the points, you will need to select the Transform Points tool. The Freehand tool is the first tool in the second row (it looks like a pencil). You can also use the F key on your keyboard to select this tool. For your reference, you can see the location of this tool in the following screenshot: For this exercise, you can keep the document you created for the Add Point tool open. If you need more room to draw, feel free to create a new document. If you would like to save the current document to work on later, go to File and click on Save before creating a new document. Now, let's start drawing! The following steps will guide you on freestyle drawing with the Freehand tool: Click on the Freehand tool. At the top, where you have your tool options, be sure that Auto-Weld, Auto-Fill, and Auto-Stroke are checked. Before trying this tool out, let's check out some of the other options we can adjust with the Freehand tool. 
At the top, to the left-hand side of the Auto-Fill and Auto-Stroke settings, is a button labeled Freehand Options. Click on the button and a new panel will appear, as shown in the following screenshot: The Variable line width options allow you to change how the Freehand tool acts according to the pressure from your graphic tablet utensil. You can choose None, which will create a line with a consistent width; Use pen pressure, which detects how hard you are pressing on your tablet when drawing and adjusts the width accordingly (hard for thick, soft for light); or Random, which will randomize the line width as you place the points down. These options will work with a mouse, with the exception of the Use pen pressure setting. In the same panel, you can also adjust the percentage of variation of line width. The higher the percentage, the more dramatic a shift you will have for your line widths. Finally, you can dictate if you want your freehand lines to taper at the start and end. This can be useful, especially if you're using a mouse and want to simulate the freehand pressure-sensitive look. Once you have picked the appropriate options, let's start drawing! Place your cursor on the canvas, preferably outside of the other object you drew with the Add Point tool, hold down your left mouse button, and drag to create a line. You will notice that whichever settings you picked in the Freehand Options panel will be reflected in your line. Since we have selected Auto-Weld and Auto-Fill, we can automatically create closed objects. Try drawing an oval with the Freehand tool. Your beginning and end points should snap together, creating an enclosed and filled-in object. You can view an example of a line and shape with the Freehand tool in the following screenshot: If you are drawing with a tablet or are familiar with traditional drawing methods, the Freehand tool may be a better choice over the Add Point tool. As we start to draw characters and props, the Add Point tool will be referred to a lot. However, don't be afraid to use the Freehand tool in its place if that's what you're more comfortable with. You can always combine these tools too. The more options you have, the better!


Installing Activiti

Packt
24 Mar 2014
4 min read
(For more resources related to this topic, see here.) Getting started with Activiti BPM Let's take a quick tour of the Activiti components so you can get an idea of what the core modules are in the Activiti BPM that make it a lightweight and solid framework. You can refer to the following figure for an overview of the Activiti modules: In this figure, you can see that Activiti is divided into various modules. Activiti Modeler, Activiti Designer, and Activiti Kickstart are part of Modelling, and they are used to design your business process. Activiti Engine can be integrated with your application, and is placed at its center as a part of Runtime. To the right of Runtime, there are Activiti Explorer and Activiti Rest, which are part of Management and used in handling business processes. Let's see each component briefly to get an idea about it. Activiti Engine The Activiti Engine is a framework that is responsible for deploying the process definitions, starting the business process instance, and executing the tasks. The following are the important features of the Activiti Engine: Performs various tasks of a process engine Runs a BPMN 2 standard process It can be configured with JTA and Spring Easy to integrate with other technology Rock-solid engine Execution is very fast Easy to query history information Provides support for asynchronous execution It can be built with cloud for scalability Ability to test the process execution Provides support for event listeners, which can be used to add custom logic to the business process Using Activiti Engine APIs or the REST API, you can configure a process engine Workflow execution using services You can interact with Activiti using various available services. With the help of process engine services, you can interact with workflows using the available APIs. Objects of process engines and services are threadsafe, so you can place a reference to one of them to represent a whole server. In the preceding figure, you can see that the Process Engine is at the central point and can be instantiated using ProcessEngineConfiguration. The Process Engine provides the following services: Repository Service: This service is responsible for storing and retrieving our business process from the repository Runtime Service: Using this service, we can start our business process and fetch information about a process that is in execution Task Service: This service specifies the operations needed to manage human (standalone) tasks, such as the claiming, completing, and assigning of tasks Identity Service: This service is useful for managing users, groups, and the relationships between them Management Service: This service exposes engine, admin, and maintenance operations, which have no relation to the runtime execution of business processes History Service: This service provides services for getting information about ongoing and past process instances Form Service: This service provides access to form data and renders forms for starting new process instances and completing tasks Activiti Modeler The Activiti Modeler is an open source modeling tool provided by the KIS BPM process solution. Using the Activiti Modeler, you can manage your Activity Server and the deployments of business processes. It's a web-based tool for managing your Activiti projects. It also provides a web form editor, which helps you to design forms, make changes, and design business processes easily. 
Activiti Designer The Activiti Designer is used to add technical details to an imported business process model or the process created using the Activiti Modeler, which is only used to design business process workflows. The Activiti Designer can be used to graphically model, test, and deploy BPMN 2.0 processes. It also provides a feature to design processes, just as the Activiti Modeler does. It is mainly used by developers to add technical detail to business processes. The Activiti Designer is an IDE that can only be integrated with the Eclipse plugin. Activiti Explorer The Activiti Explorer is a web-based application that can be easily accessed by a non-technical person who can then run that business process. Apart from running the business process, it also provides an interface for process-instance management, task management, and user management, and also allows you to deploy business processes and to generate reports based on historical data. Activiti REST The Activiti REST provides a REST API to access the Activiti Engine. To access the Activiti REST API, we need to deploy activiti-rest.war to a servlet container, such as Apache Tomcat. You can configure Activiti in your own web application using the Activiti REST API. It uses the JSON format and is built upon Restlet. Activiti also provides a Java API. If you don't want to use the REST API, you can use the Java API.
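To give a flavor of that Java API, the following is a minimal sketch that builds the default process engine, deploys a BPMN 2.0 definition, and starts a process instance. The resource name my-process.bpmn20.xml and the key myProcess are placeholders for your own process definition:

import org.activiti.engine.ProcessEngine;
import org.activiti.engine.ProcessEngines;
import org.activiti.engine.RepositoryService;
import org.activiti.engine.RuntimeService;

public class StartProcessDemo {
    public static void main(String[] args) {
        // Builds (or returns) the engine configured by activiti.cfg.xml on the classpath
        ProcessEngine processEngine = ProcessEngines.getDefaultProcessEngine();

        // Repository Service: deploy a BPMN 2.0 process definition
        RepositoryService repositoryService = processEngine.getRepositoryService();
        repositoryService.createDeployment()
                .addClasspathResource("my-process.bpmn20.xml")
                .deploy();

        // Runtime Service: start a new instance of the deployed process by its key
        RuntimeService runtimeService = processEngine.getRuntimeService();
        runtimeService.startProcessInstanceByKey("myProcess");
    }
}

The other services described above, such as TaskService and HistoryService, are obtained from the engine in the same way.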

Getting Ready for Your First BizTalk Services Solution

Packt
24 Mar 2014
5 min read
(For more resources related to this topic, see here.) Deployment considerations You will need to consider the BizTalk Services edition required for your production use as well as the environment for test and/or staging purposes. This depends on decision points such as: Expected message load on the target system Capabilities that are required now versus 6 months down the line IT requirements around compliance, security, and DR The list of capabilities across different editions is outlined in the Windows Azure documentation page at http://www.windowsazure.com/en-us/documentation/articles/biztalk-editions-feature-chart. Note on BizTalk Services editions and signup BizTalk Services is currently available in four editions: Developer, Basic, Standard, and Premium, each with varying capabilities and prices. You can sign up for BizTalk Services from the Azure portal. The Developer SKU contains all features needed to try and evaluate without worrying about production readiness. We use the Developer edition for all examples. Provisioning BizTalk Services BizTalk Services deployment can be created using the Windows Azure Management Portal or using PowerShell. We will use the former in this example. Certificates and ACS Certificates are required for communication using SSL, and Access Control Service is used to secure the endpoints of the BizTalk Services deployment. First, you need to know whether you need a custom domain for the BizTalk Services deployment. In the case of test or developer deployments, the answer is mostly no. A BizTalk Services deployment will autogenerate a self-signed certificate with an expiry of close to 5 years. The ACS required for deployment will also be autocreated. Certificate and Access Control Service details are required for sending messages to bridges and agreements and can be retrieved from the Dashboard page post deployment. Storage requirements You need to create an Azure SQL database for tracking data. It is recommended to use the Business edition with the appropriate size; for test purposes, you can start with the 1 GB Web edition. You also need to pass the storage account credentials to archive message data. It is recommended that you create a new Azure SQL database and Storage account for use with BizTalk Services only. The BizTalk Services create wizard Now that we have the security and storage details figured out, let us create a BizTalk Services deployment from the Azure Management Portal: From the Management portal, navigate to New | App Services | BizTalk Service | Custom Create. Enter a unique name for the deployment, keeping the following values—EDITION: Developer, REGION: East US, TRACKING DATABASE: Create a new SQL Database instance. In the next page, retain the default database name, choose the SQL server, and enter the server login name and password. There can be six SQL server instances per Azure subscription. In the next page, choose the storage account for archiving and monitoring information. Deploy the solution. The BizTalk Services create wizard from Windows Azure Management Portal The deployment takes roughly 30 minutes to complete. After completion, you will see the status of the deployment as Active. Navigate to the deployment dashboard page; click on CONNECTION INFORMATION and note down the ACS credentials and download the deployment SSL certificate. The SSL certificate needs to be installed on the client machine where the Visual Studio SDK will be used. 
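If you prefer the command line for this step, one way to put the downloaded certificate into the trusted root store is certutil, run from an elevated command prompt; the file name below is just a placeholder for the certificate you downloaded from the dashboard:

certutil -addstore Root mydeployment.cer

Alternatively, you can double-click the certificate file and use the certificate import wizard, choosing the Trusted Root Certification Authorities store.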
BizTalk portal registration We have one step remaining, and that is to configure the BizTalk Services Management portal to view agreements, bridges, and their tracking data. For this, perform the following steps: Click on Manage from the Dashboard screen. This will launch <mydeployment>.portal.biztalk.windows.net, where the BizTalk Portal is hosted. Some of the fields, such as the user's live ID and deployment name, will be auto-populated. Enter the ACS Issuer name and ACS Issuer secret noted in the previous step and click on Register. BizTalk Services Portal registration Creating your first BizTalk Services solution Let us put things into action and use the deployment created earlier to address a real-world multichannel sales scenario. Scenario description A trader, Northwind, manages an e-commerce website for online customer purchases. They also receive bulk orders from event firms and corporates for their goods. Northwind needs to develop a solution to validate an order and route the request to the right inventory location for delivery of the goods. The incoming request is an XML file with the order details. The request from event firms and corporates is over FTP, while e-commerce website requests are over HTTP. Post processing of the order, if the customer location is inside the US, then the request are forwarded to a relay service at a US address. For all other locations, the order needs to go to the central site and is sent to a Service Bus Queue at IntlAddress with the location as a promoted property. Prerequisites Before we start, we need to set up the client machine to connect to the deployment created earlier by performing the following steps: Install the certificate downloaded from the deployment on your client box to the trusted root store. This authenticates any SSL traffic that is between your client and the integration solution on Azure. Download and install the BizTalk Services SDK (https://go.microsoft.com/fwLink/?LinkID=313230) so the developer project experience lights up in Visual Studio 2012. Download the BizTalk Services EAI tools' Message Sender and Message Receiver samples from the MSDN Code Gallery available at http://code.msdn.microsoft.com/windowsazure. Realizing the solution We will break down the implementation details into defining the incoming format and creating the bridge, including transports to process incoming messages and the creation of the target endpoints, relay, and Service Bus Queue. Creating a BizTalk Services project You can create a new BizTalk Services project in Visual Studio 2012. BizTalk Services project in Visual Studio Summary This article discussed deployment considerations, provisioning BizTalk Services, BizTalk portal registration, and prerequisites for creating your first BizTalk Services solution. Resources for Article: Further resources on this subject: Using Azure BizTalk Features [Article] BizTalk Application: Dynamics AX Message Outflow [Article] Setting up a BizTalk Server Environment [Article]

Organizing Jade Projects

Packt
24 Mar 2014
9 min read
(For more resources related to this topic, see here.)

Now that you know how to use everything that Jade can do, here's when you should use it. Jade is pretty flexible when it comes to organizing projects; the language itself doesn't impose much structure on your project. However, there are some conventions you should follow, as they will typically make your code easier to manage. This article covers those conventions and best practices.

General best practices
Most of the good practices used when writing HTML carry over to Jade. Some of these include the following:
Using a consistent naming convention for IDs, class names, and (in this case) mixin names and variables
Adding alt text to images
Choosing appropriate tags to describe content and page structure
The list goes on, but these are all things you should already be familiar with, so now we're going to discuss some practices that are more Jade-specific.

Keeping logic out of templates
When working with a templating language like Jade that allows advanced logical operations, separation of concerns (SoC) becomes an important practice. In this context, SoC is the separation of business and presentational logic, allowing each part to be developed and updated independently. An easy place to draw the border between business and presentation is where data is passed to the template. Business logic is kept in the main code of your application and passes the data to be presented (as well-formed JSON objects) to your template engine. From there, the presentation layer takes the data and performs whatever logic is needed to turn that data into a readable web page. (A minimal sketch of this hand-off appears at the end of the Inlining section below.)

An additional advantage of this separation is that the JSON data can be passed to a template over stdio (to the server-side Jade compiler), or it can be passed over TCP/IP (to be evaluated client side). Since the template only formats the given data, it doesn't matter where it is rendered, and it can be used on both the server and the client.

For documenting the format of the JSON data, try JSON Schema (http://json-schema.org/). In addition to describing the interface that your presentation layer uses, it can be used in tests to validate the structure of the JSON that your business layer produces.

Inlining
When writing HTML, it is commonly advised that you don't use inline styles or scripts, because they are harder to maintain. This advice still applies to the way you write your Jade. For everything but the smallest one-page projects, tests, and mockups, you should separate your styles and scripts into different files. These files may then be compiled separately and linked to your HTML with style or link tags, or you could include them directly in the Jade. Either way, the important part is that you keep them separated from your markup in your source code.

However, in your compiled HTML you don't need to worry about keeping inlined styles out. The advice about avoiding inline styles applies only to your source code and is purely for making your codebase easier to manage. In fact, according to Best Practices for Speeding Up Your Web Site (http://developer.yahoo.com/performance/rules.html), it is much better to combine your files to minimize HTTP requests, so inlining at compile time is a really good idea. It's also worth noting that, even though Jade can help you inline scripts and styles during compilation, there are better ways to perform these compile-time optimizations.
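Before moving on to those alternatives, here is the minimal sketch of the hand-off promised under Keeping logic out of templates, in plain Node-side JavaScript. It assumes the jade module is installed and that a views/posts.jade template exists; the file name and the data fields are hypothetical and only for illustration:

// Business layer: build plain data, keep rendering concerns out of it.
var jade = require('jade');

// In a real application this object would come from a database or a service.
var data = {
  title: 'Latest posts',
  posts: [
    { slug: 'hello-jade', title: 'Hello Jade' },
    { slug: 'organizing-projects', title: 'Organizing Jade Projects' }
  ]
};

// Presentation layer: the template only formats the data it is given.
var html = jade.renderFile('views/posts.jade', data);
console.log(html);

The business layer only ever deals with the plain data object; everything about how it becomes HTML lives in the template.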
As an example of those better ways, build tools like AssetGraph (https://github.com/assetgraph/assetgraph) can do all the inlining, minifying, and combining you need, without you having to put code to do so in your templates.

Minification
We can pass arguments through filters to compilers for things like minifying. This feature is useful for small projects for which you might not want to set up a full build tool. Minification also reduces the size of your assets, making it a very easy way to speed up your site. However, your markup shouldn't really concern itself with details like how the site is minified, so filter arguments aren't the best solution for minifying. Just like inlining, it is much better to do this with a tool like AssetGraph. That way your markup is free of "build instructions".

Removing style-induced redundancy
A lot of redundant markup is added just to make styling easier: we have wrappers for every conceivable part of the page, empty divs and spans, and plenty of other forms of useless markup. The best way to deal with this is to improve your CSS so it isn't reliant on wrappers and the like. Failing that, we can still use mixins to take that redundancy out of the main part of our code and hide it away until we have better CSS to deal with it. For example, consider the following repetitive navigation bar:

input#home_nav(type='radio', name='nav', value='home', checked)
label(for='home_nav')
  a(href='#home') home
input#blog_nav(type='radio', name='nav', value='blog')
label(for='blog_nav')
  a(href='#blog') blog
input#portfolio_nav(type='radio', name='nav', value='portfolio')
label(for='portfolio_nav')
  a(href='#portfolio') portfolio
//- ...and so on

Instead of using the preceding code, it can be refactored into a reusable mixin as shown in the following code snippet:

mixin navbar(pages)
  - checked = true
  for page in pages
    input(
      type='radio',
      name='nav',
      value=page,
      id="#{page}_nav",
      checked=checked)
    label(for="#{page}_nav")
      a(href="##{page}") #{page}
    - checked = false

The preceding mixin can then be called later in your markup using the following code:

+navbar(['home', 'blog', 'portfolio'])

Semantic divisions
Sometimes, even when there is no redundancy present, dividing templates into separate mixins and blocks can be a good idea. Not only does it provide encapsulation (which makes debugging easier), but the division also represents a logical separation of the different parts of a page. A common example would be dividing a page into a header, footer, sidebar, and main content area. These could be combined into one monolithic file, but putting each in a separate block represents their separation, can make the project easier to navigate, and allows each to be extended individually.

Server-side versus client-side rendering
Since Jade can be used on both the client side and the server side, we can choose to move the rendering of templates off the server. However, there are costs and benefits associated with each approach, so the decision must be made per project.

Client-side rendering
Using the Single Page Application (SPA) design, we can do everything but the compilation of the basic HTML structure on the client side. This allows for a static page that loads content from a dynamic backend and passes that content to Jade templates compiled for client-side usage.
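A minimal browser-side sketch of this approach might look like the following. The /api/posts endpoint, the content element, and the global postsTemplate function are assumptions made only for illustration; the template function would be produced ahead of time with Jade's client-side compilation (for example, compileFileClient) and served to the page together with Jade's runtime script:

// Browser code: fetch JSON from the backend, then let a precompiled Jade
// template turn it into HTML. postsTemplate() is assumed to be a global
// function generated from a Jade template; the endpoint and element id
// are placeholders.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/posts');
xhr.onload = function () {
  var data = JSON.parse(xhr.responseText);        // e.g. { posts: [...] }
  var html = postsTemplate(data);                 // template only formats data
  document.getElementById('content').innerHTML = html;
};
xhr.send();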
For example, we could have a simple webapp that, once loaded, fires off an AJAX request to a server running WordPress with a simple JSON API, and displays the posts it gets by passing the JSON to the templates. The benefits of this design are that the page itself is static (and therefore easily cacheable); that, with the SPA design, navigation is much faster (especially if content is preloaded); and that significantly less data is transferred, because the content arrives in a terse JSON format rather than already wrapped in HTML. Also, we get a very clean separation of content and presentation by actually forcing content to be moved into a CMS and out of the codebase. Finally, we avoid the risk of coupling the rendering too tightly to the CMS by forcing all content to be passed over HTTP as JSON; in fact, the two are so separated that they don't even need to be on the same server.

But there are some issues too: the reliance on JavaScript for loading content means that users who don't have JS enabled will not be able to load content normally, and search engines will not be able to see your content unless you implement _escaped_fragment_ URLs. Thus, some fallback is needed; whether it is a full site that is able to function without JS or just simple HTML snapshots rendered using a headless browser, it is a source of additional work.

Server-side rendering
We can, of course, render everything on the server side and just send regular HTML to the browser. This is the most backwards-compatible approach, since the site will behave just as any static HTML site would, but we don't get any of the benefits of client-side rendering either. We could still use some client-side Jade for enhancements, but the idea is the same: the majority is rendered on the server side, and full HTML pages need to be sent when the user navigates to a new page.

Build systems
Although the Jade compiler is fully capable of compiling projects on its own, in practice it is often better to use a build system, because build systems can make interfacing with the compiler easier. In addition, they often help automate other tasks such as minification, compiling other languages, and even deployment. Some examples of these build systems are Roots (http://roots.cx/), Grunt (http://gruntjs.com/), and even GNU Make (http://www.gnu.org/software/make/). For example, Roots can recompile Jade automatically each time you save it and even refresh an in-browser preview of that page. Continuous recompilation helps you notice errors sooner, and Roots helps you avoid the hassle of manually running a command to recompile.

Summary
In this article, we took a look at some of the best practices to follow when organizing Jade projects. We also looked at the use of third-party tools to automate tasks.

Resources for Article:
Further resources on this subject:
So, what is Node.js? [Article]
RSS Web Widget [Article]
Cross-browser-distributed testing [Article]