Testing WebAssembly modules with Jest [Tutorial]

Sugandha Lahoti
15 Oct 2018
7 min read
WebAssembly (Wasm) represents an important stepping stone for the web platform. Enabling a developer to run compiled code on the web without a plugin or browser lock-in presents many new opportunities. This article is taken from the book Learn WebAssembly by Mike Rourke, which will introduce you to powerful WebAssembly concepts that will help you write lean web applications with native performance.

Well-tested code prevents regression bugs, simplifies refactoring, and alleviates some of the frustrations that go along with adding new features. Once you've compiled a WebAssembly module, you should write tests to ensure it's functioning as expected, even if you've written tests for the C, C++, or Rust code you compiled it from. In this tutorial, we'll use Jest, a JavaScript testing framework, to test the functions in a compiled Wasm module.

The code being tested

All of the code used in this example is located on GitHub. The code and corresponding tests are very simple and are not representative of real-world applications, but they're intended to demonstrate how to use Jest for testing. The following is the file structure of the /testing-example folder:

```
├── /src
|    ├── /__tests__
|    │    └── main.test.js
|    └── main.c
├── package.json
└── package-lock.json
```

The contents of the C file that we'll test, /src/main.c, are as follows:

```c
int addTwoNumbers(int leftValue, int rightValue) {
    return leftValue + rightValue;
}

float divideTwoNumbers(float leftValue, float rightValue) {
    return leftValue / rightValue;
}

double findFactorial(float value) {
    int i;
    double factorial = 1;

    for (i = 1; i <= value; i++) {
        factorial = factorial * i;
    }
    return factorial;
}
```

All three functions in the file perform simple mathematical operations.
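One subtlety is worth noting before we look at the tests: findFactorial takes a float, and because the loop runs while i <= value, a fractional argument such as 9.2 produces 9! = 362880, which is the value the tests later expect. Here is a quick Python mirror of that loop as a scratch check (it is not part of the example project):

```python
def find_factorial(value: float) -> float:
    # Mirrors the C loop: i runs 1, 2, ... while i <= value, so a
    # fractional bound such as 9.2 still stops at i = 9.
    factorial = 1.0
    i = 1
    while i <= value:
        factorial *= i
        i += 1
    return factorial

print(find_factorial(5))    # 120.0
print(find_factorial(9.2))  # 362880.0, that is, 9!
```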
The package.json file includes a script to compile the C file to a Wasm file for testing. Run the following command to compile the C file:

```
npm run build
```

There should now be a file named main.wasm in the /src directory. Let's move on to the testing configuration step.

Testing configuration

The only dependency we'll use for this example is Jest, a JavaScript testing framework built by Facebook. Jest is an excellent choice for testing because it includes most of the features you'll need out of the box, such as coverage, assertions, and mocking. In most cases, you can use it with zero configuration, depending on the complexity of your application. If you're interested in learning more, check out Jest's website at https://jestjs.io.

Open a terminal instance in the /chapter-09-node/testing-example folder and run the following command to install Jest:

```
npm install
```

In the package.json file, there are three entries in the scripts section: build, pretest, and test. The build script executes the emcc command with the required flags to compile /src/main.c to /src/main.wasm. The test script executes the jest command with the --verbose flag, which provides additional details for each of the test suites. The pretest script simply runs the build script to ensure /src/main.wasm exists prior to running any tests.

Tests file review

Let's walk through the test file, located at /src/__tests__/main.test.js, and review the purpose of each section of code.

The first section of the test file instantiates the main.wasm file and assigns the result to the local wasmInstance variable:

```js
const fs = require('fs');
const path = require('path');

describe('main.wasm Tests', () => {
  let wasmInstance;

  beforeAll(async () => {
    const wasmPath = path.resolve(__dirname, '..', 'main.wasm');
    const buffer = fs.readFileSync(wasmPath);
    const results = await WebAssembly.instantiate(buffer, {
      env: {
        memoryBase: 0,
        tableBase: 0,
        memory: new WebAssembly.Memory({ initial: 1024 }),
        table: new WebAssembly.Table({ initial: 16, element: 'anyfunc' }),
        abort: console.log
      }
    });
    wasmInstance = results.instance.exports;
  });
  ...
```

Jest provides life-cycle methods to perform any setup or teardown actions prior to running tests. You can specify functions to run before or after all of the tests (beforeAll()/afterAll()), or before or after each test (beforeEach()/afterEach()). We need a compiled instance of the Wasm module from which we can call exported functions, so we put the instantiation code in the beforeAll() function.

We're wrapping the entire test suite in a describe() block for the file. Jest uses the describe() function to encapsulate suites of related tests, and test() or it() to represent a single test. Here's a simple example of this concept:

```js
const add = (a, b) => a + b;

describe('the add function', () => {
  test('returns 6 when 4 and 2 are passed in', () => {
    const result = add(4, 2);
    expect(result).toEqual(6);
  });

  test('returns 20 when 12 and 8 are passed in', () => {
    const result = add(12, 8);
    expect(result).toEqual(20);
  });
});
```

The next section of code contains all the test suites and tests for each exported function:

```js
...
  describe('the _addTwoNumbers function', () => {
    test('returns 300 when 100 and 200 are passed in', () => {
      const result = wasmInstance._addTwoNumbers(100, 200);
      expect(result).toEqual(300);
    });

    test('returns -20 when -10 and -10 are passed in', () => {
      const result = wasmInstance._addTwoNumbers(-10, -10);
      expect(result).toEqual(-20);
    });
  });

  describe('the _divideTwoNumbers function', () => {
    test.each([
      [10, 100, 10],
      [-2, -10, 5],
    ])('returns %f when %f and %f are passed in', (expected, a, b) => {
      const result = wasmInstance._divideTwoNumbers(a, b);
      expect(result).toEqual(expected);
    });

    test('returns ~3.77 when 20.75 and 5.5 are passed in', () => {
      const result = wasmInstance._divideTwoNumbers(20.75, 5.5);
      expect(result).toBeCloseTo(3.77, 2);
    });
  });

  describe('the _findFactorial function', () => {
    test.each([
      [120, 5],
      [362880, 9.2],
    ])('returns %p when %p is passed in', (expected, input) => {
      const result = wasmInstance._findFactorial(input);
      expect(result).toEqual(expected);
    });
  });
});
```

The first describe() block, for the _addTwoNumbers() function, has two test() instances to ensure that the function returns the sum of the two numbers passed in as arguments. The next two describe() blocks, for the _divideTwoNumbers() and _findFactorial() functions, use Jest's .each feature, which allows you to run the same test with different data. The expect() function allows you to make assertions on the value passed in as an argument. The .toBeCloseTo() assertion in the last _divideTwoNumbers() test checks whether the result is within two decimal places of 3.77. The rest use the .toEqual() assertion to check for equality.

Writing tests with Jest is relatively simple, and running them is even easier! Let's try running our tests and reviewing some of the CLI flags that Jest provides.
Running the Wasm tests

To run the tests, open a terminal instance in the /chapter-09-node/testing-example folder and run the following command:

```
npm test
```

You should see the following output in your terminal:

```
main.wasm Tests
  the _addTwoNumbers function
    ✓ returns 300 when 100 and 200 are passed in (4ms)
    ✓ returns -20 when -10 and -10 are passed in
  the _divideTwoNumbers function
    ✓ returns 10 when 100 and 10 are passed in
    ✓ returns -2 when -10 and 5 are passed in (1ms)
    ✓ returns ~3.77 when 20.75 and 5.5 are passed in
  the _findFactorial function
    ✓ returns 120 when 5 is passed in (1ms)
    ✓ returns 362880 when 9.2 is passed in

Test Suites: 1 passed, 1 total
Tests:       7 passed, 7 total
Snapshots:   0 total
Time:        1.008s
Ran all test suites.
```

If you have a large number of tests, you could remove the --verbose flag from the test script in package.json and only pass the flag to the npm test command when needed. There are several other CLI flags you can pass to the jest command. The following list contains some of the more commonly used flags:

- --bail: Exits the test run immediately upon the first failing test suite
- --coverage: Collects test coverage and displays it in the terminal after the tests have run
- --watch: Watches files for changes and reruns tests related to changed files

You can pass these flags to the npm test command by adding them after a --. For example, if you wanted to use the --bail flag, you'd run this command:

```
npm test -- --bail
```

You can view the entire list of CLI options on the official site at https://jestjs.io/docs/en/cli.

In this article, we saw how the Jest testing framework can be leveraged to test a compiled WebAssembly module and ensure it's functioning correctly. To learn more about WebAssembly and its capabilities, read the book Learn WebAssembly.

How to Implement a Neural Network with Single-Layer Perceptron

Pravin Dhandre
28 Dec 2017
10 min read
Note: This article is an excerpt from the book Machine Learning for Developers by Rodolfo Bonnin, a systematic developer's guide to various machine learning algorithms and techniques for building more efficient and intelligent applications.

In this article, we walk through a simple implementation of a neural network layer by modeling a binary function using basic Python techniques. It is the first step in solving some of the complex machine learning problems using neural networks. We start with the following imports:

```python
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
from pprint import pprint
# Jupyter notebook magic for inline plots
%matplotlib inline
from sklearn import datasets
```

Defining and graphing transfer function types

The learning properties of a neural network would not be very good with just the help of a univariate linear classifier. Even some mildly complex problems in machine learning involve multiple non-linear variables, so many variants were developed as replacements for the transfer function of the perceptron.

In order to represent non-linear models, a number of different non-linear functions can be used as the activation function. This implies changes in the way the neurons react to changes in the input variables. In the following sections, we will define the main transfer functions, and define and represent them via code.

In this section, we will start using some object-oriented programming (OOP) techniques from Python to represent entities of the problem domain. This will allow us to represent concepts in a much clearer way in the examples. Let's start by creating a TransferFunction class, which will contain the following two methods:

- getTransferFunction(x): This method returns an activation function determined by the class type
- getTransferFunctionDerivative(x): This method returns the derivative of that activation function

For both functions, the input will be a NumPy array and the function will be applied element by element, as follows:

```python
class TransferFunction:
    def getTransferFunction(x):
        raise NotImplementedError
    def getTransferFunctionDerivative(x):
        raise NotImplementedError
```

Representing and understanding the transfer functions

Let's first prepare a function that will be used to graph all the transfer functions with their derivatives over a common range of -2.0 to 2.0, which will allow us to see their main characteristics around the y axis:

```python
def graphTransferFunction(function):
    x = np.arange(-2.0, 2.0, 0.01)
    plt.figure(figsize=(18, 8))
    ax = plt.subplot(121)
    ax.set_title(function.__name__)
    plt.plot(x, function.getTransferFunction(x))

    ax = plt.subplot(122)
    ax.set_title('Derivative of ' + function.__name__)
    plt.plot(x, function.getTransferFunctionDerivative(x))
```

Sigmoid or logistic function

A sigmoid or logistic function is the canonical activation function and is well suited for calculating probabilities in classification tasks. The classical formula for the sigmoid function is as follows:
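The formula image from the original article is not reproduced here; in standard notation, the sigmoid and its derivative are:

$$\sigma(x) = \frac{1}{1 + e^{-x}}, \qquad \sigma'(x) = \sigma(x)\,\bigl(1 - \sigma(x)\bigr)$$

Note that the derivative can be expressed in terms of the function's own output. This is why the code below computes it as x * (1 - x): it assumes x already holds the sigmoid activation, a convention the training loop at the end of this article relies on.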
```python
class Sigmoid(TransferFunction):  # squashes output into (0, 1)
    def getTransferFunction(x):
        return 1 / (1 + np.exp(-x))
    def getTransferFunctionDerivative(x):
        return x * (1 - x)  # assumes x is already the sigmoid output

graphTransferFunction(Sigmoid)
```

Playing with the sigmoid

Next, we will do an exercise to get an idea of how the sigmoid changes when multiplied by the weights and shifted by the bias to accommodate the final function towards its minimum. Let's vary the possible parameters of a single sigmoid first and watch it stretch and move:

```python
ws = np.arange(-1.0, 1.0, 0.2)
bs = np.arange(-2.0, 2.0, 0.2)
xs = np.arange(-4.0, 4.0, 0.1)
plt.figure(figsize=(20, 10))
ax = plt.subplot(121)
for i in ws:
    plt.plot(xs, Sigmoid.getTransferFunction(i * xs), label=str(i))
ax.set_title('Sigmoid variants in w')
plt.legend(loc='upper left')

ax = plt.subplot(122)
for i in bs:
    plt.plot(xs, Sigmoid.getTransferFunction(i + xs), label=str(i))
ax.set_title('Sigmoid variants in b')
plt.legend(loc='upper left')
```

Hyperbolic tangent

The tanh function squashes its input into the range (-1, 1), and its derivative is 1 - tanh²(x):

```python
class Tanh(TransferFunction):  # squashes output into (-1, 1)
    def getTransferFunction(x):
        return np.tanh(x)
    def getTransferFunctionDerivative(x):
        return 1 - np.power(np.tanh(x), 2)

graphTransferFunction(Tanh)
```

Rectified linear unit or ReLU

ReLU is called a rectified linear unit, and one of its main advantages is that it is not affected by the vanishing gradient problem, in which the gradients in the first layers of a network tend towards zero, or a tiny epsilon:

```python
class Relu(TransferFunction):
    def getTransferFunction(x):
        return x * (x > 0)
    def getTransferFunctionDerivative(x):
        return 1 * (x > 0)

graphTransferFunction(Relu)
```

Linear transfer function

Let's take a look at the following code snippet to understand the linear transfer function:

```python
class Linear(TransferFunction):
    def getTransferFunction(x):
        return x
    def getTransferFunctionDerivative(x):
        return np.ones(len(x))

graphTransferFunction(Linear)
```

Defining loss functions for neural networks

As with every model in machine learning, we will explore the possible functions that we will use to determine how well our predictions and classifications went. The first distinction we will make is between the L1 and L2 error function types.

L1, also known as least absolute deviations (LAD) or least absolute errors (LAE), has very interesting properties: it simply consists of the absolute difference between the final result of the model and the expected one, as follows:
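The formula images from the original article are not reproduced here; in standard notation, the two error types are:

$$L_1 = \sum_i \lvert y_i - \hat{y}_i \rvert, \qquad L_2 = \sum_i \bigl(y_i - \hat{y}_i\bigr)^2$$

where $\hat{y}_i$ is the model's estimate and $y_i$ is the expected value.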
L1 versus L2 properties

Now it's time to do a head-to-head comparison between the two types of loss function:

- Robustness: L1 is the more robust loss function, robustness being the resistance of a function to being thrown off by outliers, which a quadratic function projects to very high values. Thus, in order to choose an L2 function, we would need very stringent data cleaning for it to be efficient.
- Stability: The stability property assesses how much the error curve jumps for a large error value. L1 is more unstable, especially for non-normalized datasets (because numbers in the [-1, 1] range diminish when squared).
- Solution uniqueness: As can be inferred from its quadratic nature, the L2 function ensures that we will have a unique answer in our search for a minimum. L2 always has a unique solution, but L1 can have many solutions, because we can find many paths of minimal length for our models in the form of piecewise linear functions, compared to the single line distance in the case of L2.

Regarding usage, the sum of the preceding properties allows us to use the L2 error type in normal cases, especially because of the solution uniqueness, which gives us the required certainty when starting to minimize error values. In the first example, we will start with the simpler L1 error function, for educational purposes.

Let's explore these two approaches by graphing the error results for a sample L1 and L2 loss function. In this simple example, we will show you the very different nature of the two errors. In the first four samples, the values are normalized between -1 and 1; the rest lie outside that range. As you can see, from samples 0 to 3 the quadratic error increases steadily and continuously, but with non-normalized data it can explode, especially with outliers, as shown in the following code snippet:

```python
sampley_ = np.array([.1, .2, .3, -.4, -1, -3, 6, 3])
sampley = np.array([.2, -.2, .6, .10, 2, -1, 3, -1])

plt.figure(figsize=(10, 10))
ax = plt.subplot()
plt.plot(sampley_ - sampley, label='L1')
plt.plot(np.power((sampley_ - sampley), 2), label="L2")
ax.set_title('L1 vs L2 initial comparison')
plt.legend(loc='best')
plt.show()
```

Let's define the loss functions in the form of a LossFunction class with a getLoss method for the L1 and L2 loss function types, receiving two NumPy arrays as parameters: y_, the estimated function value, and y, the expected value:

```python
class LossFunction:
    def getLoss(y_, y):
        raise NotImplementedError

class L1(LossFunction):
    def getLoss(y_, y):
        return np.sum(np.abs(y_ - y))

class L2(LossFunction):
    def getLoss(y_, y):
        return np.sum(np.power(y_ - y, 2))
```

Now it's time to define the goal function, which we will define as a simple Boolean function. In order to allow faster convergence, it has a direct relationship between the first input variable and the function's outcome:

```python
# input dataset
X = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])

# output dataset
y = np.array([[0, 0, 1, 1]]).T
```

The first model we will use is a very minimal neural network with three cells and a weight for each one, without bias, in order to keep the model's complexity to a minimum:

```python
# initialize weights randomly with mean 0
W = 2 * np.random.random((3, 1)) - 1
print(W)
```

The preceding code generates output like the following (your values will differ, as the weights are random):

```
[[ 0.52014909]
 [-0.25361738]
 [ 0.165037  ]]
```

Then we will define a set of variables to collect the model's error, the weights, and the progression of the training results:

```python
errorlist = np.empty(3)
weighthistory = np.array(0)
resultshistory = np.array(0)
```

Then it's time to do the iterative error minimization. In this case, it will consist of feeding the whole truth table 100 times via the weights and the neuron's transfer function, adjusting the weights in the direction of the error.
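In matrix form, the forward pass and weight update implemented in the loop below are (with no learning-rate factor):

$$l_1 = \sigma(l_0 W), \qquad \delta = (y - l_1) \odot l_1 \odot (1 - l_1), \qquad W \leftarrow W + l_0^{\top}\delta$$

where $l_0$ is the 4×3 input matrix, $W$ is the 3×1 weight vector, and $\odot$ denotes element-wise multiplication.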
Note that this model doesn't use a learning rate, so it should converge (or diverge) quickly:

```python
for iter in range(100):
    # forward propagation
    l0 = X
    l1 = Sigmoid.getTransferFunction(np.dot(l0, W))
    resultshistory = np.append(resultshistory, l1)

    # error calculation
    l1_error = y - l1
    errorlist = np.append(errorlist, l1_error)

    # back propagation 1: get the deltas
    l1_delta = l1_error * Sigmoid.getTransferFunctionDerivative(l1)

    # update weights
    W += np.dot(l0.T, l1_delta)
    weighthistory = np.append(weighthistory, W)
```

Let's review the last evaluation step by printing the output values at l1. We can see that we are now reflecting the output of the original function quite closely:

```python
print(l1)
```

The output generated by running the preceding code looks like this:

```
[[ 0.11510625]
 [ 0.08929355]
 [ 0.92890033]
 [ 0.90781468]]
```

To better understand the process, let's have a look at how the parameters change over time. First, let's graph the neuron weights. As you can see, they go from a random state to accepting the whole value of the first column (which is always right), going to almost 0 for the second column (which is right only 50% of the time), and then going to -2 for the third (mainly because it has to trigger 0 in the first two rows of the table):

```python
plt.figure(figsize=(20, 20))
print(W)
plt.imshow(np.reshape(weighthistory[1:], (-1, 3))[:40],
           cmap=plt.cm.gray_r, interpolation='nearest')
```

The printed weights look like the following:

```
[[ 4.62194116]
 [-0.28222595]
 [-2.04618725]]
```

Let's also review how our solutions evolved (during the first 40 iterations) until we reached the last iteration; we can clearly see the convergence towards the ideal values:

```python
plt.figure(figsize=(20, 20))
plt.imshow(np.reshape(resultshistory[1:], (-1, 4))[:40],
           cmap=plt.cm.gray_r, interpolation='nearest')
```

We can also see how the error evolves and tends towards zero through the different epochs. In this case, we can observe that it swings from negative to positive, because we are recording the raw signed errors rather than their absolute (L1-style) magnitudes:

```python
plt.figure(figsize=(10, 10))
plt.plot(errorlist)
```

The above walk-through of implementing a neural network with a single-layer perceptron shows how to create and play with transfer functions, and how to examine how accurately the resulting model classifies and predicts the dataset. To learn how classification is generally done on complex and large datasets, you may read our article on multi-layer perceptrons. To get hands-on with advanced concepts and powerful tools for solving complex computational machine learning problems, check out the book Machine Learning for Developers and start building smart applications in your machine learning projects.
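As a final sanity check, the whole example above can be condensed into a self-contained sketch (the fixed random seed is an addition here, purely for reproducibility) that thresholds the final activations at 0.5 to recover the Boolean target:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

np.random.seed(1)                           # added seed, for reproducibility
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([[0, 0, 1, 1]]).T
W = 2 * np.random.random((3, 1)) - 1        # random weights with mean 0

for _ in range(100):
    l1 = sigmoid(X @ W)                     # forward pass
    W += X.T @ ((y - l1) * l1 * (1 - l1))   # backprop step, no learning rate

print((l1 > 0.5).astype(int).ravel())       # expected: [0 0 1 1]
```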

Best practices for C# code optimization [Tutorial]

Aaron Lazar
17 Aug 2018
9 min read
There are many factors that negatively impact the performance of a .NET Core application. Sometimes these are minor things that were not considered at the time of writing the code, and are not addressed by the accepted best practices. As a result, to solve these problems, programmers often resort to ad hoc solutions. However, when bad practices are combined, they produce performance issues. It is always better to know the best practices that help developers write cleaner code and make the application performant. In this article, we will cover the following topics:

- Boxing and unboxing overhead
- String concatenation
- Exception handling
- for versus foreach
- Delegates

This tutorial is an extract from the book C# 7 and .NET Core 2.0 High Performance, authored by Ovais Mehboob Ahmed Khan.

Boxing and unboxing overhead

Boxing and unboxing are not always good to use, and they negatively impact the performance of mission-critical applications. Boxing is the conversion of a value type to an object type, and is done implicitly, whereas unboxing is the conversion of an object type back to a value type, and requires explicit casting.

Let's go through an example where we have two methods executing a loop of one million iterations, each iteration incrementing a counter by 1. The AvoidBoxingUnboxing method uses a primitive integer to initialize and increment the counter on each iteration, whereas the BoxingUnboxing method boxes by assigning the numeric value to an object type first and then unboxes it on each iteration to convert it back to the integer type, as shown in the following code:

```csharp
private static void AvoidBoxingUnboxing()
{
    Stopwatch watch = new Stopwatch();
    watch.Start();
    int counter = 0;
    for (int i = 0; i < 1000000; i++)
    {
        // No boxing or unboxing involved
        counter = i + 1;
    }
    watch.Stop();
    Console.WriteLine($"Time taken {watch.ElapsedMilliseconds}");
}

private static void BoxingUnboxing()
{
    Stopwatch watch = new Stopwatch();
    watch.Start();
    // Boxing
    object counter = 0;
    for (int i = 0; i < 1000000; i++)
    {
        // Unboxing
        counter = (int)i + 1;
    }
    watch.Stop();
    Console.WriteLine($"Time taken {watch.ElapsedMilliseconds}");
}
```

When we run both methods, the difference in performance is clear: the BoxingUnboxing method executes roughly seven times slower than the AvoidBoxingUnboxing method.

For mission-critical applications, it's always better to avoid boxing and unboxing. However, in .NET Core, many other types internally use objects and perform boxing and unboxing. Most of the types under System.Collections and System.Collections.Specialized use objects and object arrays for internal storage, and when we store primitive types in these collections, they perform boxing and convert each primitive value to an object type, adding extra overhead and negatively impacting the performance of the application. Other types under System.Data, namely DataSet, DataTable, and DataRow, also use object arrays under the hood.

Types under the System.Collections.Generic namespace, or typed arrays, are the best approaches to use when performance is the primary concern. For example, HashSet<T>, LinkedList<T>, and List<T> are all generic collections.
For example, here is a program that stores integer values in an ArrayList:

```csharp
private static void AddValuesInArrayList()
{
    Stopwatch watch = new Stopwatch();
    watch.Start();
    ArrayList arr = new ArrayList();
    for (int i = 0; i < 1000000; i++)
    {
        arr.Add(i);   // each int is boxed before being stored
    }
    watch.Stop();
    Console.WriteLine($"Total time taken is {watch.ElapsedMilliseconds}");
}
```

Let's write another program that uses a generic list of the integer type:

```csharp
private static void AddValuesInGenericList()
{
    Stopwatch watch = new Stopwatch();
    watch.Start();
    List<int> lst = new List<int>();
    for (int i = 0; i < 1000000; i++)
    {
        lst.Add(i);   // stored as a plain int, no boxing
    }
    watch.Stop();
    Console.WriteLine($"Total time taken is {watch.ElapsedMilliseconds}");
}
```

When running both programs, the difference is pretty noticeable: the code with the generic List<int> is over 10 times faster than the code with ArrayList.

String concatenation

In .NET, strings are immutable objects. Two strings refer to the same memory on the heap until a string value is changed, at which point a new string is created on the heap and allocated a new memory space. Immutable objects are generally thread-safe and eliminate race conditions between multiple threads: any change to a string value creates and allocates a new object in memory, avoiding conflicting scenarios between threads.

For example, let's initialize a string and assign it the value Hello World:

```csharp
String a = "Hello World";
```

Now, let's assign the a string variable to another variable, b:

```csharp
String b = a;
```

Both a and b now point to the same value on the heap. Now, suppose we change the value of b to Hope this helps:

```csharp
b = "Hope this helps";
```

This creates another object on the heap, where a points to the same memory as before and b refers to the new memory space that contains the new text.

With each change to a string, a new memory space is allocated. In a scenario where the frequency of string modification is high, each modification allocating a separate memory space creates work for the garbage collector in collecting the unused objects and freeing up space. In such a scenario, it is highly recommended that you use the StringBuilder class.

Exception handling

Improper handling of exceptions also decreases the performance of an application. The following list contains some of the best practices for dealing with exceptions in .NET Core:

- Always catch a specific exception type, or a type that can catch the exception, for the code you have written in the method. Using the base Exception type for all cases is not a good practice.
- It is always a good practice to use try, catch, and finally blocks where the code can throw exceptions. The finally block is usually used to clean up resources and return a proper response that the calling code is expecting.
- In deeply nested code, don't wrap every level in a try-catch block; let the exception bubble up and handle it in the calling method or the main method. Catching exceptions at multiple stack levels slows down performance and is not recommended.
- Always use exceptions for fatal conditions that terminate the program. Using exceptions for noncritical conditions, such as converting a value to an integer or reading a value from an empty array, is not recommended and should be handled through custom logic.
For example, converting a string value to the integer type can be done by using the Int32.TryParse method, which returns false rather than throwing when the string does not represent a number, instead of using the Convert.ToInt32 method and failing at the point where the string is not a digit.

- While throwing an exception, add a meaningful message so that the user knows where the exception actually occurred, rather than having to go through the stack trace. For example, the following code shows a way of throwing an exception and adding a custom message based on the method and class being called:

```csharp
static string GetCountryDetails(Dictionary<string, string> countryDictionary, string key)
{
    try
    {
        return countryDictionary[key];
    }
    catch (KeyNotFoundException ex)
    {
        KeyNotFoundException argEx = new KeyNotFoundException(
            "Error occurred while executing GetCountryDetails method. " +
            "Cause: Key not found", ex);
        throw argEx;
    }
}
```

- Throw exceptions rather than returning custom messages or error codes, and handle them in the main calling method.
- When logging exceptions, always check the inner exception and read the exception message and stack trace. This is helpful, and gives the actual point in the code where the error was thrown.

for versus foreach

for and foreach are two alternative ways of iterating over a list of items, and each operates in a different way. The for loop loads all the items of the list into memory first and then uses an indexer to iterate over each element, whereas foreach uses an enumerator and iterates until it reaches the end of the list.

The following table shows the types of collections that are good to use with for and foreach:

| Type | for/foreach |
| --- | --- |
| Typed array | Good for both |
| Array list | Better with for |
| Generic collections | Better with for |

Delegates

Delegates are a type in .NET that holds a reference to a method; the type is equivalent to a function pointer in C or C++. When defining a delegate, we specify both the parameters that the method takes and its return type. This way, the referenced methods all have the same signature.

Here is a simple delegate that takes a string and returns an integer:

```csharp
delegate int Log(string n);
```

Now, suppose we have a LogToConsole method that has the same signature, as shown in the following code. This method takes a string and writes it to the console window:

```csharp
static int LogToConsole(string a)
{
    Console.WriteLine(a);
    return 1;
}
```

We can initialize and use this delegate like this:

```csharp
Log logDelegate = LogToConsole;
logDelegate("This is a simple delegate call");
```

Suppose we have another method called LogToDatabase that writes the information to a database:

```csharp
static int LogToDatabase(string a)
{
    Console.WriteLine(a);
    // Log to database
    return 1;
}
```

Here is the initialization of a new Log delegate instance that references the LogToDatabase method:

```csharp
Log logDelegateDatabase = LogToDatabase;
logDelegateDatabase("This is a simple delegate call");
```

The preceding delegates are unicast delegates, as each instance refers to a single method. On the other hand, we can also create a multicast delegate by adding LogToDatabase to the same logDelegate instance, as follows:

```csharp
Log logDelegate = LogToConsole;
logDelegate += LogToDatabase;
logDelegate("This is a simple delegate call");
```

The preceding code seems pretty straightforward and optimized, but under the hood it has a performance overhead. In .NET, delegates are implemented by the MulticastDelegate class, which is optimized to run unicast delegates: for a unicast delegate, it stores the method reference in the target property and calls the method directly.
For multicast delegates, it uses an invocation list, which is a generic list holding references to each method that has been added. With multicast delegates, the target property holds the reference to that generic list, and the methods are executed in sequence. This adds overhead for multicast delegates, which therefore take more time to execute.

If you liked this article and would like to learn more such techniques, grab the book C# 7 and .NET Core 2.0 High Performance, authored by Ovais Mehboob Ahmed Khan.
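As a closing, language-neutral sketch of that invocation-list mechanism (written in Python here purely for illustration; the names and the invoke_all helper are made up, not part of .NET), a multicast delegate behaves like a list of callables invoked in sequence:

```python
from typing import Callable, List

Log = Callable[[str], int]  # same shape as: delegate int Log(string n);

def log_to_console(msg: str) -> int:
    print(msg)
    return 1

def log_to_database(msg: str) -> int:
    print(f"(db) {msg}")  # stand-in for a real database write
    return 1

def invoke_all(invocation_list: List[Log], msg: str) -> int:
    # Roughly what MulticastDelegate does under the hood: walk the
    # invocation list, calling each target in sequence; only the
    # last return value survives.
    result = 0
    for target in invocation_list:
        result = target(msg)
    return result

targets: List[Log] = [log_to_console]   # unicast: a single target
targets.append(log_to_database)         # "+=" turns it into multicast
invoke_all(targets, "This is a simple delegate call")
```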

Python 3.8 new features: the walrus operator, positional-only parameters, and much more

Bhagyashree R
18 Jul 2019
5 min read
Earlier this month, the team behind Python announced the release of Python 3.8b2, the second of four planned beta releases. Ahead of the third beta release, which is scheduled for 29th July, we look at some of the key features coming to Python 3.8.

The "incredibly controversial" walrus operator

The walrus operator was proposed in PEP 572 (Assignment Expressions) by Chris Angelico, Tim Peters, and Guido van Rossum last year. Since then, it has been heavily discussed in the Python community, with many questioning whether it is a needed improvement; others were excited, as the operator does make the code a tiny bit more readable. The PEP discussion ended with Guido van Rossum stepping down as BDFL (benevolent dictator for life) and the creation of a new governance model.

In an interview with InfoWorld, Guido shared, "The straw that broke the camel's back was a very contentious Python enhancement proposal, where after I had accepted it, people went to social media like Twitter and said things that really hurt me personally. And some of the people who said hurtful things were actually core Python developers, so I felt that I didn't quite have the trust of the Python core developer team anymore."

According to PEP 572, the assignment expression is a syntactical operator that allows you to assign values to a variable as part of an expression. Its aim is to simplify things like multiple-pattern matches and the so-called loop and a half. At PyCon 2019, Dustin Ingram, a PyPI maintainer, gave a few examples of where you can use this syntax:

- Balancing lines of code and complexity
- Avoiding inefficient comprehensions
- Avoiding unnecessary variables in scope

You can watch the full talk on YouTube: https://www.youtube.com/watch?v=6uAvHOKofws

The feature was implemented by Emily Morehouse, Python core developer and Founder and Director of Engineering at Cuttlesoft, and was merged earlier this year: https://twitter.com/emilyemorehouse/status/1088593522142339072

Explaining other improvements this feature brings, Jake Edge, a contributor on LWN.net, wrote, "These and other uses (e.g. in list and dict comprehensions) help make the intent of the programmer clearer. It is a feature that many other languages have, but Python has, of course, gone without it for nearly 30 years at this point. In the end, it is actually a fairly small change for all of the uproar it caused."
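As a quick illustrative sketch (the regex search is an arbitrary example, not from the article), here is the assign-then-test pattern the operator streamlines:

```python
import re

line = "Python 3.8 ships in October"

# Before Python 3.8: assign first, then test.
match = re.search(r"\d+\.\d+", line)
if match:
    print(match.group())

# With the walrus operator, assignment happens inside the condition.
if match := re.search(r"\d+\.\d+", line):
    print(match.group())  # prints "3.8"
```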
Positional-only parameters

Proposed in PEP 570, this introduces a new syntax (/) to specify positional-only parameters in Python function definitions, similar to how * indicates that the arguments to its right are keyword-only. This syntax is already used by many CPython built-in and standard library functions, for instance, the pow() function:

```python
pow(x, y, z=None, /)
```

This syntax gives library authors more control over better expressing the intended usage of an API and allows the API to "evolve in a safe, backward-compatible way." It gives library authors the flexibility to change the name of positional-only parameters without breaking callers. Additionally, it also ensures consistency of the Python language with existing documentation and with the behavior of various "builtin" and standard library functions.

As with PEP 572, this proposal also got mixed reactions from Python developers. In support, one developer said, "Position-only parameters already exist in cpython builtins like range and min. Making their support at the language level would make their existence less confusing and documented." Others think that this will allow authors to "dictate" how their methods can be used: "Not the biggest fan of this one because it allows library authors to overly dictate how their functions can be used, as in, mark an argument as positional merely because they want to. But cool all the same," a Redditor commented.

Debug support for f-strings

Formatted strings (f-strings) were introduced in Python 3.6 with PEP 498. They enable you to evaluate an expression as part of the string, insert the results of function calls, and so on. In Python 3.8, an additional syntax change adds an = specifier for ease of debugging. You can use this feature like this:

```python
print(f'{foo=} {bar=}')
```

This provides developers a better way of doing "print-style debugging", especially for those who have a background in languages that already have such a feature, such as Perl, Ruby, and JavaScript. One developer expressed his delight on Hacker News: "F strings are pretty awesome. I'm coming from JavaScript and partially java background. JavaScript's String concatenation can become too complex and I have difficulty with large strings."

Python initialization configuration

Though Python is highly configurable, its configuration is scattered all around the code. PEP 587 introduces a new C API to configure the Python initialization, giving developers finer control over the configuration and better error reporting. Among the improvements this API will bring are the ability to read and modify the configuration before it is applied, and overriding how Python computes the module search paths (sys.path).

Along with these, there are many other exciting features coming to Python 3.8, which is currently scheduled for October, including a fast calling protocol for CPython (Vectorcall), support for out-of-band buffers in pickle protocol 5, and more. You can find the full list on Python's official website.
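To close, here is a small sketch exercising the new syntax together (it requires Python 3.8 or later; the hypotenuse function and the values are made up for illustration):

```python
import math

def hypotenuse(a, b, /):                 # '/' makes a and b positional-only (PEP 570)
    return math.sqrt(a * a + b * b)

values = [3.0, 6.0, 9.0]
while (v := values.pop()) > 4.0:         # assignment expression (PEP 572)
    print(f"{v=} {hypotenuse(v, 4.0)=:.2f}")  # '=' specifier echoes the expression
# hypotenuse(a=3.0, b=4.0) would raise a TypeError: a and b are positional-only
```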

The Game World

Packt
23 Feb 2016
39 min read
In this article, we will cover the basics of creating immersive areas where players can walk around and interact, as well as some of the techniques used to manage those areas. The article will give you some practical tips and tricks for the spritesheet system introduced with Unity 4.3 and how to get it to work for you. Lastly, we will also have a cursory look at how shaders work in the 2D world and the considerations you need to keep in mind when using them. However, we won't be implementing shaders, as that could be another book in itself.

The following topics will be covered in this article:

- Working with environments
- Looking at sprite layers
- Handling multiple resolutions
- An overview of parallaxing and effects
- Shaders in 2D – an overview

Backgrounds and layers

Now that we have our hero in play, it would be nice to give him a place to live and walk around, so let's set up the home town and decorate it.

Firstly, we are going to need some more assets. From the asset pack you downloaded earlier, grab the following assets from the Environments pack, place them in the Assets/Sprites/Environment folder, and name them as follows:

- Name the ENVIRONMENTS/STEAMPUNK/background01.png file Assets/Sprites/Environment/background01
- Name the ENVIRONMENTS/STEAMPUNK/environmentalAssets.png file Assets/Sprites/Environment/environmentalAssets
- Name the ENVIRONMENTS/FANTASY/environmentalAssets.png file Assets/Sprites/Environment/environmentalAssets2

To slice or not to slice

It is always better to pack many of the same images onto a single asset/atlas and then use the Sprite Editor to define the regions on that texture for each sprite, as long as all the sprites on that sheet are going to be used in the same scene. The reason for this is that when Unity draws to the screen, it needs to send the images to the graphics card; if there are many images to send, this can take some time, whereas a single image is a lot simpler and more performant, with only one file to send.

There needs to be a balance: too large an image and the upload to the graphics card can take up too many resources; too many individual images and you have the same problem. The basic rules of thumb are as follows:

- If the background is a full-screen background or large image, then keep it separate.
- If you have many images and all are for the same scene, then put them into a spritesheet/atlas.
- If you have many images but they are for different scenes, then group them as best you can—common items on one sheet and scene-specific items on different sheets. You'll have several spritesheets to use.

You basically want to keep as much together as makes sense, and not send unnecessary images that won't get used to the graphics card. Find your balance.

The town background

First, let's add a background for the town using the Assets/Sprites/Environment/background01 texture. With the background asset, we don't need to do anything other than ensure that it has been imported as a sprite (in case your project is still in 3D mode).

The town buildings

The steampunk environmental assets (Assets/Sprites/Environment/environmentalAssets) need a bit more work: once these assets are imported, change Sprite Mode to Multiple and load up the Sprite Editor using the Sprite Editor button.
Next, click on the Slice button, leave the settings at their default options, and then click on the Slice button in the new window. Click on Apply and close the Sprite Editor; you will now have four new sprite textures available.

The extra scenery

We saw what happens when you use a grid-type split on a spritesheet and when the automatic split works well, so what about when it doesn't go so well? If we import the Fantasy environment pack (Assets/Sprites/Environment/environmentalAssets2) and run Slice in the Sprite Editor, we will notice that one of the sprites does not get detected very well: only two of the rocks in the top-right sprite are identified by the splicing routine. Altering the automatic split settings doesn't help in this case, so we need to do some manual manipulation. To fix the selection, just delete one of the detected regions and then expand the other manually using the selection points in the corner of the selection box (after clicking on the sprite box).

This gives us some nice additional assets to scatter around our town and give it a more homely feel.

Building the scene

So, now that we have some nice assets to build with, we can start building our first town.

Adding the town background

Returning to the scene view, add the town background texture (Assets/Sprites/Backgrounds/Background.png) to the scene by dragging it to either the project hierarchy or the scene view. Be sure to set the background texture position appropriately once you add it to the scene; in this case, ensure the position of the transform is centered in the view at X = 0, Y = 0, Z = 0. Unity has a tendency to set the position relative to where your 3D view is at the time of adding it—almost never where you want it.

With the background in place, our player has vanished! The reason for this is simple: Unity's sprite system has an ordering system that comes in two parts.

Sprite sorting layers

Sorting Layers (Edit | Project Settings | Tags and Layers) are collections of sprites bulked together to form a single group. Layers can be configured to be drawn in a specific order on the screen.

Sprite sorting order

Sprites within an individual layer can also be sorted, allowing you to control the draw order of sprites within that layer. The sprite Inspector is used for this purpose.

A sprite's Sorting Layer should not be confused with Unity's rendering layers. Layers are a separate functionality used to control whether groups of game objects are drawn or managed together, whereas Sorting Layers control the draw order of sprites in a scene.

So the reason our player is no longer seen is that he is behind the background: as both sprites are in the same layer and have the same sort order, they are simply drawn in the order that they appear in the project hierarchy.

Updating the scene Sorting Layers

To resolve the scene's draw order, let's organize our sprite rendering by adding some sprite Sorting Layers.
Open up the Tags and Layers inspector pane (by navigating to Edit | Project Settings | Tags and Layers) and add the following Sorting Layers:

- Background
- Player
- Foreground
- GUI

You can reorder the layers underneath Default at any time by selecting a row and dragging it up and down the Sorting Layers list.

With the layers set up, we can now configure our game objects accordingly. Set the Sorting Layer on our background01 sprite to the Background layer, then update the PlayerSprite layer to Player; our character will now be displayed in front of the background.

You could instead keep both objects on the same layer and set the Sort Order value appropriately, keeping the background at a Sort Order of 0 and the player at 10, which draws the player in front. However, as you add more items to the scene, things get tricky quickly, so it is better to group them in layers accordingly.

Now when we return to the scene, our hero is happily displayed, but he is hovering in the middle of our village. Let's fix that by changing his position transform in the Inspector window: setting the Y position to -2 will place our hero nicely in the middle of the street (provided you have set the pivot for the player sprite to bottom).

Feel free at this point to also add some more background elements, such as trees and buildings, to fill out the scene using the environment assets we imported earlier.

Working with the camera

If you try to move the player left and right at the moment, our hero happily bobs along. However, you will quickly notice that we run into a problem: the hero soon disappears off the edge of the screen. To solve this, we need to make the camera follow the hero.

When creating new scripts to implement something, remember that just about every game that has been made with Unity has most likely implemented either the same thing or something similar. Most developers just get on with it, but others—including the Unity team itself—are keen to share their scripts to solve these challenges, so in most cases we will have something to work from. Don't start a script from scratch (unless it is a very small one to solve a tiny issue) if you can help it; here are some resources to get you started:

- Unity sample projects: http://Unity3d.com/learn/tutorials/projects
- Unity Patterns: http://unitypatterns.com/
- Unity wiki scripts section: http://wiki.Unity3d.com/index.php/Scripts (also check the other material there for more detail)

Once you become more experienced, it is better to just use these scripts as a reference and try to create your own and improve on them, unless they are from a maintained library such as https://github.com/nickgravelyn/UnityToolbag.

To make the camera follow the player, we'll take the script from the Unity 2D sample and modify it to fit our game. This script is nice because it also includes a Mario-style buffer zone, which allows the player to move without moving the camera until they reach the edge of the screen.

Create a new script called FollowCamera in the Assets/Scripts folder, remove the Start and Update functions, and then add the following properties:

```csharp
using UnityEngine;

public class FollowCamera : MonoBehaviour
{
    // Distance in the x axis the player can move before the
    // camera follows.
    public float xMargin = 1.5f;

    // Distance in the y axis the player can move before the
    // camera follows.
    public float yMargin = 1.5f;

    // How smoothly the camera catches up with its target
    // movement in the x axis.
    public float xSmooth = 1.5f;

    // How smoothly the camera catches up with its target
    // movement in the y axis.
    public float ySmooth = 1.5f;

    // The maximum x and y coordinates the camera can have.
    public Vector2 maxXAndY;

    // The minimum x and y coordinates the camera can have.
    public Vector2 minXAndY;

    // Reference to the player's transform.
    public Transform player;
}
```

The variables are all commented to explain their purpose, but we'll cover each as we use them.

First off, we need to get the player object's position so that we can track the camera to it, by discovering it from the scene. This is done by adding the following code in the Awake function:

```csharp
void Awake()
{
    // Setting up the reference (checking the found object
    // before taking its transform, so a missing Player is
    // reported rather than throwing).
    var playerObject = GameObject.Find("Player");
    if (playerObject == null)
    {
        Debug.LogError("Player object not found");
    }
    else
    {
        player = playerObject.transform;
    }
}
```

An alternative to discovering the player this way is to make the player property public and then assign it in the editor. There is no right or wrong way—just your preference. It is also good practice to add some element of debugging to let you know if there is a problem in the scene with a missing reference; otherwise, all you will see are errors such as "object not initialized" or "variable was null".

Next, we need a couple of helper methods to check whether the player has moved near the edge of the camera's bounds, as defined by the margin variables. In the following code, we use the settings defined earlier to control how close the player can get to the edge:

```csharp
bool CheckXMargin()
{
    // Returns true if the distance between the camera and the
    // player in the x axis is greater than the x margin.
    return Mathf.Abs(transform.position.x - player.position.x) > xMargin;
}

bool CheckYMargin()
{
    // Returns true if the distance between the camera and the
    // player in the y axis is greater than the y margin.
    return Mathf.Abs(transform.position.y - player.position.y) > yMargin;
}
```

To finish this script, we need to check each frame whether the player is close to the edge and update the camera's position accordingly. We also need to check whether the camera bounds have reached the edge of the screen, and not move it beyond.

Comparing Update, FixedUpdate, and LateUpdate

There is usually a lot of debate about which update method should be used within a Unity game. To put it simply, the FixedUpdate method is called on a regular basis throughout the lifetime of the game and is generally used for physics and time-sensitive code. The Update method, however, is only called after the end of each frame that is drawn to the screen; as the time taken to draw the screen can vary (due to the number of objects to be drawn and so on), the Update call ends up being fairly irregular. For more detail on the difference between Update and FixedUpdate, see the Unity Learn tutorial video at http://unity3d.com/learn/tutorials/modules/beginner/scripting/update-and-fixedupdate.

As the player is being moved by the physics system, it is better to update the camera in the FixedUpdate method:

```csharp
void FixedUpdate()
{
    // By default the target x and y coordinates of the camera
    // are its current x and y coordinates.
    float targetX = transform.position.x;
    float targetY = transform.position.y;

    // If the player has moved beyond the x margin...
    if (CheckXMargin())
        // the target x coordinate should be a Lerp between
        // the camera's current x position and the player's
        // current x position.
        targetX = Mathf.Lerp(transform.position.x,
            player.position.x, xSmooth * Time.fixedDeltaTime);

    // If the player has moved beyond the y margin...
    if (CheckYMargin())
        // the target y coordinate should be a Lerp between
        // the camera's current y position and the player's
        // current y position.
        targetY = Mathf.Lerp(transform.position.y,
            player.position.y, ySmooth * Time.fixedDeltaTime);

    // The target x and y coordinates should not be larger
    // than the maximum or smaller than the minimum.
    targetX = Mathf.Clamp(targetX, minXAndY.x, maxXAndY.x);
    targetY = Mathf.Clamp(targetY, minXAndY.y, maxXAndY.y);

    // Set the camera's position to the target position with
    // the same z component.
    transform.position =
        new Vector3(targetX, targetY, transform.position.z);
}
```

As they say, every game is different, and how the camera acts can be different for every game. In a lot of cases, the camera should be updated in the LateUpdate method, after all drawing, updating, and physics are complete. This, however, can be a double-edged sword if you rely on math calculations that are affected in the FixedUpdate method, such as Lerp. It all comes down to tweaking your camera system to work the way you need it to.

Once the script is saved, attach it to the Main Camera element by dragging the script to it, or by adding a script component to the camera and selecting the script. Finally, we just need to configure the script and the camera to fit our game size: set the orthographic Size of the camera to 2.7, and the Min X and Max X values to -5 and 5 respectively.

The perils of resolution

When dealing with cameras, there is always one thing that will trip us up as soon as we try to build for another platform—resolution. By default, the Unity player in the editor runs in the Free Aspect mode. The Aspect mode (from the Aspect drop-down) can be changed to represent the resolutions supported by each platform you can target.

To change the build target, go into your project's Build Settings by navigating to File | Build Settings or by pressing Ctrl + Shift + B, then select a platform and click on the Switch Platform button.

When you change the Aspect drop-down to view one of these resolutions, you will notice how the aspect ratio of what is drawn to the screen changes, either stretching or compressing the visible area. If you run the editor player in full screen by clicking on the Maximize on Play button and then clicking on the play icon, you will see this change more clearly. Alternatively, you can run your project on a target device to see the proper perspective output.

The reason I bring this up here is that if you used fixed bounds settings for your camera or game objects, then those values may not work for every resolution, putting your settings out of range or (in most cases) undersized.
You can handle this by altering the settings for each build, or by using compiler directives such as #if UNITY_METRO to force the defaults depending on the build (in this example, Windows 8). To read more about platform-dependent compilation, check the Unity documentation at http://docs.unity3d.com/Manual/PlatformDependentCompilation.html.

A better FollowCamera script

If you are only targeting one device/resolution, or your background scrolls indefinitely, then the preceding manual approach works fine. However, if you want it to be a little more dynamic, then we need to know what resolution we are working in and how much space our character has to travel. We will perform the following steps to do this:

1. We will change the min and max variables to private, as we no longer need to configure them in the Inspector window:

```csharp
// The maximum x and y coordinates the camera can have.
private Vector2 maxXAndY;

// The minimum x and y coordinates the camera can have.
private Vector2 minXAndY;
```

2. To work out how much space is available in our town, we interrogate the rendering size of our background sprite. So, in the Awake function, we add the following lines of code:

```csharp
// Get the bounds for the background texture - world size
var backgroundBounds = GameObject.Find("background").renderer.bounds;
```

3. Also in the Awake function, we work out our resolution and viewable space by interrogating the ViewportToWorldPoint method on the camera, converting the viewport corners to the same coordinate system as the sprite:

```csharp
// Get the viewable bounds of the camera in world coordinates
var camTopLeft = camera.ViewportToWorldPoint(new Vector3(0, 0, 0));
var camBottomRight = camera.ViewportToWorldPoint(new Vector3(1, 1, 0));
```

4. Finally, in the Awake function, we update the min and max values using the texture size and the camera's real-world bounds:

```csharp
// Automatically set the min and max values
minXAndY.x = backgroundBounds.min.x - camTopLeft.x;
maxXAndY.x = backgroundBounds.max.x - camBottomRight.x;
```

In the end, it is up to your specific implementation for the type of game you are making to decide which pattern works best.

Transitioning and bounds

So our camera follows our player, but our hero can still walk off the screen and keep going forever. Let's stop that from happening.

Towns with borders

As you saw in the preceding section, you can use Unity's camera logic to figure out where things are on the screen. You can also do more complex ray testing to check where things are, but I find that overly complex unless you depend on that level of interaction. The simpler answer is just to use the native Box2D physics system to keep things in the scene. This might seem like overkill, but the 2D physics system is very fast and fluid, and it is simple to use.

Once we add the physics components—a Rigidbody 2D (to apply physics) and a Box Collider 2D (to detect collisions)—to the player, we can make use of them straight away by adding some additional collision objects to stop the player running off. To do this, and to keep things organized, we will add three empty game objects (either by navigating to GameObject | Create Empty, or by pressing Ctrl + Shift + N) to the scene—one parent and two children—to manage these collision points. I've named them WorldBounds (parent), and LeftBorder and RightBorder (children) for reference.
In the end, it is up to your specific implementation for the type of game you are making to decide which pattern works for your game.

Transitioning and bounds
So our camera follows our player, but our hero can still walk off the screen and keep going forever, so let us stop that from happening.

Towns with borders
As you saw in the preceding section, you can use Unity's camera logic to figure out where things are on the screen. You can also do more complex ray testing to check where things are, but I find these are overly complex unless you depend on that level of interaction. The simpler answer is just to use the native Box2D physics system to keep things in the scene. This might seem like overkill, but the 2D physics system is very fast and fluid, and it is simple to use.

Once we add the physics components, Rigidbody 2D (to apply physics) and a Box Collider 2D (to detect collisions), to the player, we can make use of these components straight away by adding some additional collision objects to stop the player running off. To do this, and to keep things organized, we will add three empty game objects (either by navigating to GameObject | Create Empty, or by pressing Ctrl + Shift + N) to the scene (one parent and two children) to manage these collision points, as shown in the following screenshot:

I've named them WorldBounds (parent) and LeftBorder and RightBorder (children) for reference. Next, we will position each of the child game objects at the left- and right-hand sides of the screen, as shown in the following screenshot:

Next, we will add a Box Collider 2D to each border game object and increase its height, just to ensure that it works for the entire height of the scene. I've set the Y value to 5 for effect, as shown in the following screenshot:

The end result should look like the following screenshot, with the two new colliders highlighted in green:

Alternatively, you could have just created one of the children, added the box collider, duplicated it (by navigating to Edit | Duplicate or by pressing Ctrl + D), and moved it. If you have to create multiples of the same thing, this is a handy tip to remember.

If you run the project now, our hero can no longer escape this town on his own. However, as we want to let him leave, we can add a script to the new border game objects so that when the hero reaches the end of the town, he can leave.

Journeying onwards
Now that we have collision zones on our town's borders, we can hook into them using a script that activates when the hero approaches. Create a new C# script called NavigationPrompt, clear its contents, and populate it with the following code:

using UnityEngine;

public class NavigationPrompt : MonoBehaviour
{
    bool showDialog;

    void OnCollisionEnter2D(Collision2D col)
    {
        showDialog = true;
    }

    void OnCollisionExit2D(Collision2D col)
    {
        showDialog = false;
    }
}

The preceding code gives us the framework of a collision detection script that sets a flag on and off if the character interacts with whatever the script is attached to, provided it has a physics collision component. Without one, this script would do nothing, but it won't cause an error.

Next, we will do something with the flag and display some GUI when the flag is set. So, add the following extra function to the preceding script:

void OnGUI()
{
    if (showDialog)
    {
        //layout start
        GUI.BeginGroup(new Rect(Screen.width / 2 - 150, 50, 300,
            250));

        //the menu background box
        GUI.Box(new Rect(0, 0, 300, 250), "");

        // Information text
        GUI.Label(new Rect(15, 10, 300, 68), "Do you want to travel?");

        //Player wants to leave this location
        if (GUI.Button(new Rect(55, 100, 180, 40), "Travel"))
        {
            showDialog = false;

            // The following line is commented out for now
            // as we have nowhere to go :D
            //Application.LoadLevel(1);
        }

        //Player wants to stay at this location
        if (GUI.Button(new Rect(55, 150, 180, 40), "Stay"))
        {
            showDialog = false;
        }

        //layout end
        GUI.EndGroup();
    }
}

The function itself is very simple and only activates if the showDialog flag is set to true by the collision detection. Then, we perform the following steps:

In the OnGUI method, we set up a dialog window region with some text and two buttons.
One button asks if the player wants to travel, which would load the next area (commented out for now, as we only have one scene), and closes the dialog.
The other button simply closes the dialog if the hero didn't actually want to leave. As we haven't stopped the player from moving, the player can also do this by moving away.
If you now add the NavigationPrompt script to the two world border (LeftBorder and RightBorder) game objects, this will result in the following simple UI whenever the player collides with the edges of our world:

We can further enhance this by tagging or naming our borders to indicate a destination. I prefer tagging, as it does not interfere with how my scene looks in the project hierarchy; also, I can control what tags are available and prevent accidental mistyping. To tag a game object, simply select a Tag using the drop-down list in the Inspector when you select the game object in the scene or project. This is shown in the following screenshot:

If you haven't set up your tags yet, or just wish to add a new one, select Add Tag in the drop-down menu; this will open up the Tags and Layers window of the Inspector. Alternatively, you can call up this window by navigating to Edit | Project Settings | Tags and Layers in the menu. It is shown in the following screenshot:

You can only edit or change user-defined tags. There are several other tags that are system defined. You can use these as well; you just cannot change, remove, or edit them. These include Player, Respawn, Finish, Editor Only, Main Camera, and GameController. As you can see from the preceding screenshot, I have entered two new tags called The Cave and The World, which are the two main exit points from our town.

Unity also adds an extra item to arrays in the editor. This helps you when you want to add more items; it's annoying when you want a fixed size, but it is meant to help. When the project runs, however, the correct count of items will be exposed.

Once these are set up, just return to the Inspector for the two borders, and set the right one to The World and the left to The Cave. Now, I was quite specific in how I named these tags, as you can now reuse them in the script both to aid navigation and to notify the player where they are going. To do this, simply update the information text line to the following:

//Information text
GUI.Label(new Rect(15, 10, 300, 68), "Do you want to travel to " +
    this.tag + "?");

Here, we have simply appended the name of the destination we set in the tag to the dialog presented to the user. Now we get a more personal message, as shown in the following screenshot:

Planning for the larger picture
Now, for small games, the preceding implementation is fine; however, if you are planning a larger world with a large number of interactions, you may want to provide complex decisions that prevent the player from continuing unless they are ready. As the following diagram shows, there are several paths the player can take, and in some cases, there is only one way. Now, we could just build up the logic for each of these individually, as shown in the diagram, but it is better if we build a separate navigation system so that we have everything in one place; it's just easier to manage that way.

This separation is a fundamental part of any good game design. Keeping the logic and game functionality separate makes it easier to maintain in the future, especially when you need to take internationalization into account (but we will learn more about that later). Now, we'll change to using a manager to handle all the world/scene transitions, and simplify the tag names we use, as they won't need to be displayed. So, The Cave will be renamed to just Cave, and we will get the text to display from the navigation manager instead of the tag.
So, by separating the core decision-making functionality out of the prompt script, we can build the core manager for navigation. Its primary job is to maintain where a character can travel, and information about that destination.

First, we'll update the tags we created earlier to the simpler identities we will use in our navigation manager (update The Cave to Cave and The World to World). Next, we'll create a new C# script called NavigationManager in our Assets\Scripts folder, and then replace its contents with the following lines of code:

using System.Collections.Generic;

public static class NavigationManager
{
    public static Dictionary<string, string> RouteInformation =
        new Dictionary<string, string>()
    {
        { "World", "The big bad world" },
        { "Cave", "The deep dark cave" },
    };

    public static string GetRouteInfo(string destination)
    {
        return RouteInformation.ContainsKey(destination) ?
            RouteInformation[destination] : null;
    }

    public static bool CanNavigate(string destination)
    {
        return true;
    }

    public static void NavigateTo(string destination)
    {
        // The following line is commented out for now
        // as we have nowhere to go :D
        //Application.LoadLevel(destination);
    }
}

Notice the ? and : operators in the following statement:

RouteInformation.ContainsKey(destination) ?
    RouteInformation[destination] : null;

These are C#'s conditional operators. They are effectively shorthand for the following:

if (RouteInformation.ContainsKey(destination))
{
    return RouteInformation[destination];
}
else
{
    return null;
}

Shorter, neater, and much nicer, don't you think? For more information, see the MSDN C# page at http://bit.ly/csharpconditionaloperator.

The script is very basic for now, but contains the following key elements that can be expanded to meet the design goals of your game:

RouteInformation: A static dictionary of all the possible destinations in the game. It is a core part of the manager, as it knows everywhere you can travel in the game, in one place.
GetRouteInfo: A simple, controlled function to interrogate the destination list. In this example, we just return the text to be displayed in the prompt, which allows for more detailed descriptions than we could fit in tags. You could also use this to provide alternate prompts depending on what the player is carrying, or whether they have a lit torch, for example.
CanNavigate: A test to see whether navigation is possible. If you are going to limit a player's travel, you need a way to test whether they can move, allowing logic in your game to make alternate choices if the player cannot. You could use a different system for this by placing some sort of block in front of a destination to limit choice (as used in the likes of Zelda), such as an NPC or a rock. As this is only an example, we can always travel, and you can add logic to control it if you wish.
NavigateTo: A function to instigate navigation. Once a player can travel, you can control exactly what happens in the game: does navigation cause the next scene to load straight away (as in the script currently), or does the current scene fade out and a traveling screen appear before fading the next level in? Granted, this does nothing at present, as we have nowhere to travel to.

As you will notice, this script is different from the other scripts used so far, as it is a static class.
This means it sits in the background, only exists once in the game, and is accessible from anywhere. This pattern is useful for fixed information that isn't attached to anything; it just sits in the background waiting to be queried. Later, we will cover more advanced types and classes to handle more complicated scenarios.

With this class in place, we just need to update our previous script (and the tags) to make use of the new manager. Update the NavigationPrompt script as follows:

Update the collision function to only show the prompt if we can travel. The code is as follows:

void OnCollisionEnter2D(Collision2D col)
{
    //Only allow the player to travel if allowed
    if (NavigationManager.CanNavigate(this.tag))
    {
        showDialog = true;
    }
}

When the dialog shows, display the more detailed destination text provided by the manager for the intended destination. The code is as follows:

//Dialog detail - updated to get better detail
GUI.Label(new Rect(15, 10, 300, 68), "Do you want to travel to " +
    NavigationManager.GetRouteInfo(this.tag) + "?");

If the player wants to travel, let the manager start the travel process. The code is as follows:

//Player wants to leave this location
if (GUI.Button(new Rect(55, 100, 180, 40), "Travel"))
{
    showDialog = false;
    NavigationManager.NavigateTo(this.tag);
}

The functionality I've shown here is very basic, and it is intended to make you think about how you would need to implement it for your game. With so many possibilities available, I could fill several articles on this kind of subject alone.

Backgrounds and active elements
A slightly more advanced option when building game worlds is to add a level of immersive depth to the scene. Having a static image to show the village looks good, especially when you start adding houses and NPCs to the mix; but to really make it shine, you should layer the background and add additional active elements to liven it up. We won't add them to the sample project at this time, but it is worth experimenting with in your own projects (or try adding it to this one); it is a worthwhile effect to look into.

Parallaxing
If we look at the 2D sample provided by Unity, the background is split into several panes, each layered on top of one another and each moving at a different speed when the player moves around. There are also other elements, such as clouds, birds, buses, and taxis driving/flying around, as shown in the following screenshot:

Implementing these effects is technically very easy; you just need to have the art assets available. There are several scripts in the wiki I described earlier, but the one in Unity's own 2D sample is the best I've seen. To see the script, just download the Unity Projects: 2D Platformer asset from https://www.assetstore.unity3d.com/en/#!/content/11228, and check out the BackgroundParallax script in the Assets\Scripts folder.
The BackgroundParallax script in the platformer sample implements the following:

An array of background images, layered correctly in the scene (which is why the script does not just discover the background sprites)
A scaling factor to control how much the background moves in relation to the target (in this case, the camera)
A reducing factor to offset how much each layer moves, so that they don't all move as one (otherwise, what would be the point? It might as well be a single image)
A smoothing factor, so that each background moves smoothly with the target and doesn't jump around

Implementing this same model in your game would be fairly simple, provided you have texture assets that can support it. Just replicate the structure used in the 2D platformer sample and add the script. Remember to update the FollowCamera script to be able to update the base background, however, to ensure that it can still discover the size of the main area. The following minimal sketch shows the shape such a script can take.
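This sketch is an illustration only; the field names, the factor arithmetic, and the use of Camera.main are assumptions made for this example, not the sample's actual BackgroundParallax implementation:

using UnityEngine;

public class SimpleParallax : MonoBehaviour
{
    // Background layers, ordered back to front
    public Transform[] backgrounds;

    // How far the layers move relative to the camera
    public float parallaxScale = 1f;

    // Offsets how much each successive layer moves
    public float parallaxReduction = 2f;

    // How smoothly the layers follow the camera
    public float smoothing = 1f;

    private Vector3 previousCamPosition;

    void Start()
    {
        previousCamPosition = Camera.main.transform.position;
    }

    void Update()
    {
        // How far the camera moved since the last frame
        float delta = previousCamPosition.x -
            Camera.main.transform.position.x;

        for (int i = 0; i < backgrounds.Length; i++)
        {
            // Deeper layers move by a different amount, so the
            // panes don't all move as one
            float parallax = delta * (parallaxScale +
                i * parallaxReduction);
            float targetX = backgrounds[i].position.x + parallax;

            var targetPosition = new Vector3(targetX,
                backgrounds[i].position.y, backgrounds[i].position.z);

            // Move smoothly towards the target position
            backgrounds[i].position = Vector3.Lerp(
                backgrounds[i].position, targetPosition,
                smoothing * Time.deltaTime);
        }

        previousCamPosition = Camera.main.transform.position;
    }
}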
Foreground objects
The other thing you can do to liven up your game is to add random foreground objects that float across your scene independently. These don't collide with anything and have nothing to do with the game itself; they are just eye candy to make your game look awesome. The process to add these is also fairly simple, but it requires some more advanced Unity features, such as coroutines, which we are not going to cover here; we will come back to these later. In short, if you examine the BackgroundPropSpawner.cs script from the preceding Unity 2D platformer sample, you will see that it performs the following steps:

Create/instantiate an object to spawn.
Set a random position and direction for the object to travel.
Update the object over its lifetime.
Once it's out of the scene, destroy or hide it.
Wait for a time, and then start again.

This allows the props to run on their own without impacting the gameplay itself, and just adds that extra bit of depth. In some cases, I've seen particle effects used to add to the effect as well, but they are used sparingly.

Shaders and 2D
Believe it or not, all 2D elements (even in their default state) are drawn using a shader, albeit a specially written shader designed to light and draw the sprite in a very specific way. If you look at the player sprite in the Inspector, you will see that it uses a special Material called Sprites-Default, as shown in the following screenshot:

This section is purely meant to highlight all the shading options you have in the 2D system. Shaders have not changed much in this update, except for the addition of some 2D global lighting found in the default sprite shader. For more detail on shaders in general, I suggest a dedicated Unity shader book such as https://www.packtpub.com/game-development/unity-shaders-and-effects-cookbook.

Clicking on the button next to the Material field will bring up the material selector, which also shows the two other built-in default materials, as shown in the following screenshot:

However, selecting either of these will render your sprite invisible, as they require a texture and lighting to work; they won't inherit from the Sprite Renderer texture. You can override this by creating your own material and assigning alternate sprite-style shaders.

To create a new material, just select the Assets\Materials folder (this is not crucial, but it means we create the material in a sensible place in our project folder structure), then right-click and select Create | Material. Alternatively, do the same using the project view's Edit... menu option, as shown in the following screenshot:

This gives us a basic default Diffuse shader, which is fine for basic 3D objects. However, we also have two default sprite rendering shaders available. Selecting the Shader dropdown gives us the screen shown in the following screenshot:

Now, these shaders have the following two very specific purposes:

Default: This shader inherits its texture from the Sprite Renderer to draw the sprite as is. This is very basic functionality, just enough to draw the sprite (it contains its own static lighting).
Diffuse: This shader inherits its texture in the same way as Default, but it requires an external light source, as it does not contain any lighting; this has to be applied separately. It is a slightly more advanced shader, which includes offsets and other functions.

Creating one of these materials and applying it to a sprite's Sprite Renderer will override its default constrained behavior. This opens up some additional shader options in the Inspector, as shown in the following screenshot:

These options include the following:

Sprite Texture: Although changing the Tiling and Offset values causes a warning to appear, they still perform a function (even though the actual displayed value resets).
Tint: This option allows changing the default light tint of the rendered sprite. It is useful for creating differently colored objects from the same sprite; a small script example follows this section.
Pixel snap: This option makes the rendered sprite crisper, but narrows the drawn area. It is a trial-and-error feature (see the following sections for more information).

Achieving pixel perfection in your game in Unity can be a challenge, due to the number of factors that can affect it, such as the camera view size, whether the image texture is a Power Of Two (POT) size, and the import settings for the image. It is basically a game of trial and error until you are happy with the intended result.

If you are feeling adventurous, you can extend these default shaders (although this is out of the scope of this article). The full code for these shaders can be found at http://Unity3d.com/unity/download/archive. If you are writing your own shaders, though, be sure to add some lighting to the scene; otherwise, they are just going to appear dark and unlit. Only the default sprite shader is automatically lit by Unity. Alternatively, you can use the default sprite shader as a base to create your new custom shader and retain the basic 2D lighting.

Another worthy tip is to check out the latest version of the Unity samples (beta) pack. In it, they have added logic to have two sets of shaders in your project, one for mobile and one for desktop, plus a script that will swap them out at runtime depending on the platform. This is very cool; check it out on the asset store at https://www.assetstore.unity3d.com/#/content/14474, and read the full review of the pack at http://darkgenesis.zenithmoon.com/unity3dsamplesbeta-anoverview/.
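As a quick illustration of the Tint option described above, the following sketch assigns each object a random tint through SpriteRenderer.color, which drives the tint of the default sprite material; the script name and the random color choice are assumptions for this example:

using UnityEngine;

public class RandomTint : MonoBehaviour
{
    void Start()
    {
        // Tint this instance's sprite with a random color; every
        // object gets its own color from one shared sprite asset.
        var spriteRenderer = GetComponent<SpriteRenderer>();
        spriteRenderer.color = new Color(Random.value, Random.value,
            Random.value);
    }
}

Attach it to any sprite (a rock, for example) to get differently colored instances from a single sprite asset.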
Going further
If you are the adventurous sort, try expanding your project to add the following:

Add some buildings to the town
Set up some entry points for a building and work that into your navigation system, for example, a shop
Add some rocks to the scene and color each differently using a manual material; maybe even add a script to randomly set the pixel color in the shader instead of creating several materials (the RandomTint sketch shown earlier is a starting point)
Add a new scene for the cave using another environment background, and get the player to travel between them

Summary
This certainly has been a very busy article just to add a background to our scene, but working out how each scene will work is a crucial design element for the entire game; you have to pick a pattern that works for you and your end result, as changing it later can be very detrimental (and a lot of work). In this article, we covered the following topics:

Some more practice with the Sprite Editor and sprite slicer, including some tips and tricks for when it doesn't work (or you want to do it yourself)
Some camera tips, tricks, and scripts
An overview of sprite layers and sprite sorting
Defining boundaries in scenes
Scene navigation management and planning levels in your game
Some basics of how shaders work for 2D

For learning Unity 2D from the basics, you can refer to https://www.packtpub.com/game-development/learning-unity-2d-game-development-example.

Resources for Article:

Further resources on this subject:
Build a First Person Shooter [article]
Let's Get Physical – Using GameMaker's Physics System [article]
Using the Tiled map editor [article]


Why should you consider becoming ‘AWS Developer Associate’ certified?

Savia Lobo
12 Dec 2019
5 min read
Organizations both large and small are looking to automate their day-to-day processes, and the best option they consider is moving to the cloud. However, they also fear certain challenges that can make cloud adoption difficult. The biggest challenge is a lack of resources or expertise to understand how different cloud services function or how they are built, in order to leverage their advantages to the fullest. Many developers use cloud computing services, either through the companies they work with or simply by subscribing to them, without really knowing the intricacies. Their knowledge of how the internal processes work remains limited. Certifications can, in fact, help you understand how the cloud functions and what goes on within these gigantic data holders. To start with, enroll yourself in a basic certification from any of the popular cloud service providers. Once you know the basics, you can go ahead and master the other certifications available, based on your job role or career aspirations.

Why choose an AWS certification
Amazon Web Services (AWS) is considered one of the top cloud service providers in the cloud computing market currently. According to Gartner's Magic Quadrant 2019, AWS continues to lead in public cloud adoption. AWS also offers eleven certifications that cover foundational and specialty cloud computing topics. If you are a developer or a professional who wants to pursue a career in cloud computing, you should consider taking the 'AWS Certified Developer - Associate' certification.

Do you wish to learn from AWS subject-matter experts, explore real-world scenarios, and pass the AWS Certified Developer – Associate exam? We recommend you explore the book AWS Certified Developer - Associate Guide - Second Edition by Vipul Tankariya and Bhavin Parmar.

Many organizations use AWS services, and being certified can open up various options for improved learning. Along with being popular among companies, AWS includes a host of cloud service options compared to other cloud service providers. Whether you are a web developer, a database admin, or an IoT or AI developer, AWS includes certification options that delve into almost every aspect of technology. It is also constantly adding more offerings and innovating in a way that keeps you updated with cutting-edge technologies.

Getting an AWS certification is definitely a difficult task, but you do not have to quit your current job for this one. Unlike other vendors, Amazon offers a realistic certification path that does not require highly specialized (and expensive) training to start. AWS certifications validate a candidate's familiarity and knowledge of best practices in cloud architecture, management, and security.

Prerequisites for this certification
The AWS Developer Associate certification will help you enhance your skills, impacting your career growth. However, you need to keep certain prerequisites in mind. A developer should have:

Attended the AWS Essentials course, or have equivalent experience
Knowledge of developing applications with API interfaces
A basic understanding of relational and non-relational databases
How the AWS Certified Developer - Associate level certification course helps a developer
AWS Certified Developer Associate certification training will give you hands-on exposure to core AWS services through guided lectures, videos, labs, and quizzes. You'll get trained in compute and storage fundamentals, and in the architecture and security best practices that are relevant to the AWS Certified Developer exam. This associate-level course will help developers identify the appropriate AWS architecture, and also teach them to design, develop, and deploy optimal AWS cloud solutions. If you already have some existing knowledge of AWS, this course will help you identify and deploy secure procedures for optimal cloud deployment and maintenance.

Developers will also learn to develop and maintain applications written for Amazon S3, DynamoDB, SQS, SNS, SWF, AWS Elastic Beanstalk, and AWS CloudFormation.

After achieving this certification, you will be an asset to any organization. You can help them leverage best practices around advanced cloud-based solutions and migrate existing workloads to the cloud. This indirectly means a rise in your annual income, as well as career growth. However, getting certified alone is not enough; other factors such as skills, experience, and geographic location are also important.

This certification will help you become competent in using Amazon's cloud services. This course is a part of the first tier (Associate level) of certifications that AWS offers. You could further improve your cloud computing skills by taking up certifications from the Professional tier and later from the Specialty tiers, whichever suits you best.

New AWS services and features are added every year. Certification alone is not enough; staying relevant is the key. To continually demonstrate expertise and knowledge of best practices for the most up-to-date AWS services, certification holders are required to re-certify every two years. You can either choose to take a professional-level exam for the same certification, or pass the re-certification exam for your existing certification.

To gain further insights into how to design, develop, and deploy cloud-based solutions using AWS, and to get familiar with Identity and Access Management (IAM) along with Virtual Private Cloud (VPC), you can check out the book AWS Certified Developer - Associate Guide - Second Edition by Vipul Tankariya and Bhavin Parmar.

How do AWS developers manage Web apps?
Why AWS is the preferred cloud platform for developers working with big data
How do you become a developer advocate?

Enabling Spring Faces support

Packt
28 Oct 2009
9 min read
The main focus of the Spring Web Flow Framework is to deliver the infrastructure to describe the page flow of a web application. The flow itself is a very important element of a web application, because it describes its structure, particularly the structure of the implemented business use cases. But besides the flow, which stays in the background, the user of your application is interested in the Graphical User Interface (GUI). Therefore, we need a solution for how to provide a rich user interface to the users. One framework which offers such components is JavaServer Faces (JSF). With the release of Spring Web Flow 2, an integration module connecting these two technologies, called Spring Faces, was introduced.

This article is not an introduction to the JavaServer Faces technology; it only describes the integration of Spring Web Flow 2 with JSF. If you have never worked with JSF before, please refer to the JSF reference to gain knowledge of the essential concepts of JavaServer Faces.

JavaServer Faces (JSF)—a brief introduction
The JavaServer Faces (JSF) technology is a web application framework with the goal of making the development of user interfaces for a web application (based on Java EE) easier. JSF uses a component-based approach with its own lifecycle model, instead of the request-driven approach used by traditional MVC web frameworks. Version 1.0 of JSF is specified in JSR (Java Specification Request) 127 (http://jcp.org/en/jsr/detail?id=127).

To use the Spring Faces module, you have to add some configuration to your application. The diagram below depicts the individual configuration blocks, which are described in this article.

The first step in the configuration is to configure the JSF framework itself. That is done in the deployment descriptor of the web application, web.xml. The servlet has to be loaded at the startup of the application, which is done with the <load-on-startup>1</load-on-startup> element.

<!-- Initialization of the JSF implementation.
     The Servlet is not used at runtime -->
<servlet>
  <servlet-name>Faces Servlet</servlet-name>
  <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
  <servlet-name>Faces Servlet</servlet-name>
  <url-pattern>*.faces</url-pattern>
</servlet-mapping>

When working with JavaServer Faces, there are two important classes: javax.faces.webapp.FacesServlet and javax.faces.context.FacesContext. You can think of FacesServlet as the core base of each JSF application; sometimes that servlet is called an infrastructure servlet. It is important to mention that each JSF application in one web container has its own instance of the FacesServlet class. This means that an infrastructure servlet cannot be shared between many web applications on the same JEE web container. FacesContext is the data container which encapsulates all the information necessary for the current request. For the usage of Spring Faces, it is important to know that FacesServlet is only used to instantiate the framework; it is not used further inside Spring Faces.

To be able to use the components from the Spring Faces library, you are required to use Facelets instead of JSP. Therefore, we have to configure that mechanism. If you are interested in reading more about the Facelets technology, visit the Facelets homepage on java.net at the following URL: https://facelets.dev.java.net.
The article at http://www.ibm.com/developerworks/java/library/j-facelets/ is also a good introduction to the Facelets technology. The configuration process is done inside the deployment descriptor of your web application, web.xml. The following sample shows the configuration inside the mentioned file:

<context-param>
  <param-name>javax.faces.DEFAULT_SUFFIX</param-name>
  <param-value>.xhtml</param-value>
</context-param>

As you can see in the above code, the configuration is done with a context parameter. The name of the parameter is javax.faces.DEFAULT_SUFFIX, and the value for the context parameter is .xhtml.

Inside the Facelets technology
To present the separate views inside a JSF context, you need a specific view handler technology. One of those technologies is the well-known JavaServer Pages (JSP) technology. Facelets are an alternative to JSP inside the JSF context; instead of defining the views in JSP syntax, you use XML, and the pages are created using XHTML. The Facelets technology offers the following features:

A template mechanism, similar to the mechanism known from the Tiles framework
The composition of components based on other components
Custom logic tags
Expression functions
The ability to use plain HTML for your pages, making it easy to create the pages and view them directly in a browser, because you don't need an application server while designing a page
The possibility to create libraries of your components

The following sample shows an XHTML page which uses the component aliasing mechanism of the Facelets technology:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
  <body>
    <form jsfc="h:form">
      <span jsfc="h:outputText"
            value="Welcome to our page: #{user.name}"
            disabled="#{empty user}" />
      <input type="text" jsfc="h:inputText"
             value="#{bean.theProperty}" />
      <input type="submit" jsfc="h:commandButton"
             value="OK" action="#{bean.doIt}" />
    </form>
  </body>
</html>

The sample code snippet above uses the mentioned expression language of the JSF technology to access the data (for example, the #{user.name} expression accesses the name property of the user instance).

What is component aliasing?
One of the mentioned features of the Facelets technology is that it is possible to view a page directly in a browser, without the page running inside a JEE container environment. This is possible through the component aliasing feature. With this feature, you can use normal HTML elements, for example an input element, and additionally refer to the component which is used behind the scenes with the jsfc attribute. An example of that is <input type="text" jsfc="h:inputText" value="#{bean.theProperty}" />. If you open this inside a browser, the normal input element is used; if you use it inside your application, the h:inputText element of the component library is used.

The ResourceServlet
One main part of the JSF framework is the GUI components. These components often consist of many files besides the class files. If you use many of these components, the problem of handling all these files arises. To solve this problem, files such as JavaScript and CSS (Cascading Style Sheets) can be delivered inside the JAR archive of the component. If you deliver the files inside the JAR file, you can organize the component in one file, which makes the deployment and maintenance of your component library easier.
Regardless of the framework you use, the result is HTML, and the resources inside the HTML pages are requested as URLs. For that, we need a way to access these resources inside the archive over the HTTP protocol. To solve that problem, there is a servlet named ResourceServlet (in the package org.springframework.js.resource). The servlet can deliver the following resources:

Resources which are available inside the web application (for example, CSS files)
Resources inside a JAR archive

The configuration of the servlet inside web.xml is shown below:

<servlet>
  <servlet-name>Resource Servlet</servlet-name>
  <servlet-class>org.springframework.js.resource.ResourceServlet</servlet-class>
  <load-on-startup>0</load-on-startup>
</servlet>
<servlet-mapping>
  <servlet-name>Resource Servlet</servlet-name>
  <url-pattern>/resources/*</url-pattern>
</servlet-mapping>

It is important that you use the correct url-pattern inside servlet-mapping; as you can see in the sample above, you have to use /resources/*. If one of the Spring Faces components does not work, first check that you have the correct mapping for the servlet. All resources in the context of Spring Faces should be retrieved through this servlet. The base URL is /resources.

Internals of the ResourceServlet
ResourceServlet can only be accessed via a GET request; the servlet implements only the GET method, so it is not possible to serve POST requests. Before we describe the separate steps, we want to show you the complete process, illustrated in the diagram below:

For a better understanding, we choose an example for the explanation of the mechanism shown in the previous diagram. Let us assume that we have registered the ResourceServlet as mentioned before, and that we request a resource using the following sample URL: http://localhost:8080/flowtrac-web-jsf/resources/css/test1.css.

How to request more than one resource with one request
First, you can specify the appended parameter. The value of the parameter is the path to the additional resource you want to retrieve. An example of that is the following URL: http://localhost:8080/flowtrac-web-jsf/resources/css/test1.css?appended=/css/test2.css. If you want to specify more than one resource, you can use the comma delimiter inside the value of the appended parameter. A simple example of that mechanism is the following URL: http://localhost:8080/flowtrac-web-jsf/resources/css/test1.css?appended=/css/test2.css,/css/test3.css. Additionally, it is possible to use the comma delimiter inside the PathInfo, for example: http://localhost:8080/flowtrac-web-jsf/resources/css/test1.css,/css/test2.css. It is important to mention that if one of the requested resources is not available, none of the requested resources is delivered.

This mechanism can be used to deliver more than one CSS file in one request. From the development point of view, it can make sense to modularize your CSS files to make them more maintainable. With this concept, the client gets one CSS file instead of many CSS files. From the performance-optimization point of view, it is better to have as few requests as possible when rendering a page; therefore, it makes sense to combine the CSS files of a page. Internally, the files are written in the same sequence as they are requested.

To understand how a resource is addressed, we separate the sample URL into its specific parts. The example URL is a URL on a local servlet container which has an HTTP connector on port 8080.
See the following diagram for the mentioned separation: The table below describes the five sections of the URL that are shown in the previous diagram:


How to create and configure an Azure Virtual Machine

Gebin George
25 May 2018
13 min read
Creating virtual machines on Azure gives you on-demand, highly scalable, secure, virtualized infrastructure using Windows Server. Virtual machines help you deploy and scale applications easily. In this article, we will learn how to run an Azure Virtual Machine. This tutorial is an excerpt from the book Hands-On Networking with Azure, written by Mohamed Waly. This book will help you efficiently monitor, diagnose, and troubleshoot Azure networking.

Creating an Azure VM is a very straightforward process; all you have to do is follow the given steps:

Navigate to the Azure portal and search for Virtual Machines, as shown in the following screenshot:
Figure 3.1: Searching for Virtual Machines

Once the VM blade is opened, you can click on +Add to create a new VM, as shown in the following screenshot:
Figure 3.2: Virtual Machines blade

Once you have clicked on +Add, a new blade will pop up where you have to search for and select the desired OS for the VM, as shown in the following screenshot:
Figure 3.3: Searching for Windows Server 2016 OS for the VM

Once the OS is selected, you need to select the deployment model, whether that be Resource Manager or Classic, as shown in the following screenshot:
Figure 3.4: Selecting the deployment model

Once the deployment model is selected, a new blade will pop up where you have to specify the following:

Name: Specify the name of the VM.
VM disk type: Specify whether the disk type will be SSD or HDD. Consider that SSD will offer consistent, low-latency performance, but will incur more charges. Note that this option is not available for the Classic model in this blade, but is available in the Configure optional features blade.
User name: Specify the username that will be used to log on to the VM.
Password: Specify the password, which must be between 12 and 123 characters long and must contain three of the following: one lowercase character, one uppercase character, one number, and one special character that is not \ or -.
Subscription: This specifies the subscription that will be charged for the VM usage.
Resource group: This specifies the resource group within which the VM will exist.
Location: Specify the location in which the VM will be created. It is recommended that you select the location nearest to you.
Save money: Here, you specify whether you own Windows Server licenses with active Software Assurance (SA). If you do, Azure Hybrid Benefit is recommended to save compute costs. For more information about Azure Hybrid Benefit, you can check this page.
Figure 3.5: Configure the VM basic settings

Once you have clicked on OK, a new blade will pop up where you have to select the VM size from the available VM series, choosing the one that will fulfil your needs, as shown in the following screenshot:
Figure 3.6: Select the VM size

Once the VM size has been specified, you need to specify the following settings:

Availability set: This option provides high availability for the VM by placing the VMs of the same application within an availability set. Here, the VMs will be in different fault and update domains, granting the VMs high availability (up to 99.95% in the Azure SLA).
Use managed disks: Enable this feature to have Azure automatically manage the availability of disks to provide data redundancy and fault tolerance, without you having to create and manage storage accounts on your own. This setting is not available in the Classic model.
Virtual network: Specify the virtual network to which you want to assign the VM.
Subnet: Select the subnet, within the virtual network that you specified earlier, to assign the VM to.
Public IP address: Either select an existing public IP address or create a new one.
Network security group (firewall): Select the NSG you want to assign to the VM NIC. This is called endpoints in the Classic model.
Extensions: You can add more features to the VM using extensions, such as configuration management, antivirus protection, and so on.
Auto-shutdown: Specify whether you want to shut down your VM daily or not; if you do, you can set a schedule. This option will help you save compute costs, especially in dev and test scenarios. It is not available in the Classic model.
Notification before shutdown: Check this if you enabled Auto-shutdown and want to subscribe to notifications before the VM shuts down. This is not available in the Classic model.
Boot diagnostics: This captures serial console output and screenshots of the VM running on a host, to help diagnose startup issues.
Guest OS diagnostics: This obtains metrics for the VM every minute; you can use these metrics to create alerts and stay informed about your applications.
Diagnostics storage account: This is where metrics are written, so you can analyze them with your own tools.
Figure 3.7: Specify more settings for the VM

Enabling Boot diagnostics and Guest OS diagnostics will incur more charges, since the diagnostics need a dedicated storage account to store their data.

Finally, once you are done with the settings, Azure will validate the settings you have specified and summarize them, as shown in the following screenshot:
Figure 3.8: VM Settings Summary

Once you click on Create, the VM creation process will start, and within minutes the VM will be created. Once the VM is created, you can navigate to the Virtual Machines blade to open the VM that has been created, as shown in the following screenshot:
Figure 3.9: The created VM overview

To connect to the VM, click on Connect, and a pre-configured RDP file with the required VM information will be downloaded. Open the RDP file; you will be asked to enter the username and password you specified for the VM during its configuration, as shown in the following screenshot:
Figure 3.10: Entering the VM credentials

Voila! You should now be connected to the VM.

Azure VMs networking
There are many network configurations that can be done for the VM. You can add additional NICs, change the private IP address, set a private or public IP address to be either static or dynamic, and change the inbound and outbound security rules.

Adding inbound and outbound rules
Adding inbound and outbound security rules to the VM NIC is a very simple process; all you need to do is follow these steps:

Navigate to the desired VM.
Scroll down to Networking, under SETTINGS, as shown in the following screenshot:
Figure 3.11: VM networking settings

To add inbound and outbound security rules, you have to click on either Add inbound or Add outbound. Once clicked, a new blade will pop up where you have to specify settings using the following fields:

Service: The service specifies the destination protocol and port range for this rule. Here, you can choose a predefined service, such as RDP or SSH, or provide a custom port range.
Port ranges: Here, you need to specify a single port, a port range, or a comma-separated list of single ports or port ranges.
Priority: Here, you enter the desired priority value.
Name: Specify a name for the rule here.
Description: Write a description here that relates to the rule.
Figure 3.12: Adding an inbound rule

Once you have clicked OK, the rule will be applied. Note that the same process applies when adding an outbound rule.

Adding an additional NIC to the VM
Adding an additional NIC starts from the same blade as adding inbound and outbound rules. To add an additional NIC, you have to follow the given steps:

Before adding an additional NIC to the VM, you need to make sure that the VM is in the Stopped (Deallocated) status.
Navigate to Networking on the desired VM.
Click on Attach network interface, and a new blade will pop up. Here, you have to either create a network interface or select an existing one. If you are selecting an existing interface, simply click on OK and you are done. If you are creating a new interface, click on Create network interface, as shown in the following screenshot:
Figure 3.13: Attaching a network interface

A new blade will pop up where you have to specify the following:

Name: The name of the new NIC.
Virtual network: This field will be grayed out, because you cannot attach a VM's NICs to different virtual networks.
Subnet: Select the desired subnet within the virtual network.
Private IP address assignment: Specify whether you want to allocate this IP dynamically or statically.
Network security group: Specify an NSG to be assigned to this NIC.
Private IP address (IPv6): If you want to assign an IPv6 address to this NIC, check this setting.
Subscription: This field will be grayed out, because you cannot have a VM's NICs in different subscriptions.
Resource group: Specify the resource group in which the NIC will exist.
Location: This field will be grayed out, because you cannot have a VM's NICs in different locations.
Figure 3.14: Specify the NIC settings

Once you are done, click Create. Once the network interface is created, you will return to the previous blade. Here, you need to select the NIC you just created and click on OK, as shown in the following screenshot:
Figure 3.15: Attaching the NIC

Configuring the NICs
The Network Interface Cards (NICs) include some configuration options that you might be interested in, as follows:

To navigate to the desired NIC, you can search for the Network interfaces blade, as shown in the following screenshot:
Figure 3.16: Searching for the network interfaces blade
Then, the blade will pop up, from which you can select the desired NIC, as shown in the following screenshot:
Figure 3.17: Select the desired NIC
You can also navigate back to the VM via | Networking | and then click on the desired NIC, as shown in the following screenshot:
Figure 3.18: The VM NIC

To configure the NIC, you need to follow the given steps:

Once the NIC blade is opened, navigate to IP configurations, as shown in the following screenshot:
Figure 3.19: NIC blade overview
To enable IP forwarding, click on Enabled and then click Save. Enabling this feature will cause the NIC to receive traffic that is not destined for its own IP address; traffic will be sent with a different source IP.
To add another IP to the NIC, click on Add, and a new blade will pop up, for which you have to specify the following:
Name: The name of the IP.
Type: This field will be grayed out, because a primary IP already exists; therefore, this one will be secondary.
Allocation: Specify whether the allocation method is static or dynamic.
IP address: Enter a static IP address that belongs to the same subnet that the NIC belongs to.
If you have selected dynamic allocation, you cannot enter the IP address statically.
Public IP address: Specify whether or not you need a public IP address for this IP configuration; if you do, you will be asked to configure the required settings.
Figure 3.20: Configure the IP configuration settings

Click on Configure required settings for the public IP address, and a new blade will pop up from which you can select an existing public IP address or create a new one, as shown in the following screenshot:
Figure 3.21: Create a new public IP address

Click on OK and you will return to the blade, as shown in Figure 3.20, with the following warning:
Figure 3.22: Warning for adding a new IP address

In this case, you need to plan the addition of the new IP address so that the VM is not restarted during working hours.

Azure VNet considerations for Azure VMs
Building VMs in Azure is a common task, but to do this task well, and to make the VMs operate properly, you need to understand the considerations of Azure VNets for Azure VMs. These considerations are as follows:

Azure VNets enable you to bring your own IPv4/IPv6 addresses and assign them to Azure VMs, statically or dynamically.
You do not have access to the role that acts as DHCP or provides IP addresses; you can only control the ranges you want to use, in the form of address ranges and subnets.
Installing a DHCP role on one of the Azure VMs is currently unsupported; this is because Azure does not use a traditional Layer-2 or Layer-3 topology, and instead uses Layer-3 traffic with tunneling to emulate a Layer-2 LAN.
Private IP addresses can be used for internal communication; external communication can be done via public IP addresses.
You can assign multiple private and public IP addresses to a single VM.
You can assign multiple NICs to a single VM.
By default, all the VMs within the same virtual network can communicate with each other, unless otherwise specified by an NSG on a subnet within this virtual network.
The network security group (NSG) can sometimes cause an overhead; without this overhead, however, all VMs within the same subnet would communicate with each other.
By default, an inbound security rule is created for remote desktop for Windows-based VMs, and for SSH for Linux-based VMs.
The inbound security rules are first applied on the NSG of the subnet, and then on the VM NIC NSG; for example, if the subnet's NSG allows HTTP traffic, it will pass through it, but it may not reach its destination if the VM NIC NSG does not allow it.
The outbound security rules are applied on the VM NIC NSG first, and then on the subnet NSG.
Multiple NICs assigned to a VM can exist in different subnets.
Azure VMs with multiple NICs in the same availability set do not have to have the same number of NICs, but the VMs must have at least two NICs.
When you attach a NIC to a VM, you need to ensure that they exist in the same location and subscription.
The NIC and the VNet must exist in the same subscription and location.
The NIC's MAC address cannot be changed until the VM to which the NIC is assigned is deleted.
Once the VM is created, you cannot change the VNet to which it is assigned; however, you can change the subnet to which the VM is assigned.
You cannot attach an existing NIC to a VM during its creation, but you can add an existing NIC as an additional NIC.
By default, a dynamic public IP address is assigned to the VM during creation, but this address will change if the VM is stopped or deleted; to keep it from changing, you need to make sure its IP address is static.
In a multi-NIC VM, the NSG that is applied to one NIC does not affect the others.

If you found this post useful, do check out the book Hands-On Networking with Azure to design and implement Azure networking for Azure VMs.

Read More:
Introducing Azure Sphere – A secure way of running your Internet of Things devices
Learn Azure Serverless computing for free: Download e-book


Bitcoin

Packt
10 Feb 2017
13 min read
In this article by Imran Bashir, the author of the book Mastering Blockchain, we will look at bitcoin and its importance as an electronic cash system. Bitcoin is the first application of blockchain technology, and in this article, readers will be introduced to the bitcoin technology in detail. Bitcoin started a revolution with the introduction of the very first fully decentralized digital currency, one that has proven to be extremely secure and stable. It has also sparked great interest in academic and industrial research and introduced many new research areas.

Since its introduction in 2008, bitcoin has gained much popularity and is currently the most successful digital currency in the world, with billions of dollars invested in it. It is built on decades of research in the fields of cryptography, digital cash, and distributed computing. In the following section, a brief history is presented in order to provide the background required to understand the foundations behind the invention of bitcoin.

Digital currencies have been an active area of research for many decades. Early proposals to create digital cash go as far back as the early 1980s. In 1982, David Chaum proposed a scheme that used blind signatures to build untraceable digital currency. In this scheme, a bank would issue digital money by signing a blinded, random serial number presented to it by the user. The user could then use the digital token signed by the bank as currency. The limitation of this scheme is that the bank has to keep track of all used serial numbers. It is a central system by design and requires the users' trust.

Later on, in 1990, David Chaum proposed a refined version named ecash that used not only blind signatures, but also some private identification data, to craft a message that was then sent to the bank. This scheme allowed detection of double spending, but did not prevent it. If the same token was used at two different locations, the identity of the double spender would be revealed. ecash could only represent a fixed amount of money.

Adam Back's hashcash, introduced in 1997, was originally proposed to thwart email spam. The idea behind hashcash is to solve a computational puzzle that is easy to verify, but comparatively difficult to compute. For a single user sending a single email, the extra computational effort is not noticeable, but someone sending a large number of spam emails would be discouraged, as the time and resources required to run the spam campaign would increase substantially.

B-money was proposed by Wei Dai in 1998; it introduced the idea of using proof of work to create money. A major weakness of the system was that an adversary with higher computational power could generate unsolicited money without giving the network the chance to adjust to an appropriate difficulty level. The system also lacked details on the consensus mechanism between nodes, and some security issues, such as Sybil attacks, were not addressed. Around the same time, Nick Szabo introduced the concept of bit gold, which was also based on a proof of work mechanism, but had the same problems as b-money, with one exception: the network difficulty level was adjustable.

Tomas Sander and Amnon Ta-Shma introduced an ecash scheme in 1999 that, for the first time, used Merkle trees to represent coins and zero-knowledge proofs to prove possession of coins. In this scheme, a central bank was required, which kept a record of all used serial numbers.
This scheme allowed users to be fully anonymous, albeit at some computational cost. RPOW (Reusable Proof of Work) was introduced by Hal Finney in 2004; it used Adam Back's hashcash scheme as proof of the computational resources spent to create the money. This was also a central system, which kept a central database to track all used PoW tokens. It was an online system that used remote attestation, made possible by a trusted computing platform (TPM hardware).

All the schemes mentioned above were intelligently designed, but each was weak in one aspect or another. In particular, all of these schemes rely on a central server that the users are required to trust.

Bitcoin
In 2008, the bitcoin paper, Bitcoin: A Peer-to-Peer Electronic Cash System, was written by Satoshi Nakamoto. The first key idea introduced in the paper is that of a purely peer-to-peer electronic cash that does not need an intermediary bank to transfer payments between peers. Bitcoin is built on decades of cryptographic research, such as the research on Merkle trees, hash functions, public key cryptography, and digital signatures. Moreover, ideas like bit gold, b-money, hashcash, and cryptographic time stamping provided the foundations for the invention of bitcoin. All these technologies are cleverly combined in bitcoin to create the world's first decentralized currency. The key issue addressed in bitcoin is an elegant solution to the Byzantine Generals problem, along with a practical solution to the double spend problem. The value of bitcoin has increased significantly since 2011, as shown in the graph below:

Bitcoin price and volume since 2012 (on a logarithmic scale)

Regulation of bitcoin is a controversial subject; as much as it is a libertarian's dream, law enforcement agencies and governments are proposing various regulations to control it, such as the BitLicense issued by New York State's Department of Financial Services. This is a license issued to businesses that perform activities related to virtual currencies. The growth of bitcoin is also due to the so-called network effect. Also called demand-side economies of scale, this is a concept which basically means that the more users use the network, the more valuable it becomes. Over time, exponential growth has been seen in the bitcoin network. Even though the price of bitcoin is quite volatile, it has increased significantly over the last few years. Currently (at the time of writing), the bitcoin price is 815 GBP.

Bitcoin definition
Bitcoin can be defined in various ways: it's a protocol, a digital currency, and a platform. It is a combination of a peer-to-peer network, protocols, and software that facilitate the creation and usage of the digital currency named bitcoin. Note that Bitcoin with a capital B is used to refer to the Bitcoin protocol, whereas bitcoin with a lowercase b is used to refer to bitcoin, the currency. Nodes in this peer-to-peer network talk to each other using the Bitcoin protocol. Decentralization of currency was made possible for the first time with the invention of Bitcoin. Moreover, the double spending problem was solved in an elegant and ingenious way. The double spending problem arises when, for example, a user sends coins to two different users at the same time, and they are verified independently as valid transactions.

Keys and addresses
Elliptic curve cryptography is used to generate public and private key pairs in the Bitcoin network. A Bitcoin address is created by taking the public key corresponding to a private key and hashing it twice, first with the SHA256 algorithm and then with RIPEMD160.
The resultant 160-bit hash is then prefixed with a version number and finally encoded with the Base58Check encoding scheme. Bitcoin addresses are 26 to 35 characters long and begin with the digit 1 or 3. A typical bitcoin address looks like the following string:

1ANAguGG8bikEv2fYsTBnRUmx7QUcK58wt

Addresses are also commonly encoded in a QR code for easy sharing. The QR code of the preceding address is as follows:

QR code of a bitcoin address 1ANAguGG8bikEv2fYsTBnRUmx7QUcK58wt

There are currently two types of addresses: the commonly used P2PKH type, starting with 1, and the P2SH type, starting with 3. In the early days, bitcoin used direct Pay-to-Pubkey, which is now superseded by P2PKH; however, direct Pay-to-Pubkey is still used in bitcoin for coinbase addresses. An address should not be used more than once; otherwise, privacy and security issues can arise. Avoiding address reuse mitigates these issues to some extent. Bitcoin has other security issues as well, such as transaction malleability, which requires a different approach to resolve.

Private key and bitcoin address in a paper wallet, from bitaddress.org

Public keys in bitcoin

In public key cryptography, public keys are generated from private keys. Bitcoin uses ECC based on the SECP256K1 standard. A private key is randomly selected and is 256 bits in length. Public keys can be represented in uncompressed or compressed format. A public key is basically the x and y coordinates of a point on an elliptic curve; in uncompressed format, it is presented with a prefix of 04 in hexadecimal. The x and y coordinates are each 32 bytes in length. In total, a compressed public key is 33 bytes long, compared to 65 bytes in uncompressed format. The compressed version of a public key basically includes only the x part, since the y part can be derived from it. Bitcoin clients initially used uncompressed keys, but starting with Bitcoin Core client 0.6, compressed keys are used as the standard. Keys are identified by various prefixes, described as follows:

- Uncompressed public keys use 0x04 as the prefix.
- A compressed public key starts with 0x03 if the y coordinate of the public key is odd.
- A compressed public key starts with 0x02 if the y coordinate of the public key is even.

A more mathematical description of why this works is given later. If the ECC graph is visualized, it reveals that the y coordinate can lie either below or above the x-axis, and as the curve is symmetric, only its parity in the prime field needs to be stored alongside x.

Private keys in bitcoin

Private keys are basically 256-bit numbers chosen in the range specified by the SECP256K1 ECDSA recommendation. Any randomly chosen 256-bit number from 0x1 to 0xFFFF FFFF FFFF FFFF FFFF FFFF FFFF FFFE BAAE DCE6 AF48 A03B BFD2 5E8C D036 4140 is a valid private key. Private keys are usually encoded using the Wallet Import Format (WIF) in order to make them easier to copy and use. WIF can be converted to a private key and vice versa; the steps are described later. The Mini Private Key Format is also sometimes used to encode the key in under 30 characters, allowing storage where physical space is limited, for example, etching on physical coins or damage-resistant QR codes. The Bitcoin Core client also allows encryption of the wallet that contains the private keys.

Bitcoin currency units

Bitcoin currency units are described as follows. The smallest bitcoin denomination is the satoshi, one hundred millionth of a bitcoin.
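To make the preceding encoding steps concrete, here is a minimal Python sketch of address derivation and WIF encoding. This is an illustrative reimplementation, not the Bitcoin Core code, and it assumes your Python build exposes RIPEMD160 through hashlib, which depends on the underlying OpenSSL; it also implements the Base58Check encoding described in the next section:

import hashlib

# the 58 symbols used by bitcoin: no 0, O, I, or l
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(payload: bytes) -> str:
    """Append a 4-byte double-SHA256 checksum, then encode in base 58."""
    data = payload + hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    n = int.from_bytes(data, "big")
    encoded = ""
    while n > 0:
        n, rem = divmod(n, 58)
        encoded = ALPHABET[rem] + encoded
    # each leading zero byte is represented by a '1'
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + encoded

def pubkey_to_address(pubkey: bytes) -> str:
    """SHA256, then RIPEMD160, then version-prefixed Base58Check."""
    ripemd = hashlib.new("ripemd160", hashlib.sha256(pubkey).digest()).digest()
    return base58check(b"\x00" + ripemd)   # 0x00 = mainnet P2PKH version byte

def private_key_to_wif(priv32: bytes, compressed: bool = True) -> str:
    """Encode a 32-byte private key in Wallet Import Format (mainnet)."""
    return base58check(b"\x80" + priv32 + (b"\x01" if compressed else b""))

Feeding pubkey_to_address a 33- or 65-byte public key yields an address starting with 1, matching the P2PKH format described above.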
Base58Check encoding

This encoding is used to limit confusion between various characters, such as 0, O, I, and l, which can look the same in some fonts. The encoding takes a binary byte array and converts it into a human-readable string composed from a set of 58 alphanumeric symbols. More explanation and logic can be found in the base58.h source file in the bitcoin source code.

Explanation from the bitcoin source code

Bitcoin addresses are encoded using Base58Check encoding.

Vanity addresses

As bitcoin addresses are based on base-58 encoding, it is possible to generate addresses that contain human-readable messages. An example is shown as follows:

Public address encoded in QR

Vanity addresses are generated using a purely brute-force method. An example is shown as follows:

Vanity address generated from https://bitcoinvanitygen.com/

Transactions

Transactions are at the core of the bitcoin ecosystem. A transaction can be as simple as sending some bitcoins to a bitcoin address, or it can be quite complex, depending on the requirements. Each transaction is composed of at least one input and one output. Inputs can be thought of as coins being spent that were created in a previous transaction, and outputs as coins being created. If a transaction is minting new coins, there is no input and therefore no signature is needed. If a transaction sends coins to some other user (a bitcoin address), it needs to be signed by the sender with their private key, and a reference is required to the previous transaction to show the origin of the coins. Coins are, in fact, unspent transaction outputs, represented in satoshis. Transactions are not encrypted and are publicly visible in the blockchain. Blocks are made up of transactions, and these can be viewed using any online blockchain explorer.

Transaction life cycle

1. A user/sender sends a transaction using wallet software or some other interface.
2. The wallet software signs the transaction using the sender's private key.
3. The transaction is broadcast to the Bitcoin network using a flooding algorithm.
4. Mining nodes include the transaction in the next block to be mined.
5. Mining starts, and once a miner solves the Proof of Work problem, it broadcasts the newly mined block to the network. Proof of Work is explained in detail later.
6. The nodes verify the block and propagate it further, and confirmations start to generate.
7. Finally, the confirmations start to appear in the receiver's wallet, and after approximately six confirmations, the transaction is considered finalized and confirmed. However, six is just a recommended number; the transaction can be considered final even after the first confirmation. The key idea behind waiting for six confirmations is that the probability of double spending is virtually eliminated by that point.

Transaction structure

A transaction at a high level contains metadata, inputs, and outputs. Transactions are combined together to create a block. The transaction structure is shown in the following table:

Field | Size | Description
Version number | 4 bytes | Used to specify the rules to be used by miners and nodes for transaction processing.
Input counter | 1 to 9 bytes | The number of inputs included in the transaction.
List of inputs | variable | Specifies one or more transaction inputs. Each input is composed of several fields, including the previous transaction hash, previous Txout-index, Txin-script length, Txin-script, and an optional sequence number. The first transaction in a block is also called the coinbase transaction.
Out-counter | 1 to 9 bytes | A positive integer representing the number of outputs.
List of outputs | variable | The outputs included in the transaction.
lock_time | 4 bytes | Defines the earliest time when a transaction becomes valid. It is either a Unix timestamp or a block number.

Metadata: This part of the transaction contains values such as the size of the transaction, the number of inputs and outputs, the hash of the transaction, and a lock_time field. Every transaction has a prefix specifying the version number.

Inputs: Generally, each input spends a previous output. Each output is considered an unspent transaction output (UTXO) until an input consumes it.

Outputs: Outputs have only two fields, and they contain instructions for sending bitcoins. The first field contains the amount in satoshis, whereas the second field is a locking script containing the conditions that need to be met in order for the output to be spent. More information about transaction spending using locking and unlocking scripts and producing outputs is discussed later.

Verification: Verification is performed using Bitcoin's scripting language.

Summary

In this article, we learned about the importance of bitcoin as a digital currency, and how bitcoin keys and addresses are generated and encoded using various hashing and encoding techniques.


Unity 2018.2: the second Unity release this year

Sugandha Lahoti
12 Jul 2018
4 min read
It has only been two months since the release of Unity 2018.1, and Unity is back with its next release for the year. Unity 2018.2 builds on the features of Unity 2018.1, such as the Scriptable Render Pipeline (SRP), Shader Graph, and the Entity Component System. It also adds support for managed code debugging on iOS and Android, along with the final release of 64-bit (ARM64) support for Android devices. Let us look at the features in detail.

Scriptable Render Pipeline improvements

As mentioned above, Unity 2018.2 builds on the Scriptable Render Pipeline introduced in 2018.1. Version 2 comes with two additional features:

- The SRP batcher: a new Unity engine inner loop for speeding up CPU rendering without compromising GPU performance. It works with the High Definition Render Pipeline (HDRP) and Lightweight Render Pipeline (LWRP), with PC DirectX-11, Metal, and PlayStation 4 currently supported.
- Scriptable shader variant stripping: manages the number of shader variants generated without affecting iteration time or maintenance complexity. This leads to a dramatic reduction in player build time and data size.

Performance optimizations in Lightweight Render Pipeline and High Definition Render Pipeline

Unity 2018.2 improves the performance of the Lightweight Render Pipeline (LWRP) with optimized tile utilization. This feature adjusts the number of loads and stores to tiles in order to optimize the memory usage of mobile GPUs. It also shades lights in batches, which reduces overdraw and draw calls. Unity 2018.2 also brings better high-end visual quality to the High Definition Render Pipeline (HDRP). Improvements include volumetrics, glossy planar reflection, geometric specular AA, proxy screen-space reflection and refraction, mesh decals, and shadow masks.

Improvements in C# Job System, Entity Component System, and Burst Compiler

Unity 2018.2 introduces new reactive system samples in the Entity Component System (ECS) to let developers respond to changes in component state and emulate event-driven behavior. Burst compiling for ECS is now available on all editor platforms (Windows, Mac, Linux), and game developers are able to build AOT for standalone players (desktop, PS4, Xbox, iOS, and Android). The C# Job System allows developers to take full advantage of the multicore processors currently available and write safe parallel code without worrying about low-level thread management.

Updates to Shader Graph

Shader Graph, introduced as a preview package in Unity 2018.1, allows developers to build shaders visually. Unity 2018.2 adds further improvements, such as High Definition Render Pipeline (HDRP) support, manual modification of vertex position, editing of the reference name for a property, editable paths for graphs, Texture 2D and 3D arrays, and more.

Texture Mipmap Streaming

Game developers can now stream texture mipmaps into memory on demand to reduce the texture memory requirements of a Unity application. This feature speeds up initial load time, gives developers more control, and is simple to enable and manage.

Particle System improvements

Unity 2018.2 brings seven major improvements to the Particle System:

- Support for eight UVs, to use more custom data.
- MinMaxCurve and MinMaxGradient types in custom scripts, to match the style used by the Particle System UI.
- Particle Systems now convert colors into linear space, when appropriate, before uploading them to the GPU.
- Two new modes in the Shape module to emit from a sprite or SpriteRenderer component.
- Two new APIs for baking the geometry of a Particle System into a mesh.
- Show Only Selected (aka Solo mode) with the Play/Restart/Stop controls.
- Shaders that use separate alpha textures can now be used with particles, including when using sprites in the Texture Sheet Animation module.

Unity Hub

Unity Hub (v1.0) is a new tool, to be released soon, designed to streamline the onboarding and setup process for all users. It is a centralized location to manage all Unity projects, simplifying how developers find, download, and manage Unity Editor licenses and add-on components. Hub 1.0 will ship with:

- Project templates
- Custom install locations
- Asset Store packages added to new projects
- Modified project build targets
- Editor components added post-installation

There are additional features, such as Vulkan support for the Editor on Windows and Linux, and improvements to the Progressive Lightmapper, 2D games, the SVG importer, and more. Unity 2018.2 also supports .java and .cpp source files as plugins in a Unity project, along with updates to Cinematics and the Unity core engine. In total, there are 183 improvements and 1,426 fixes in the Unity 2018.2 release. Refer to the release notes to view the full list of new features, improvements, and fixes.

Convolutional Neural Networks with Reinforcement Learning

Packt
06 Apr 2017
9 min read
In this article by Antonio Gulli and Sujit Pal, the authors of the book Deep Learning with Keras, we will learn about reinforcement learning, or more specifically deep reinforcement learning, that is, the application of deep neural networks to reinforcement learning. We will also see how convolutional neural networks leverage spatial information, which makes them very well suited for classifying images.

Deep convolutional neural network

A deep convolutional neural network (DCNN) consists of many neural network layers. Two different types of layers, convolutional and pooling, are typically alternated. The depth of each filter increases from left to right in the network. The last stage is typically made of one or more fully connected layers, as shown here:

There are three key intuitions behind ConvNets:

- Local receptive fields
- Shared weights
- Pooling

Let's review them together.

Local receptive fields

If we want to preserve spatial information, it is convenient to represent each image as a matrix of pixels. A simple way to encode the local structure is then to connect a submatrix of adjacent input neurons to one single hidden neuron belonging to the next layer. That single hidden neuron represents one local receptive field. Note that this operation is named convolution, and it gives this type of network its name. Of course, we can encode more information by having overlapping submatrices. For instance, suppose that the size of each single submatrix is 5 x 5 and that those submatrices are used with MNIST images of 28 x 28 pixels. Then we will be able to generate 24 x 24 local receptive field neurons in the next hidden layer, because it is possible to place a 5 x 5 submatrix in only 24 positions along each direction before touching the borders of the image. In Keras, the step by which the submatrix slides across the image is called the stride length, and this is a hyperparameter that can be fine-tuned during the construction of our nets.

A feature map is the set of outputs produced by one such mapping from one layer to the next. Of course, we can have multiple feature maps that learn independently from each hidden layer. For instance, we can start with 28 x 28 input neurons for processing MNIST images, and then compute k feature maps of size 24 x 24 neurons each (again with 5 x 5 submatrices) in the next hidden layer.

Shared weights and bias

Suppose that we want to move away from the raw pixel representation by gaining the ability to detect the same feature independently of the location where it appears in the input image. A simple intuition is to use the same set of weights and bias for all the neurons in a hidden layer. In this way, each layer will learn a set of position-independent latent features derived from the image.

Assuming that the input image has shape (256, 256) on 3 channels with tf (TensorFlow) ordering, it is represented as (256, 256, 3). Note that with th (Theano) ordering, the channels dimension (the depth) is at index 1; in tf mode, it is at index 3. In Keras, if we want to add a convolutional layer with an output dimensionality of 32 and a 3 x 3 extent for each filter, we write:

model = Sequential()
model.add(Convolution2D(32, 3, 3, input_shape=(256, 256, 3)))

This means that we are applying a 3 x 3 convolution on a 256 x 256 image with 3 input channels (or input filters), resulting in 32 output channels (or output filters).
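Putting the convolution layer above together with the pooling layer discussed next, here is a minimal, self-contained sketch of a complete model. It uses the same Keras 1.x API as the article; the number of classes, the activation choices, and the optimizer are illustrative assumptions, and newer Keras versions spell the layer Conv2D(32, (3, 3)):

from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Flatten, Dense

model = Sequential()
# 32 filters, each 3 x 3, applied to a 256 x 256 RGB image (tf ordering)
model.add(Convolution2D(32, 3, 3, activation='relu',
                        input_shape=(256, 256, 3)))
# summarize each 2 x 2 region by its maximum activation
model.add(MaxPooling2D(pool_size=(2, 2)))
# flatten the feature maps and classify into (assumed) 10 classes
model.add(Flatten())
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='adam', metrics=['accuracy'])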
An example of convolution is provided in the following diagram:

Pooling layers

Suppose that we want to summarize the output of a feature map. Again, we can use the spatial contiguity of the output produced by a single feature map and aggregate the values of a submatrix into one single output value that synthetically describes the meaning associated with that physical region.

Max pooling

One easy and common choice is the so-called max pooling operator, which simply outputs the maximum activation observed in the region. In Keras, if we want to define a max pooling layer of size 2 x 2, we write:

model.add(MaxPooling2D(pool_size=(2, 2)))

An example of the max pooling operation is given in the following diagram:

Average pooling

Another choice is average pooling, which simply aggregates a region into the average value of the activations observed in that region. Keras implements a large number of pooling layers, and a complete list is available online. In short, all the pooling operations are nothing more than a summary operation on a given region.

Reinforcement learning

Our objective is to build a neural network to play the game of catch. Each game starts with a ball being dropped from a random position at the top of the screen. The objective is to move a paddle at the bottom of the screen, using the left and right arrow keys, to catch the ball by the time it reaches the bottom. As games go, this is quite simple. At any point in time, the state of this game is given by the (x, y) coordinates of the ball and the paddle. Most arcade games tend to have many more moving parts, so a general solution is to provide the entire current game screen image as the state. The following diagram shows four consecutive screenshots of our catch game:

Astute readers might note that our problem could be modeled as a classification problem, where the input to the network is the game screen image and the output is one of three actions: move left, stay, or move right. However, this would require us to provide the network with training examples, possibly from recordings of games played by experts. An alternative and simpler approach is to build a network and have it play the game repeatedly, giving it feedback based on whether it succeeds in catching the ball or not. This approach is also more intuitive and is closer to the way humans and animals learn.

The most common way to represent such a problem is through a Markov Decision Process (MDP). Our game is the environment within which the agent is trying to learn. The state of the environment at time step t is given by st (and contains the location of the ball and paddle). The agent can perform certain actions at (such as moving the paddle left or right). These actions can sometimes result in a reward rt, which can be positive or negative (such as an increase or decrease in the score). Actions change the environment and can lead to a new state st+1, where the agent can perform another action at+1, and so on. The set of states, actions, and rewards, together with the rules for transitioning from one state to another, make up a Markov decision process. A single game is one episode of this process, and is represented by a finite sequence of states, actions, and rewards:

s0, a0, r1, s1, a1, r2, ..., s(n-1), a(n-1), rn, sn

Since this is a Markov decision process, the probability of state st+1 depends only on the current state st and action at.

Maximizing future rewards

As an agent, our objective is to maximize the total reward from each game.
The total reward for one episode can be represented as follows:

R = r1 + r2 + ... + rn

In order to maximize the total reward, the agent should try to maximize the total reward from any time point t in the game. The total reward at time step t is given by Rt and is represented as:

Rt = rt + rt+1 + ... + rn

However, it is harder to predict the value of the rewards the further we go into the future. To take this into consideration, our agent should instead try to maximize the total discounted future reward at time t. This is done by discounting the reward at each future time step by a factor γ over the previous time step:

Rt = rt + γ rt+1 + γ^2 rt+2 + ... + γ^(n-t) rn

If γ is 0, then our network does not consider future rewards at all; if γ is 1, future rewards are valued just as much as immediate ones, which only makes sense when the environment is deterministic. A good value for γ is around 0.9. Factoring the equation allows us to express the total discounted future reward at a given time step recursively, as the sum of the current reward and the total discounted future reward at the next time step:

Rt = rt + γ Rt+1

Q-learning

Deep reinforcement learning utilizes a model-free reinforcement learning technique called Q-learning. Q-learning can be used to find an optimal action for any given state in a finite Markov decision process. Q-learning tries to maximize the value of the Q-function, which represents the maximum discounted future reward when we perform action a in state s:

Q(st, at) = max Rt+1

Once we know the Q-function, the optimal action a at a state s is the one with the highest Q-value. We can then define a policy π(s) that gives us the optimal action at any state:

π(s) = argmax_a Q(s, a)

We can define the Q-function for a transition point (st, at, rt, st+1) in terms of the Q-function at the next point (st+1, at+1, rt+1, st+2), similar to what we did with the total discounted future reward:

Q(st, at) = rt + γ max Q(st+1, at+1)

This equation is known as the Bellman equation, and the Q-function can be approximated using it. You can think of the Q-function as a lookup table (called a Q-table) where the states (denoted by s) are rows and the actions (denoted by a) are columns, and the elements (denoted by Q(s, a)) are the rewards that you get if you are in the state given by the row and take the action given by the column. The best action to take in any state is the one with the highest reward. We start by randomly initializing the Q-table, then carry out random actions and observe the rewards to update the Q-table iteratively, according to the following algorithm:

initialize Q-table Q
observe initial state s
repeat
    select and carry out action a
    observe reward r and move to new state s'
    Q(s, a) = Q(s, a) + α (r + γ max_a' Q(s', a') - Q(s, a))
    s = s'
until game over

You will realize that the algorithm is basically doing stochastic gradient descent on the Bellman equation, backpropagating the reward through the state space (or episode) and averaging over many trials (or epochs). Here, α is the learning rate, which determines how much of the difference between the previous Q-value and the discounted new maximum Q-value should be incorporated. (A minimal runnable sketch of this loop is shown after the summary below.)

Summary

We have seen an application of deep neural networks to reinforcement learning. We have also seen convolutional neural networks and why they are well suited for classifying images.
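As promised above, here is a minimal, runnable Python sketch of the Q-table update loop. It uses a hypothetical toy environment (a five-cell strip where the agent is rewarded for reaching the right end) rather than the book's catch game, and the hyperparameter values are illustrative assumptions:

import random

# Hypothetical toy task: an agent walks a strip of 5 cells
# and earns a reward of 1 for reaching the rightmost cell.
N_STATES, ACTIONS = 5, (0, 1)            # action 0 = left, 1 = right
alpha, gamma, epsilon = 0.1, 0.9, 0.1    # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]   # the Q-table: states x actions

def step(state, action):
    """Deterministic transition; reward 1 only on reaching the right end."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action else -1)))
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

for episode in range(500):
    state, done = 0, False
    while not done:
        if random.random() < epsilon:            # explore
            action = random.choice(ACTIONS)
        else:                                    # exploit, breaking ties randomly
            best = max(Q[state])
            action = random.choice([a for a in ACTIONS if Q[state][a] == best])
        nxt, reward, done = step(state, action)
        # the Bellman update from the algorithm above
        target = reward if done else reward + gamma * max(Q[nxt])
        Q[state][action] += alpha * (target - Q[state][action])
        state = nxt

print(Q)   # the learned Q-values now favour "right" in every state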


Behavior Scripting in C# and JavaScript for game developers

Packt Editorial Staff
16 Apr 2018
16 min read
Game behaviors, things like enemy AI, sequences of events, or the rules of a puzzle, are commonly expressed in a scripting language, probably in a simple top-to-bottom recipe form, without using objects or much branching. Behavior scripts are often associated with an object instance in game code, expressed in an object-oriented language such as C++ or C#, which does the work. In today's post, we will introduce you to new classes and behavior scripts. The details of a new C# behavior and a new JavaScript behavior are also covered. We will further explore:

- Wall attack
- Declaring public variables
- Assigning scripts to objects
- Moving the camera

To take your first steps into programming, we will look at a simple example of the same functionality in both C# and JavaScript, the two main programming languages used by Unity developers. It is also possible to write Boo-based scripts, but these are rarely used, except by those with existing experience in the language.

To follow the next steps, you may choose either JavaScript or C#, and then continue with your preferred language. To begin, click on the Create button on the Project panel, then choose either JavaScript or C# script, or simply click on the Add Component button on the Main Camera Inspector panel. Your new script will be placed into the Project panel, named NewBehaviourScript, and will show an icon of a page with either JavaScript or C# written on it. When you select your new script, Unity offers a preview of what is already in the script in the Inspector, with an accompanying Edit button that, when clicked, launches the script in the default script editor, MonoDevelop. You can also launch a script in your script editor at any time by double-clicking on its icon in the Project panel.

New behaviour script or class

New scripts can be thought of as new classes in Unity terms. If you are new to programming, think of a class as a set of actions, properties, and other stored information that can be accessed under the heading of its name. For example, a class called Dog may contain properties such as color, breed, size, or gender, and have actions such as rollOver or fetchStick. These properties can be described as variables, while the actions can be written as functions, also known as methods.

In this example, to refer to the breed variable, a property of the Dog class, we refer to the class it is in, Dog, and use a period (full stop) to refer to the variable, in the following way:

Dog.breed;

If we want to call a function within the Dog class, we might write, for example, the following:

Dog.fetchStick();

We can also pass arguments into functions; these aren't the everyday arguments we have with one another! Think of them as a way of modifying the behavior of a function. For example, with our fetchStick function, we might build in an argument that defines how quickly our dog will fetch the stick. This might be called as follows:

Dog.fetchStick(25);

While these are abstract examples, it often helps to transpose coding into commonplace examples in order to make sense of it. As we continue, think back to this example, or come up with some examples of your own, to help train yourself to understand classes of information and their properties. When you write a script in C# or JavaScript, you are writing a new class or classes with their own properties (variables) and instructions (functions) that you can call into play at the desired moment in your games.
What's inside a new C# behaviour

When you begin with a new C# script, Unity gives you the following code to get started:

using UnityEngine;
using System.Collections;

public class NewBehaviourScript : MonoBehaviour {

    // Use this for initialization
    void Start () {

    }

    // Update is called once per frame
    void Update () {

    }
}

This begins with the two necessary calls to the Unity engine itself:

using UnityEngine;
using System.Collections;

It goes on to establish the class named after the script. With C#, you are required to name your scripts with names matching the class declared inside the script itself. This is why you will see public class NewBehaviourScript : MonoBehaviour { at the beginning of a new C# document, as NewBehaviourScript is the default name that Unity gives to newly generated scripts. If you rename your script in the Project panel when it is created, Unity will rewrite the class name in your C# script.

Code in classes

When writing code, most of your functions, variables, and other scripting elements will be placed within the class of a script in C#. Within, in this context, means that the code must occur after the class declaration and before its corresponding closing } at the bottom of the script. So, unless told otherwise, while following the instructions, assume that your code should be placed within the class established in the script. In JavaScript, this is less relevant, as the entire script is the class; the class is not explicitly established.

Basic functions

Unity as an engine has many of its own functions that can be used to call different features of the game engine, and it includes two important ones when you create a new script in C#. Functions (also known as methods) most often start with the void keyword in C#. This is the function's return type, which is the kind of data a function may result in. As most functions are simply there to carry out instructions rather than return information, you will often see void at the beginning of their declarations, which simply means that no data will be returned. Some basic functions are explained as follows:

- Start(): This is called when the scene first launches, so it is often used, as suggested in the code, for initialization. For example, you may have a core variable that must be set to 0 when the game scene begins, or perhaps a function that spawns your player character in the correct place at the start of a level.
- Update(): This is called in every frame that the game runs, and is crucial for checking the state of various parts of your game during this time, as many different conditions of game objects may change while the game is running.

Variables in C#

To store information in a variable in C#, you use the following syntax:

typeOfData nameOfVariable = value;

Consider the following example:

int currentScore = 5;

Another example would be:

float currentVelocity = 5.86f;

Note that the examples here show numerical data, with int meaning integer, that is, a whole number, and float meaning floating point, that is, a number with a decimal place, which in C# requires the letter f to be placed at the end of the value. This syntax is somewhat different from JavaScript; refer to the Variables in JavaScript section.

What's inside a new JavaScript behaviour?

While fulfilling the same purpose as a C# file, a new empty JavaScript file shows you less, as the entire script itself is considered to be the class. The empty space in the script is considered to be within the opening and closing of the class, as the class declaration itself is hidden.
You will also note that the lines using UnityEngine; and using System.Collections; are hidden in JavaScript, so in a new JavaScript file you will simply be shown the Update() function:

function Update () {

}

You will note that in JavaScript, you declare functions differently, using the term function before the name. You will also need to write declarations of variables and various other scripted elements with a slightly different syntax. We will look at examples of this as we progress.

Variables in JavaScript

The syntax for variables in JavaScript works as follows, and is always preceded by the prefix var:

var variableName : TypeOfData = value;

For example:

var currentScore : int = 0;

Another example is:

var currentVelocity : float = 5.86;

As you may have noticed, the float value does not require the letter f after its value, as it does in C#. As you compare scripts written in the two different languages, you will notice that C# often has stricter rules about how scripts are written, especially regarding explicitly stating the types of data being used.

Comments

In both C# and JavaScript in Unity, you can write comments using:

// two forward-slash symbols for a single-line comment

Another way of doing this would be:

/* a forward slash and a star to open a multiline comment,
   and a star and a forward slash at the end to close it */

You may write comments in the code to help you remember what each part does as you progress. Remember that because comments are not executed as code, you can write whatever you like, including pieces of code. As long as they are contained within a comment, they will never be treated as working code.

Wall attack

Now let's put some of your new scripting knowledge into action and turn our existing scene into an interactive gameplay prototype. In the Project panel in Unity, rename your newly created script Shooter by selecting it, pressing Return (Mac) or F2 (Windows), and typing in the new name. If you are using C#, remember to ensure that the class declaration inside the script matches this script name:

public class Shooter : MonoBehaviour {

As mentioned previously, JavaScript users will not need to do this. To kick-start your knowledge of scripting in Unity, we will write a script to control the camera and allow shooting of a projectile at the wall that we have built. To begin with, we will establish three variables:

- bullet: a variable of type Rigidbody, as it will hold a reference to a physics-controlled object we will make
- power: a floating point number we will use to set the power of shooting
- moveSpeed: another floating point number we will use to define the speed of camera movement when using the arrow keys

These variables must be public member variables in order for them to display as adjustable settings in the Inspector. You'll see this in action very shortly!

Declaring public variables

Public variables are important to understand, as they allow you to create variables that are accessible from other scripts, an important part of game development because it allows for simpler inter-object communication. Public variables are also really useful because they appear as settings you can adjust visually in the Inspector once your script is attached to an object. Private variables are the opposite: they are designed to be accessible only within the scope of the script, class, or function they are defined in, and they do not appear as settings in the Inspector.
C#

Before we begin, as we will not be using it, remove the Start() function from this script by deleting void Start () {}. To establish the required variables, put the following code snippet into your script after the opening of the class, shown as follows:

using UnityEngine;
using System.Collections;

public class Shooter : MonoBehaviour {

    public Rigidbody bullet;
    public float power = 1500f;
    public float moveSpeed = 2f;

    void Update () {

    }
}

Note that in this example, the default explanatory comments and the Start() function have been removed in order to save space.

JavaScript

In order to establish public member variables in JavaScript, you simply need to ensure that your variables are declared outside of any existing function. This is usually done at the top of the script, so to declare the three variables we need, add the following to the top of your new Shooter script so that it looks like this:

var bullet : Rigidbody;
var power : float = 1500;
var moveSpeed : float = 5;

function Update () {

}

Note that JavaScript (UnityScript) is much less declarative and needs less typing to start.

Assigning scripts to objects

In order for this script to be used within our game, it must be attached as a component of one of the game objects within the existing scene. Save your script by choosing File | Save from the top menu of your script editor and return to Unity. There are several ways to assign a script to an object in Unity:

- Drag it from the Project panel and drop it onto the name of an object in the Hierarchy panel.
- Drag it from the Project panel and drop it onto the visual representation of the object in the Scene panel.
- Select the object you wish to apply the script to, and then drag and drop the script to empty space at the bottom of the Inspector view for that object.
- Select the object you wish to apply the script to, and then choose Component | Scripts | and the name of your script from the top menu.

The most common method is the first approach, and it is the most appropriate here, since trying to drag to the camera in the Scene View, for example, would be difficult, as the camera itself doesn't have a tangible surface to drag to. For this reason, drag your new Shooter script from the Project panel and drop it onto the name of Main Camera in the Hierarchy to assign it, and you should see your script appear as a new component, following the existing audio listener component. You will also see its three public variables, bullet, power, and moveSpeed, in the Inspector, as follows:

You can alternatively work directly in the Inspector: press the Add Component button and look for Shooter by typing in the search box. Note that this is valid only if you didn't add the component this way initially; in that case, the Shooter component will already be attached to the camera GameObject.

As you will see, Unity has taken the variable names and given them capital letters, and in the case of our moveSpeed variable, it takes the capital letter in the middle of the phrase to signify the start of a new word in the Inspector, placing a space between the two words when they are shown as a public variable. You can also see here that the bullet variable is not yet set, but it is expecting an object to be assigned to it that has a Rigidbody attached; this is often referred to as a Rigidbody object. Despite the fact that, in Unity, all objects in the scene can be referred to as game objects, when describing an object as a Rigidbody object in scripting, we will only be able to refer to properties and functions of the Rigidbody class.
This is not a problem, however; it simply makes our script more efficient than referring to the entire GameObject class. For more on this, take a look at the script reference documentation for both classes: GameObject and Rigidbody.

Beware that when adjusting values of public variables in the Inspector, any values changed will simply override those written in the script, without rewriting the script itself.

Let's continue working on our script and add some interactivity, so return to your script editor now.

Moving the camera

Next, we will make use of the moveSpeed variable combined with keyboard input in order to move the camera, effectively creating a primitive aiming mechanism for our shot, as we will use the camera as the point to shoot from. As we want to use the arrow keys on the keyboard, we need to be aware of how to address them in the code first. Unity has many inputs that can be viewed and adjusted using the Input Manager: choose Edit | Project Settings | Input:

As seen in this screenshot, two of the default settings for input are Horizontal and Vertical. These rely on an axis-based input that, when the Positive Button is held, builds to a value of 1, and when the Negative Button is held, builds to a value of -1. Releasing either button means that the input's value springs back to 0, as it would with a sprung analog joystick on a gamepad. As Input is also the name of a class, and all named elements in the Input Manager are axes or buttons, in scripting terms we can simply use:

Input.GetAxis("Horizontal");

This receives the current value of the horizontal keys, that is, a value between -1 and 1, depending upon what the user is pressing. Let's put that into practice in our script now, using local variables to represent our axes. By doing this, we can modify the value of these variables later using multiplication, taking them from a maximum value of 1 to a higher number, allowing us to move the camera faster than 1 unit at a time. These variables are not something that we will ever need to set in the Inspector, as Unity assigns their values based on our key input. As such, they can be established as local variables.

Local, private, and public variables

Before we continue, let's take an overview of local, private, and public variables in order to cement your understanding:

- Local variables: These are variables established inside a function; they will not be shown in the Inspector, and are only accessible to the function they are in.
- Private variables: These are established outside a function, and are therefore accessible to any function within their class. However, they are also not visible in the Inspector.
- Public variables: These are established outside a function, are accessible to any function in their class and also to other scripts, and are visible for editing in the Inspector.

Local variables and receiving input

The local variables in C# and JavaScript are shown as follows:

C#

Here is the code for C#:

void Update () {
    float h = Input.GetAxis("Horizontal") * Time.deltaTime * moveSpeed;
    float v = Input.GetAxis("Vertical") * Time.deltaTime * moveSpeed;

JavaScript

Here is the code for JavaScript:

function Update () {
    var h : float = Input.GetAxis("Horizontal") * Time.deltaTime * moveSpeed;
    var v : float = Input.GetAxis("Vertical") * Time.deltaTime * moveSpeed;

The variables declared here, h for Horizontal and v for Vertical, could be named anything we like; it is simply quicker to write single letters.
Generally speaking, we would give these fuller names, not least because some letters cannot be used as variable names; for example, x, y, and z are used for coordinate values and are therefore reserved. As these axes' values can be anything from -1 to 1, they are likely to be numbers with a decimal place, and as such, we must declare them as floating point variables. They are then multiplied, using the * symbol, by Time.deltaTime, which simply means that the value is divided by the number of frames per second (deltaTime is the time it takes from one frame to the next, or the time taken since the Update() function last ran), so the value adds up to a consistent amount per second, regardless of the framerate. The resultant value is then increased by multiplying it by the public variable we made earlier, moveSpeed. This means that although the values of h and v are local variables, we can still affect them by adjusting the public moveSpeed in the Inspector, as it is part of the equation that those variables represent. This is a common practice in scripting, as it takes advantage of publicly accessible settings combined with specific values generated by a function.

Note: You read an excerpt from the book Unity 5.x Game Development Essentials, Third Edition, written by Tommaso Lintrami. Unity is the most popular game engine among indie developers, start-ups, and medium to large independent game development companies. This book is a complete exercise in game development, covering environments, physics, sound, particles, and much more, to get you up and running with Unity rapidly.


React Native Tools and Resources

Packt
03 Jan 2017
14 min read
In this article written by Eric Masiello and Jacob Friedmann, authors of the book Mastering React Native, we will cover:

- Tools that improve upon the React Native development experience
- Ways to build React Native apps for platforms other than iOS and Android
- Great online resources for React Native development

Evaluating React Native Editors, Plugins, and IDEs

I'm hard pressed to think of another topic that developers are more passionate about than their preferred code editor. Of the many options, two popular editors today are GitHub's Atom and Microsoft's Visual Studio Code (not to be confused with Visual Studio 2015). Both are cross-platform editors for Windows, macOS, and Linux that are easily extended with additional features. In this section, I'll detail my personal experience with these tools and where I have found they complement the React Native development experience.

Atom and Nuclide

Facebook has created a package for Atom known as Nuclide that provides a first-class development environment for React Native. It features a built-in debugger similar to Chrome's DevTools, a React Native Inspector (think the Elements tab in Chrome DevTools), and support for the static type checker Flow. Download Atom from https://atom.io/ and Nuclide from https://nuclide.io/.

To install the Nuclide package, click on the Atom menu and then on Preferences..., and then select Packages. Search for Nuclide and click on Install. Once installed, you can actually start and stop the React Native Packager directly from Atom (though you need to launch the simulator or emulator separately) and set breakpoints in Atom itself rather than using Chrome's DevTools. Take a look at the following screenshot:

If you plan to use Flow, Nuclide will identify errors and display them inline. Take the following example: I've annotated the function timesTen such that it expects a number as a parameter and should return a number. However, you can see that there are some errors in the usage. Refer to the following code snippet:

/* @flow */
function timesTen(x: number): number {
  var result = x * 10;
  return 'I am not a number';
}

timesTen("Hello, world!");

Thankfully, the Flow integration will call out these errors in Atom for you. Refer to the following screenshot:

The Flow integration in Nuclide exposes two other useful features. You'll see annotated autocompletion as you type. And, if you hold the Command key and click on a variable or function name, Nuclide will jump straight to the source definition, even if it's defined in a separate file. Refer to the following screenshot:

Visual Studio Code

Visual Studio Code is a first-class editor for JavaScript authors. Out of the box, it's packaged with a built-in debugger that can be used to debug Node applications. Additionally, VS Code comes with an integrated terminal and a git tool that nicely shows visual diffs. Download Visual Studio Code from https://code.visualstudio.com/.

The React Native Tools extension for VS Code adds some useful capabilities to the editor. For starters, you'll be able to execute the React Native: Run-iOS and React Native: Run Android commands directly from VS Code without needing to reach for the terminal, as shown in the following screenshot:

And, while a bit more involved than Atom to configure, you can use VS Code as a React Native debugger.
Take a look at the following screenshot:

The React Native Tools extension also provides IntelliSense for much of the React Native API, as shown in the following screenshot:

When reading through the VS Code documentation, I found it (unsurprisingly) more catered toward Windows users. So, if Windows is your thing, you may feel more at home with VS Code. As a macOS user, I slightly prefer Atom/Nuclide over VS Code. VS Code comes with more useful features out of the box, but that can easily be addressed by installing a few Atom packages. Plus, I found the Flow support with Nuclide really useful. But don't let me dissuade you from VS Code. Both are solid editors with great React Native support, and they're both free, so there's no harm in trying both.

Before totally switching gears, there is one more editor worth mentioning. Deco is an Integrated Development Environment (IDE) built specifically for React Native development. Standing up a new React Native project is super quick, since Deco keeps a local copy of everything you'd get when running react-native init. Deco also makes creating new stateful and stateless components super easy. Download Deco from https://www.decosoftware.com/.

Once you create a new component using Deco, it gives you a nicely prefilled template, including a place to add propTypes and defaultProps (something I often forget to do). Refer to the following screenshot:

From there, you can drag and drop components from the sidebar directly into your code. Deco will auto-populate many of the props for you, as well as add the necessary import statements. Take a look at the following code snippet:

<Image
  style={{
    width: 300,
    height: 200,
  }}
  resizeMode={"contain"}
  source={{uri:'https://unsplash.it/600/400/?random'}}/>

The other nice feature Deco adds is the ability to easily launch your app from the toolbar in any installed iOS simulator or Android AVD. You don't even need to manually open the AVD first; Deco will do it all for you. Refer to the following screenshot:

Currently, creating a new project with Deco starts you off with an outdated version of React Native (version 0.27.2 as of this writing). If you're not concerned with using the latest version, Deco is a great way to get a React Native app up quickly. However, if you require more advanced tooling, I suggest you look at Atom with Nuclide or Visual Studio Code with the React Native Tools extension.

Taking React Native beyond iOS and Android

The development experience is one of the most highly touted features by React Native proponents. But as we well know by now, React Native is more than just a great development experience. It's also about building cross-platform applications with a common language and, often times, reusable code and components. Out of the box, the Facebook team has provided tremendous support for iOS and Android. And thanks to the community, React Native has expanded to other promising platforms. In this section, I'll take you through a few of these React Native projects. I won't go into great technical depth, but I'll provide a high-level overview and explain how to get each running.

Introducing React Native Web

React Native Web is an interesting one. It treats many of the React Native components you've learned about, such as View, Text, and TextInput, as higher-level abstractions that map to HTML elements, such as div, span, and input, thus allowing you to build a web app that runs in a browser from your React Native code. Now, if you're like me, your initial reaction might be: but why? We already have React for the web.
It's called... React! However, where React Native Web shines over React is in its ability to share components between your mobile app and the web, because you're still working with the same basic React Native APIs. Learn more about React Native Web at https://github.com/necolas/react-native-web.

Configuring React Native Web

React Native Web can be installed into your existing React Native project just like any other npm dependency:

npm install --save react react-native-web

Depending on the versions of React Native and React Native Web you've installed, you may encounter conflicting peer dependencies on React. This may require manually adjusting which version of React Native or React Native Web is installed. Sometimes, just deleting the node_modules folder and rerunning npm install does the trick. From there, you'll need some additional tools to build the web bundle. In this example, we'll use webpack and some related tooling:

npm install webpack babel-loader babel-preset-react babel-preset-es2015 babel-preset-stage-1 webpack-validator webpack-merge --save
npm install webpack-dev-server --save-dev

Next, create a webpack.config.js in the root of the project:

const webpack = require('webpack');
const validator = require('webpack-validator');
const merge = require('webpack-merge');

const target = process.env.npm_lifecycle_event;
let config = {};

const commonConfig = {
  entry: {
    main: './index.web.js'
  },
  output: {
    filename: 'app.js'
  },
  resolve: {
    alias: {
      'react-native': 'react-native-web'
    }
  },
  module: {
    loaders: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        loader: 'babel',
        query: {
          presets: ['react', 'es2015', 'stage-1']
        }
      }
    ]
  }
};

switch (target) {
  case 'web:prod':
    config = merge(commonConfig, {
      devtool: 'source-map',
      plugins: [
        new webpack.DefinePlugin({
          'process.env.NODE_ENV': JSON.stringify('production')
        })
      ]
    });
    break;
  default:
    config = merge(commonConfig, {
      devtool: 'eval-source-map'
    });
    break;
}

module.exports = validator(config);

Add the following two entries to the scripts section of package.json:

"web:dev": "webpack-dev-server --inline --hot",
"web:prod": "webpack -p"

Next, create an index.html file in the root of the project:

<!DOCTYPE html>
<html>
  <head>
    <title>RNNYT</title>
    <meta charset="utf-8" />
    <meta content="initial-scale=1,width=device-width" name="viewport" />
  </head>
  <body>
    <div id="app"></div>
    <script type="text/javascript" src="/app.js"></script>
  </body>
</html>

And, finally, add an index.web.js file to the root of the project:

import React, { Component } from 'react';
import {
  View,
  Text,
  StyleSheet,
  AppRegistry
} from 'react-native';

class App extends Component {
  render() {
    return (
      <View style={styles.container}>
        <Text style={styles.text}>Hello World!</Text>
      </View>
    );
  }
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: '#efefef',
    alignItems: 'center',
    justifyContent: 'center'
  },
  text: {
    fontSize: 18
  }
});

AppRegistry.registerComponent('RNNYT', () => App);
AppRegistry.runApplication('RNNYT', { rootTag: document.getElementById('app') });

To run the development build, we'll run the webpack dev server by executing the following command:

npm run web:dev

web:prod can be substituted to create a production-ready build. While developing, you can add React Native Web-specific code much like you can with iOS and Android, by using Platform.OS === 'web' or by creating custom *.web.js components.

React Native Web still feels pretty early days. Not every component and API is supported, and the HTML that's generated looks a bit rough for my tastes.
While developing with React Native Web, I think it helps to keep the right mindset. That is, think of it as building a React Native mobile app, not a website. Otherwise, you may find yourself reaching for web-specific solutions that aren't appropriate for the technology.

React Native plugin for Universal Windows Platform

Announced at the Facebook F8 conference in April 2016, the React Native plugin for Universal Windows Platform (UWP) lets you author React Native apps for Windows 10 desktop, Windows 10 Mobile, and Xbox One. Learn more about the React Native plugin for UWP at https://github.com/ReactWindows/react-native-windows.

You'll need to be running Windows 10 in order to build UWP apps. You'll also need to follow the React Native documentation for configuring your Windows environment for building React Native apps. If you're not concerned with building Android apps on Windows, you can skip installing Android Studio. The plugin itself also has a few additional requirements. You'll need to be running at least npm 3.x and to install Visual Studio 2015 Community (not to be confused with Visual Studio Code). Thankfully, the Community version is free to use. The UWP plugin docs also tell you to install the Windows 10 SDK Build 10586; however, I found it's easier to do that from within Visual Studio once we've created the app, so we can save that part for later.

Configuring the React Native plugin for UWP

I won't walk you through every step of the installation; the UWP plugin docs detail the process well enough. Once you've satisfied the requirements, start by creating a new React Native project as normal:

react-native init RNWindows
cd RNWindows

Next, install and initialize the UWP plugin:

npm install --save-dev rnpm-plugin-windows
react-native windows

Running react-native windows will actually create a windows directory inside your project containing a Visual Studio solution file. If this is your first time installing the plugin, I recommend opening the solution (.sln) file with Visual Studio 2015. Visual Studio will then ask you to download several dependencies, including the latest Windows 10 SDK. Once Visual Studio has installed all the dependencies, you can run the app either from within Visual Studio or by running the following command:

react-native run-windows

Take a look at the following screenshot:

React Native macOS

Much as the name implies, React Native macOS allows you to create macOS desktop applications using React Native. This project works a little differently from React Native Web and the React Native plugin for UWP. As best I can tell, since React Native macOS requires its own custom CLI for creating and packaging applications, you are not able to build a macOS app and a mobile app from the same project. Learn more about React Native macOS at https://github.com/ptmt/react-native-macos.

Configuring React Native macOS

Much like you did with the React Native CLI, begin by installing the custom CLI globally, using the following command:

npm install react-native-macos-cli -g

Then, use it to create a new React Native macOS app by running the following command:

react-native-macos init RNDesktopApp
cd RNDesktopApp

This will set you up with all the required dependencies, along with an entry point file, index.macos.js. There is no CLI command to spin up the app, so you'll need to open the Xcode project and run it manually.
To open the Xcode project, run the following command:

open macos/RNDesktopApp.xcodeproj

The documentation is pretty limited, but there is a nice UIExplorer app that can be downloaded and run to give you a good feel for what's available. While on some level it's unfortunate that your macOS app cannot live alongside your iOS and Android code, I cannot think of a use case that would call for such a thing. That said, I was delighted with how easy it was to get this project up and running.

Summary

I think it's fair to say that React Native is moving quickly. With a new version released roughly every two weeks, I've lost count of how many versions have passed by in the course of writing this book. I'm willing to bet React Native has probably bumped a version or two from the time you started reading this book until now. So, as much as I'd love to wrap up by saying you now know everything possible about React Native, sadly that isn't the case.

References

Let me leave you with a few valuable resources to continue your journey of learning and building apps with React Native:

- React Native Apple TV is a fork of React Native for building apps for Apple's tvOS. For more information, refer to https://github.com/douglowder/react-native-appletv. (Note that preliminary tvOS support has appeared in early versions of React Native 0.36.)
- React Native Ubuntu is another fork of React Native for developing React Native apps on Ubuntu, for desktop Ubuntu and Ubuntu Touch. For more information, refer to https://github.com/CanonicalLtd/react-native
- JS.Coach is a collection of community-favorite components and plugins for all things React, React Native, webpack, and related tools. For more information, refer to https://js.coach/react-native
- Exponent is described as "Rails for React Native". It supports additional system functionality and UI components beyond what's provided by React Native. It will also let you build your apps without needing to touch Xcode or Android Studio. For more information, refer to https://getexponent.com/
- React Native Elements is a cross-platform UI toolkit for React Native. You can think of it as Bootstrap for React Native. For more information, refer to https://github.com/react-native-community/react-native-elements
- The Use React Native site is how I keep up with React Native releases and news in the React Native space. For more information, refer to http://www.reactnative.com/
- React Native Radio is a fantastic podcast hosted by Nader Dabit and a panel of hosts who interview other developers contributing to the React Native community. For more information, refer to https://devchat.tv/react-native-radio
- React Native Newsletter is an occasional newsletter curated by a team of React Native enthusiasts. For more information, refer to http://reactnative.cc/
- And, finally, Dotan J. Nahum maintains an amazing resource titled Awesome React Native, which includes articles, tutorials, videos, and well-tested components you can use in your next project. For more information, refer to https://github.com/jondot/awesome-react-native
Django 1.2 E-commerce: Generating PDF Reports from Python using ReportLab

Packt
19 May 2010
ReportLab is an open source project available at http://www.reportlab.org. It is a very mature tool and includes binaries for several platforms as well as source code. It also contains extension code written in C, so it's relatively fast. It is possible for ReportLab to insert PNG and GIF image files into PDF output, but in order to do so we must have the Python Imaging Library (PIL) installed. We will not require this functionality in this article, but if you need it for a future project, see the PIL documentation for setup instructions.

The starting point in the ReportLab API is drawing to canvas objects. Canvas objects are exactly what they sound like: blank slates that are accessible using various drawing commands that paint graphics, images, and words. This is sometimes referred to as a low-level drawing interface, because generating output is often tedious. If we were creating anything beyond basic reporting output, we would likely want to build our own framework or set of routines on top of these low-level drawing functions.

Drawing to a canvas object is a lot like working with the old LOGO programming language. It has a cursor that we can move around and draw points from one position to another. Mostly, these drawing functions work with two-dimensional (x, y) coordinates to specify starting and ending positions.

This two-dimensional coordinate system in ReportLab is different from the typical system used in many graphics applications. It is the standard Cartesian plane, whose origin (x and y coordinates both equal to 0) begins in the lower-left hand corner instead of the typical upper-left hand corner. This coordinate system is used in most mathematics courses, but computer graphics tools, including HTML and CSS layouts, typically use a coordinate system where the origin is in the upper-left.

ReportLab's low-level interface also includes functions to render text to a canvas, with support for different fonts and colors. The text routines we will see, however, may surprise you with their relative crudeness. For example, word-wrapping and other typesetting operations are not automatically implemented. ReportLab includes a more advanced set of routines called PLATYPUS, which can handle page layout and typography. Most low-level drawing tools do not include this functionality by default (hence the name "low-level").

This low-level drawing interface is called pdfgen and is located in the reportlab.pdfgen module. The ReportLab User's Guide includes extensive information about its use, and a separate API reference is also available.

The ReportLab canvas object is designed to work directly on files. We can create a new canvas from an existing open file object or by simply passing in a file name. The canvas constructor takes as its first argument the filename or an open file object. For example:

from reportlab.pdfgen import canvas
c = canvas.Canvas("myreport.pdf")

Once we have obtained a canvas object, we can access the drawing routines as methods on the instance. To draw some text, we can call the drawString method:

c.drawString(250, 250, "Ecommerce in Django")

This command moves the cursor to coordinates (250, 250) and draws the string "Ecommerce in Django". In addition to drawing strings, the canvas object includes methods to create rectangles, lines, circles, and other shapes.

Because PDF was originally designed for printed output, consideration needs to be made for page size.
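To see a few of these primitives together, here is a short sketch; the font name, coordinates, and output filename are arbitrary choices of ours, but setFont, setFillColorRGB, line, circle, rect, showPage, and save are all standard pdfgen canvas methods:

from reportlab.pdfgen import canvas

c = canvas.Canvas("primitives.pdf")

# Text in a chosen font, size, and fill color
c.setFont("Helvetica-Bold", 14)
c.setFillColorRGB(0.2, 0.2, 0.6)
c.drawString(72, 720, "Ecommerce in Django")

# A few basic shapes; coordinates count in points from the lower-left corner
c.line(72, 700, 300, 700)
c.circle(100, 640, 25)
c.rect(150, 615, 100, 50, stroke=1, fill=0)

c.showPage()  # finish the current page
c.save()      # write primitives.pdf to disk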
Page size refers to the size of the PDF document if it were to be output to paper. By default, ReportLab uses the A4 standard, but it supports most popular page sizes, including letter, the typical size used in the US. The various page sizes are defined in reportlab.lib.pagesizes. To change this setting for our canvas object, we pass the pagesize keyword argument to the canvas constructor:

from reportlab.lib.pagesizes import letter
c = canvas.Canvas('myreport.pdf', pagesize=letter)

Because the units passed to our drawing functions, like rect, will vary according to what page size we're using, we can use ReportLab's units module to precisely control the output of our drawing methods. Units are stored in reportlab.lib.units. We can use the inch unit to draw shapes of a specific size:

from reportlab.lib.units import inch
c.rect(1*inch, 1*inch, 0.5*inch, 1*inch)

The above code fragment draws a rectangle, starting one inch from the bottom and one inch from the left of the page, with sides of 0.5 inches and one inch, as shown in the following screenshot:

Not particularly impressive, is it? As you can see, using the low-level library routines requires a lot of work for very little result. Using these routines directly is tedious. They are certainly useful and required for some tasks, and they can act as building blocks for your own, more sophisticated routines. Building our own library of routines would still be a lot of work, though. Fortunately, ReportLab includes a built-in high-level interface for creating sophisticated report documents quickly. These routines are called PLATYPUS; we mentioned them earlier when talking about typesetting text, but they can do much more.

PLATYPUS is an acronym for "Page Layout and Typography Using Scripts". The PLATYPUS code is located in the reportlab.platypus module. It allows us to create some very sophisticated documents, suitable for reporting systems in an e-commerce application. Using PLATYPUS means we don't have to worry about things such as page margins, font sizes, and word-wrapping; the bulk of this heavy lifting is taken care of by the high-level routines. We can still access the low-level canvas routines if we wish, but instead we build a document from a template object, defining and adding the elements (such as paragraphs, tables, spacers, and images) to the document container.

The following example generates a PDF report listing all the products in our Product inventory:

from reportlab.platypus.doctemplate import SimpleDocTemplate
from reportlab.platypus import Paragraph, Spacer
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.lib.units import inch

styles = getSampleStyleSheet()
doc = SimpleDocTemplate("products.pdf")
Catalog = []

header = Paragraph("Product Inventory", styles['Heading1'])
Catalog.append(header)

style = styles['Normal']
for product in Product.objects.all():
    p = Paragraph("%s" % product.name, style)
    Catalog.append(p)
    s = Spacer(1, 0.25*inch)
    Catalog.append(s)

doc.build(Catalog)

The previous code generates a PDF file called products.pdf that contains a header and a list of our product inventory. The output is displayed in the accompanying screenshot.
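PLATYPUS can also lay out tabular data, which is often a better fit for inventory reports than a flat list of paragraphs. As a sketch (the sample rows here are invented; in the article's context they would come from Product.objects.all(), while Table and TableStyle are standard reportlab.platypus classes):

from reportlab.lib import colors
from reportlab.lib.units import inch
from reportlab.platypus import SimpleDocTemplate, Table, TableStyle

# Invented sample rows standing in for real Product data
data = [
    ["Name", "Price"],
    ["Widget", "$9.99"],
    ["Gadget", "$19.99"],
]

table = Table(data, colWidths=[3 * inch, 1 * inch])
table.setStyle(TableStyle([
    ("BACKGROUND", (0, 0), (-1, 0), colors.lightgrey),  # shade the header row
    ("GRID", (0, 0), (-1, -1), 0.5, colors.grey),       # draw cell borders
]))

doc = SimpleDocTemplate("product_table.pdf")
doc.build([table])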
Textures in Blender

Packt
22 Oct 2009
Procedural Textures vs. Bitmap Textures

Blender has basically two types of textures: procedural textures and bitmap textures. Each has both positive and negative points; which one is best will depend on your project needs.

Procedural: This kind of texture is generated by the software at rendering time, just like vector lines. This means that it won't depend on any image file. The best thing about this type of texture is that it is resolution independent, so we can render it at high resolutions with minimal loss of quality. The downside is that it's harder to achieve realistic results with it.

Bitmap: To use this kind of texture, we need an image file, such as a JPEG, PNG, or TGA file. The good thing about these textures is that we can achieve very realistic materials with them quickly. On the other hand, we must find the texture file before using it. And there is more: if you are creating a high-resolution render, the texture file must be big.

Texture Library

Do you remember the way we organized materials? We can do exactly the same thing with textures. Besides setting names and storing the Blender files to import and use again later, collecting bitmap textures is another important point. Even if you don't start right away, it's important to know where to look for textures. Here is a small list of websites that provide free texture downloads:

http://www.blender-textures.org
http://www.cgtextures.com
http://blender-archi.tuxfamily.org/textures

Applying Textures

To use a texture, we must apply a material to an object, and then use the texture with this material; we always use the texture inside a material. For instance, to make a plane that simulates a marble floor, we have to use a texture and set up how the surface will react to light and texture, which can give the surface a proper marble look using any texture.

To do that, we use the texture panel, which is located right next to the materials button. We can use a keyboard shortcut to open this panel: just hit F6. There is a way to add a texture from the material panel as well, with a menu called Texture, but the best way to get all the options is to add the texture on the texture panel.

On this panel, we see a lot of buttons, which represent the texture channels. Each of these channels can hold a texture, and the final texture will be a mix of all the channels. If we have a texture in channel 1 and another texture in channel 2, these textures will be blended and represented in the material.

Before adding a new texture, we must select a channel by clicking on one of them. Usually the first channel is selected, but if you want to use another one, just click on it. When the channel is selected, click the Add New button to add a new texture.

The texture controls are very similar to the material controls. We can set a name for the texture at the top, or erase it if we don't want it anymore. With the selector, we can also choose a previously created texture: just click and select.

Now comes the fun part. Having added a texture, we have to choose a texture type by clicking on the texture type combo box. There are a lot of textures, but most of them are procedural textures and we won't use them much. The only texture type that isn't procedural is the Image type. We can use textures like Clouds and Wood to create some effects and give surfaces a more complex look, or even create a grass texture with some dirt on it.
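The article describes this setup through the Blender 2.4x interface, but the same steps can be scripted. Here is a rough sketch using the later Blender Internal Python API (2.5 to 2.7x; names differed in the 2.4x era, and the Internal engine was removed in 2.8), with an invented material name and file path:

import bpy

# Create a material and an image-type texture (Blender Internal API)
mat = bpy.data.materials.new(name="MarbleFloor")
tex = bpy.data.textures.new(name="MarbleImage", type='IMAGE')
tex.image = bpy.data.images.load("/path/to/marble.png")  # hypothetical path

# Texture slots are the script-side equivalent of the channel buttons
slot = mat.texture_slots.add()
slot.texture = tex

# Assign the material to the currently selected object
bpy.context.active_object.data.materials.append(mat)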
But most of the time, the texture type we will be using is the Image type. Each texture has its own set of parameters that determine how it will look on the object. If we add a Wood texture, its configuration parameters are shown at the right; if we choose Clouds as the texture type, the parameters shown will be completely different. The Image texture type is no different: it has its own setup. This is the control panel:

To show how to set up a texture, let's use an image file that represents a wood floor, and a plane. We can apply the texture to this plane and set up how it's going to look, testing all the parameters. The first thing to do is assign a material to the plane and add a texture to this material, choosing Image as the texture type. This shows the configuration options for this kind of texture.

To apply the image as a texture to the plane, just click the Load button on the Image menu. When we hit this button, we can select the image file. Locate the image file, and the texture will be applied. If we want more control over how this texture is organized and placed on the plane, we need to learn how the controls work. Every time you change the setup of a texture, the change is shown in the preview window; use it a lot to make good changes.

Here is a list of what some of the buttons do for the texture:

UseAlpha: If the texture has an alpha channel, press this button so that Blender calculates the channel. An image has an alpha channel when some kind of transparency is stored in the image. For instance, a .png file with a transparent background has an alpha channel. We can use this to put a logo texture on a bottle, or to add an image of a tree or person to a plane.
Rot90: With this option, we can rotate the texture by 90 degrees.
Repeat: Every texture must be distributed over the object's surface, and repeating the texture in lines and columns is the default way to do that.
Extended: If this button is pressed, the texture will be stretched to fit the whole surface area of the object.
Clip: With this option, the texture will be cropped, and we can show only a part of it. To adjust which parts of the texture are displayed, use the Min/Max X/Y options.
Xrepeat / Yrepeat: These options determine how many times a texture is repeated, when the Repeat option is turned on.
Normal Map: If the texture will be used to create normal maps, press this button. These are textures used to change the face normals of an object.
Still: With this button selected, we declare that the image used as the texture is a still image. This option is marked by default.
Movie: If you need to use a movie file as a texture, press this button. This is very useful if we need to make something like a theatre projection screen or a TV screen.
Sequence: We can use a sequence of images as a texture too; just press this button. It works the same way as with a movie file.

There are a few more parameters, such as the Reload button: if your texture file changes in any way, press this button for the changes to be picked up by Blender. The X button erases the texture; use it if you need to select another image file.

When we add a texture to any material, an external link to the file is created. This link can be absolute or relative.
When we add a texture called "wood.png" that is located at the root of your main hard disk, such as C:, a link to this texture is created as "c:\wood.png", so every time you open the file, the software will look for the texture at exactly that place. This is an absolute link. We can use a relative link as well: for instance, when we add a texture located in the same folder as our scene, a relative link is created.

Whenever we use an absolute link and have to move the .blend file to another computer, the texture file must go with it. To embed the image file in the .blend, just press the gift package icon. To save all the textures used in a scene, access the File menu and use the Pack Data option; it will embed all the texture files in the source blend file.

Mapping

Every time we add a texture to an object, we must choose a mapping type to set up how the texture will be applied to that object. For instance, if we have a wall and apply a wood texture, it must be placed like wallpaper. But for cylindrical or spherical objects, or even walls, we have to set it up in a way that makes the texture adapt to the topology of the surface, to avoid effects such as a stretched texture.

To set this up, we use the mapping options, located on the Map Input menu. On this menu, we can choose between four basic mapping types: Cube, Sphere, Flat, and Tube. If you have a wall, choose the option that matches the topology of the model; in this case, the best choice is Cube. Another important option here is the UV button, which allows us to use another very powerful type of texturing, based on UV mapping.

Normal Map

This is a special and useful type of texture that can change the normals of surfaces. If we have a floor and a texture of ceramic tiles, the surface can be represented with the smaller details of that tiling using this kind of map. It's almost like modeling the tiles, except that everything is created with just a normal map.

To use this kind of texture, we must turn on the Nor button on the Map To menu. When this button is turned on, we can use the Nor slider to set the intensity of the normal displacement. It works based on the pixel color of the texture: with white pixels, the normals are not affected, and with black pixels, the normals are fully translated.

If you want to optimize the normal mapping, using a texture prepared specifically for this purpose is much recommended; some texture libraries even have these normal maps ready for use. They can be called bump maps too. Here is an example of how we can use them: we take a stone texture, and a tiled texture with a white background and black lines. The stone texture is applied to the floor, and the tiled texture is used to create the tiling for the floor.

The setup for that is really simple. Just apply the tiled texture in a lower channel and turn off the Col button for this channel. Turn on the Nor button, and this texture will affect only the normals, not the material color. Any image can be used as a normal map, but we will always get better results with a greyscale image prepared to be used as a normal map. Now, just set the Nor intensity with the slider and see the render.

Turn on positive and turn on negative: Some of the buttons on the Map To menu can be turned on with positive or negative values. For instance, the Nor option can be turned on with one click; if we click it again, the Nor text turns yellow. This means that Nor is inverted, with negative values.
Some other buttons may present the same option.
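Scripted against the same Blender Internal API assumed above (2.5 to 2.7x), the channel setup just described, with color influence off and normal influence on, looks roughly like this; the material name and file path are hypothetical:

import bpy

mat = bpy.data.materials["MarbleFloor"]  # hypothetical material from earlier
tiles = bpy.data.textures.new(name="TilesBump", type='IMAGE')
tiles.image = bpy.data.images.load("/path/to/tiles.png")  # hypothetical path

slot = mat.texture_slots.add()        # a lower channel than the color texture
slot.texture = tiles
slot.use_map_color_diffuse = False    # the Col button, turned off
slot.use_map_normal = True            # the Nor button, turned on
slot.normal_factor = 0.5              # the Nor slider; negative values invert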