
How-To Tutorials - Application Development

357 Articles

Introducing Weld, a runtime written in Rust and LLVM for cross-library optimizations

Bhagyashree R
24 Sep 2019
5 min read
Weld is an open-source Rust project for improving the performance of data-intensive applications. It is an interface and runtime that can be integrated into existing frameworks, including Spark, TensorFlow, Pandas, and NumPy, without changing their user-facing APIs.

The motivation behind Weld

Data analytics applications today often require developers to combine various functions from different libraries and frameworks to accomplish a particular task. For instance, a typical Python ecosystem application selects some data using Spark SQL, transforms it using NumPy and Pandas, and trains a model with TensorFlow. This improves developers' productivity, as they are taking advantage of functions from high-quality libraries. However, these functions are usually optimized in isolation, which is not enough to achieve the best application performance.

Weld aims to solve this problem by providing an interface and runtime that can optimize across data-intensive libraries and frameworks while preserving their user-facing APIs. In an interview with Federico Carrone, a Tech Lead at LambdaClass, Weld's main contributor, Shoumik Palkar, shared, "The motivation behind Weld is to provide bare-metal performance for applications that rely on existing high-level APIs such as NumPy and Pandas. The main problem it solves is enabling cross-function and cross-library optimizations that other libraries today don't provide."

How Weld works

Weld serves as a common runtime that allows libraries from different domains, such as SQL and machine learning, to represent their computations in a common functional intermediate representation (IR). This IR is then optimized by a compiler optimizer and JIT'd to efficient machine code for diverse parallel hardware. Weld performs a wide range of optimizations on the IR, including loop fusion, loop tiling, and vectorization. "Weld's IR is natively parallel, so programs expressed in it can always be trivially parallelized," said Palkar.

When Weld was first introduced, it was mainly used for cross-library optimizations. Over time, however, people have started to use it for other applications as well: building JITs or new physical execution engines for databases and analytics frameworks, speeding up individual libraries, targeting new kinds of parallel hardware through the IR, and more.

To evaluate Weld's performance, the team integrated it with popular data analytics frameworks including Spark, NumPy, and TensorFlow. This prototype showed up to 30x improvements over the native framework implementations, while cross-library optimizations between Pandas and NumPy improved performance by up to two orders of magnitude.

Source: Weld
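Loop fusion, the first of those IR optimizations, is easy to picture with a small hand-written sketch. The following is plain Rust of our own, not Weld's IR or API: it shows the kind of rewrite Weld's optimizer applies automatically across library boundaries, fusing two logical passes over the data into one.

```rust
// Two logical passes (think: a "Pandas" transform feeding a "NumPy"
// aggregation) versus the fused form Weld's optimizer would produce.
// Illustration only; this is ordinary Rust, not Weld.
fn unfused(data: &[f64]) -> f64 {
    // First pass: materialize an intermediate vector.
    let squared: Vec<f64> = data.iter().map(|x| x * x).collect();
    // Second pass: aggregate over that intermediate.
    squared.iter().sum()
}

fn fused(data: &[f64]) -> f64 {
    // One pass, no intermediate allocation: map and sum are fused.
    data.iter().map(|x| x * x).sum()
}

fn main() {
    let data = vec![1.0, 2.0, 3.0];
    assert_eq!(unfused(&data), fused(&data));
    println!("sum of squares: {}", fused(&data)); // prints 14
}
```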
Why Rust and LLVM were chosen for its implementation

The first iteration of Weld was implemented in Scala because of its algebraic data types, powerful pattern matching, and large ecosystem. However, it had some shortcomings. Palkar shared in the interview, "We moved away from Scala because it was too difficult to embed a JVM-based language into other runtimes and languages." Scala had a managed runtime and a clunky build system, and its JIT compilations were quite slow for larger programs. Because of these shortcomings, the team decided to redesign the JIT compiler, core API, and runtime from the ground up. They were searching for a language that was fast and safe, had no managed runtime, and provided a rich standard library, functional paradigms, a good package manager, and a strong community. They zeroed in on Rust, which meets all of these requirements.

Rust provides a minimal runtime that requires essentially no setup, and it can be easily embedded into other languages such as Java and Python. To make development easier, it has high-quality packages, known as crates, and functional paradigms such as pattern matching. Lastly, it is backed by a great community.

Read also: "Rust is the future of systems programming, C is the new Assembly": Intel principal engineer, Josh Triplett

Explaining why they chose LLVM, Palkar said in the interview, "We chose LLVM because it's an open-source compiler framework that has wide use and support; we generate LLVM directly instead of C/C++ so we don't need to rely on the existence of a C compiler, and because it improves compilation times (we don't need to parse C/C++ code)."

In a discussion on Hacker News, many users listed other Weld-like projects that developers may find useful. A user commented, "Also worth checking out OmniSci (formerly MapD), which features an LLVM query compiler to gain large speedups executing SQL on both CPU and GPU." Users also talked about Numba, an open-source JIT compiler that translates Python functions to optimized machine code at runtime with the help of the LLVM compiler library. "Very bizarre there is no discussion of numba here, which has been around and used widely for many years, achieves faster speedups than this, and also emits an LLVM IR that is likely a much better starting point for developing a 'universal' scientific computing IR than doing yet another thing that further complicates it with fairly needless involvement of Rust," a user added.

To know more about Weld, check out the full interview on Medium. Also, watch this RustConf 2019 talk by Shoumik Palkar: https://www.youtube.com/watch?v=AZsgdCEQjFo&t
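The embedding point is also worth making concrete. The usual way to embed Rust in languages such as Python or Java is to expose a C ABI from a cdylib crate and load the resulting shared library via ctypes/cffi or JNA/JNI. The sketch below is a generic illustration of that mechanism; the function name and signature are invented for the example and are not part of Weld's API.

```rust
// Build with `crate-type = ["cdylib"]` in Cargo.toml, then load the
// shared library from Python (ctypes/cffi) or Java (JNA/JNI).
// Hypothetical function for illustration only -- not Weld's API.
#[no_mangle]
pub extern "C" fn sum_of_squares(ptr: *const f64, len: usize) -> f64 {
    // SAFETY: the caller must pass a valid pointer to `len` f64 values.
    let data = unsafe { std::slice::from_raw_parts(ptr, len) };
    data.iter().map(|x| x * x).sum()
}
```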
Other news in Programming

Darklang available in private beta
GNU community announces 'Parallel GCC' for parallelism in real-world compilers
TextMate 2.0, the text editor for macOS releases


Perform Advanced Programming with Rust

Packt Editorial Staff
23 Apr 2018
7 min read
Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety. In this tutorial we focus on equipping you with recipes for programming with Rust, and on helping you define expressions, constants, and variable bindings. Let us get started.

Defining an expression

An expression, in simple words, is a statement in Rust with which we can create logic and workflows in our programs and applications. We will deep dive into understanding expressions and blocks in Rust.

Getting ready

We will require the Rust compiler and any text editor for coding.

How to do it...

Follow the ensuing steps:

Create a file named expression.rs with the next code snippet. Declare the main function and create the variables x_val, y_val, and z_val:

```rust
// main point of execution
fn main() {
    // expression
    let x_val = 5u32;

    // y block
    let y_val = {
        let x_squared = x_val * x_val;
        let x_cube = x_squared * x_val;

        // This expression will be assigned to `y_val`
        x_cube + x_squared + x_val
    };

    // z block
    let z_val = {
        // The semicolon suppresses this expression and `()` is assigned to `z_val`
        2 * x_val;
    };

    // printing the final outcomes
    println!("x is {:?}", x_val);
    println!("y is {:?}", y_val);
    println!("z is {:?}", z_val);
}
```

Upon running the code, the output reports that x is 5, y is 155, and z is ().

How it works...

All the statements that end in a semicolon (;) are expressions. A block is a statement that holds a set of statements and variables inside a {} scope. The last statement of a block is the value that will be assigned to the variable; when we close that last statement with a semicolon, the block instead returns () to the variable.

In the preceding recipe, the first statement binds the variable x_val to the value 5. Second, y_val is a block that performs certain operations on the variable x_val and a few more variables, x_squared and x_cube, which contain the squared and cubic values of x_val, respectively. The variables x_squared and x_cube are dropped as soon as the block's scope ends. The block where we declare the z_val variable has a semicolon on its last statement, which suppresses the expression and assigns () to z_val. We print all the declared variables' values at the end.

Defining constants

Rust provides the ability to assign and maintain constant values across Rust code. These values are very useful when we want to maintain a global count, such as a timer threshold, for example. Rust provides two keywords for declaring constant values, const and static; this recipe uses const. You will learn how to deliver constant values globally in this recipe.

Getting ready

We will require the Rust compiler and any text editor for coding.

How to do it...

Follow these steps:

Create a file named constant.rs with the next code snippet.
Declare the global UPPERLIMIT using const:

```rust
// Global variables are declared outside the scopes of other functions
const UPPERLIMIT: i32 = 12;
```

Create the is_big function, accepting a single integer as input:

```rust
// function to check if the number is big
fn is_big(n: i32) -> bool {
    // Access constant in some function
    n > UPPERLIMIT
}
```

In the main function, call the is_big function and perform the decision-making statement:

```rust
fn main() {
    let random_number = 15;

    // Access constant in the main thread
    println!("The threshold is {}", UPPERLIMIT);
    println!("{} is {}", random_number,
             if is_big(random_number) { "big" } else { "small" });

    // Error! Cannot modify a `const`.
    // UPPERLIMIT = 5;
}
```

Upon running the preceding code, the output reports that the threshold is 12 and that 15 is big.

How it works...

The workflow of the recipe is fairly simple: we have a function that checks whether an integer is greater than a fixed threshold. The UPPERLIMIT constant defines the fixed threshold for the function; its value will not change in the code, and it is accessible throughout the program. We assigned 15 to random_number and passed it to is_big; we then get a boolean output, either true or false, as the return type of the function is bool. The answer in our case is true, as 15 is bigger than 12, the UPPERLIMIT value set as the constant. We performed this condition check using the if...else statement in Rust. We cannot change the UPPERLIMIT value; when attempted, it will throw an error, which is commented out in the code section. Constants declare constant values; they represent a value, not a memory address. Their syntax is:

```rust
const NAME: type = value;
```

Performing variable bindings

Variable binding refers to how a variable in Rust code is bound to a type. We will cover pattern, mutability, scope, and shadowing concepts in this recipe.

Getting ready

We will require the Rust compiler and any text editor for coding.

How to do it...

Perform the following step:

Create a file named binding.rs and enter a code snippet that declares the main function and different variables:

```rust
fn main() {
    // Simplest variable binding
    let a = 5;

    // pattern
    let (b, c) = (1, 2);

    // type annotation
    let x_val: i32 = 5;

    // shadow example
    let y_val: i32 = 8;
    {
        println!("Value assigned when entering the scope : {}", y_val); // Prints "8".
        let y_val = 12;
        println!("Value modified within scope : {}", y_val); // Prints "12".
    }
    println!("Value which was assigned first : {}", y_val); // Prints "8".

    let y_val = 42;
    println!("New value assigned : {}", y_val); // Prints "42".
}
```

Upon running the preceding code, the four printed values are 8, 12, 8, and 42, in that order.

How it works...

The let statement is the simplest way to create a binding, where we bind a variable to a value, as is the case with variable a. To create a pattern with the let statement, we assign the pattern values to b and c in the same pattern. Rust is a statically typed language: we can specify our types during an assignment, and at compile time they are checked for compatibility. Rust also has type inference, which identifies a variable's type automatically at compile time. The variable_name: type format is how we explicitly state a type in Rust. We read the assignment in the following format: x_val is a binding with the type i32 and the value 5.
Here, we declared x_val as a 32-bit signed integer. However, Rust has many different primitive integer types: they begin with i for signed integers and u for unsigned integers, and the possible integer sizes are 8, 16, 32, and 64 bits.

Variable bindings have a scope, and a variable is only alive within its scope; once it goes out of scope, its resources are freed. A block is a collection of statements enclosed by {}. Function definitions are also blocks! We use a block to illustrate the Rust feature that allows variable bindings to be shadowed: a later variable binding can be made with the same name, which in our case is y_val. It goes through a series of value changes, as each new binding that is currently in scope overrides the previous binding. Shadowing enables us to rebind a name to a value of a different type (see the short sketch below). This is the reason why we are able to assign new values to the immutable y_val variable both inside and outside the block.

This article is an extract taken from Rust Cookbook, written by Vigneshwer Dhinakaran. You will find more than 80 practical recipes written in Rust that will allow you to use the code samples right away in your existing applications.
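To make that last point concrete, here is a minimal sketch of our own (not from the book): shadowing rebinds the same name to a value of a different type, something a mut binding cannot do.

```rust
fn main() {
    let spaces = "   ";        // `spaces` is a &str here
    let spaces = spaces.len(); // ...and a usize here: same name, new type
    println!("{}", spaces);    // prints 3
}
```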
Read More

20 ways to describe programming in 5 words
Top 5 programming languages for crunching Big Data effectively


Node.js 13 releases with an upgraded V8, full ICU support, stable Worker Threads API and more

Fatema Patrawala
23 Oct 2019
4 min read
Yesterday was a super exciting day for Node.js developers, as the Node.js foundation announced the transition of Node.js 12 to Long Term Support (LTS) alongside the release of Node.js 13. As per the team, Node.js 12 becomes the newest LTS release, along with versions 10 and 8. This release marks the transition of Node.js 12.x into LTS with the codename 'Erbium'. The 12.x release line now moves into "Active LTS" and will remain so until October 2020; it will then move into "Maintenance" until its end of life in April 2022.

The new Node.js 13 release delivers faster startup and better default heap limits. It includes updates to V8, TLS, and llhttp, and new features like a diagnostic report, a bundled heap dump capability, and updates to Worker Threads, N-API, and more.

Key features in Node.js 13

Let us take a look at the key features included in Node.js 13.

V8 gets an upgrade to V8 7.8

This release is compatible with the new version V8 7.8. This new version of the V8 JavaScript engine brings performance tweaks and improvements that keep Node.js up with the ongoing improvements in the language and runtime.

Full ICU enabled by default in Node.js 13

As of Node.js 13, full-icu is now the default, which means hundreds of other locales are supported out of the box. This will simplify the development and deployment of applications for non-English deployments.

Stable workers API

The Worker Threads API is now a stable feature in both Node.js 12 and Node.js 13. While Node.js already performs well with its single-threaded event loop, there are some use cases where additional threads can be leveraged for better results.

New compiler and platform support

Node.js and V8 continue to embrace newer C++ features and take advantage of newer compiler optimizations and security enhancements. With the release of Node.js 13, the codebase now requires a minimum of version 10 of the OS X development tools and version 7.2 of the AIX operating system. In addition to this, there has been progress on supporting Python 3 for building Node.js applications. Systems that have both Python 2 and Python 3 installed will still be able to use Python 2; however, systems with only Python 3 should now be able to build using Python 3.

Developers discuss pain points in Node.js 13

On Hacker News, users discussed various pain points in Node.js 13 and some of the functionality missing from this release. One of the users commented, "To save you the clicks: Node.js 13 doesn't support top-level await. Node includes V8 7.8, released Sep 27. Top-level await merged into V8 on Sep 24, but didn't make it in time for the 7.8 release." A response to this comment came from the V8 team: "TLA is only in modules. Once node supports modules, it will also have TLA. We're also pushing out a version with 7.9 fairly soonish."

Other users discussed how Node.js performs with TypeScript: "I've been using node with typescript and it's amazing. VERY productive. The key thing is you can do a large refactoring without breaking anything. The biggest challenge I have right now is actually the tooling. Intellij tends to break sometimes. I'm using lerna for a monorepo with sub-modules and it's buggy with regular npm. For example 'npm audit' doesn't work. I might have to migrate to yarn…"

If you are interested to know more about this release, check out the official Node.js blog post as well as the GitHub page for release notes.
The OpenJS Foundation accepts NVM as its first new incubating project since the Node.js Foundation and JSF merger
12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]
5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]
Introducing Node.js 12 with V8 JavaScript engine, improved worker threads, and much more
Google is planning to bring Node.js support to Fuchsia


Extension functions in Kotlin: everything you need to know

Aaron Lazar
08 Jun 2018
8 min read
Kotlin is a rapidly rising programming language. It offers developers the simplicity and effectiveness to develop robust and lightweight applications. Kotlin offers great functional programming support, and one of the best features of Kotlin in this respect is extension functions, hands down! Extension functions are great because they let you extend existing types with new functions. This is especially useful when you're working with Android and you want to add extra functions to the framework classes. In this article, we'll see what extension functions are and how they're a blessing in disguise! This article has been extracted from the book Functional Kotlin, by Mario Arias and Rivu Chakraborty. The book bridges the language gap for Kotlin developers by showing you how to create and consume functional constructs in Kotlin.

```kotlin
fun String.sendToConsole() = println(this)

fun main(args: Array<String>) {
    "Hello world! (from an extension function)".sendToConsole()
}
```

To add an extension function to an existing type, you write the function's name next to the type's name, joined by a dot (.). In our example, we add an extension function (sendToConsole()) to the String type. Inside the function's body, this refers to the instance of the String type (in this extension function, String is the receiver type).

Apart from the dot (.) and this, extension functions have the same syntax rules and features as a normal function. Indeed, behind the scenes, an extension function is a normal function whose first parameter is a value of the receiver type. So, our sendToConsole() extension function is equivalent to the next code:

```kotlin
fun sendToConsole(string: String) = println(string)

sendToConsole("Hello world! (from a normal function)")
```

So, in reality, we aren't modifying a type with new functions. Extension functions are a very elegant way to write utility functions: easy to write, very fun to use, and nice to read, a win-win. This also means that extension functions have one restriction: they can't access private members of this, in contrast with a proper member function that can access everything inside the instance:

```kotlin
class Human(private val name: String)

fun Human.speak(): String = "${this.name} makes a noise" // Cannot access 'name': it is private in 'Human'
```

Invoking an extension function is the same as invoking a normal function: with an instance of the receiver type (which will be referenced as this inside the extension), invoke the function by name.

Extension functions and inheritance

There is a big difference between member functions and extension functions when we talk about inheritance. The open class Canine has a subclass, Dog. A standalone function, printSpeak, receives a parameter of type Canine and prints the content of the result of the function speak(): String:

```kotlin
open class Canine {
    open fun speak() = "<generic canine noise>"
}

class Dog : Canine() {
    override fun speak() = "woof!!"
}

fun printSpeak(canine: Canine) {
    println(canine.speak())
}
```

Open classes with open methods (member functions) can be extended and can alter their behavior: invoking the speak function will act differently depending on which type your instance is. The printSpeak function can be invoked with any instance of a class that is-a Canine, either Canine itself or any subclass:

```kotlin
printSpeak(Canine())
printSpeak(Dog())
```

If we execute this code, the console shows <generic canine noise> followed by woof!!. Although both are Canine, the behavior of speak is different in the two cases, as the subclass overrides the parent implementation.
But with extension functions, many things are different. As in the previous example, Feline is an open class extended by the Cat class, but speak is now an extension function:

```kotlin
open class Feline

fun Feline.speak() = "<generic feline noise>"

class Cat : Feline()

fun Cat.speak() = "meow!!"

fun printSpeak(feline: Feline) {
    println(feline.speak())
}
```

Extension functions don't need to be marked as override, because we aren't overriding anything:

```kotlin
printSpeak(Feline())
printSpeak(Cat())
```

If we execute this code, the console shows <generic feline noise> twice: both invocations produce the same result. Although in the beginning it seems confusing, once you analyze what is happening it becomes clear. We're invoking the Feline.speak() function twice; this is because each parameter that we pass to the printSpeak(Feline) function is a Feline:

```kotlin
open class Primate(val name: String)

fun Primate.speak() = "$name: <generic primate noise>"

open class GiantApe(name: String) : Primate(name)

fun GiantApe.speak() = "${this.name}: <scary 100db roar>"

fun printSpeak(primate: Primate) {
    println(primate.speak())
}

printSpeak(Primate("Koko"))
printSpeak(GiantApe("Kong"))
```

If we execute this code, we see the same behavior as in the previous example, but using the right value for name: the console shows Koko: <generic primate noise> and Kong: <generic primate noise>. Speaking of which, we can reference name with name or with this.name; both are valid.

Extension functions as members

Extension functions can be declared as members of a class. An instance of a class in which an extension function is declared is called the dispatch receiver. The Caregiver open class internally defines extension functions for two different classes, Feline and Primate:

```kotlin
open class Caregiver(val name: String) {

    open fun Feline.react() = "PURRR!!!"

    fun Primate.react() = "*$name plays with ${this@Caregiver.name}*"

    fun takeCare(feline: Feline) {
        println("Feline reacts: ${feline.react()}")
    }

    fun takeCare(primate: Primate) {
        println("Primate reacts: ${primate.react()}")
    }
}
```

Both extension functions are meant to be used inside an instance of Caregiver. Indeed, it is good practice to mark member extension functions as private if they aren't open. In the case of Primate.react(), we are using the name value from Primate and the name value from Caregiver. To access members when there is a name conflict, the extension receiver (this) takes precedence, and to access members of the dispatch receiver, the qualified this syntax must be used. Other members of the dispatch receiver that don't have a name conflict can be used without a qualified this.

Don't get confused by the various meanings of this that we have already covered:

Inside a class, this means the instance of that class.
Inside an extension function, this means the instance of the receiver type, like the first parameter in our utility function with a nice syntax:

```kotlin
class Dispatcher {
    val dispatcher: Dispatcher = this

    fun Int.extension() {
        val receiver: Int = this
        val dispatcher: Dispatcher = this@Dispatcher
    }
}
```

Going back to our Zoo example, we instantiate a Caregiver, a Cat, and a Primate, and we invoke the function Caregiver.takeCare with both animal instances:

```kotlin
val adam = Caregiver("Adam")
val fulgencio = Cat()
val koko = Primate("Koko")

adam.takeCare(fulgencio)
adam.takeCare(koko)
```

If we execute this code, the console shows Feline reacts: PURRR!!! and Primate reacts: *Koko plays with Adam*. Any zoo needs a veterinary surgeon.
The class Vet extends Caregiver:

```kotlin
open class Vet(name: String) : Caregiver(name) {
    override fun Feline.react() = "*runs away from $name*"
}
```

We override the Feline.react() function with a different implementation. We are also using the Vet class's name directly, as the Feline class doesn't have a name property:

```kotlin
val brenda = Vet("Brenda")

listOf(adam, brenda).forEach { caregiver ->
    println("${caregiver.javaClass.simpleName} ${caregiver.name}")
    caregiver.takeCare(fulgencio)
    caregiver.takeCare(koko)
}
```

After which, we get the following output: for Caregiver Adam, the feline reacts with PURRR!!!, while for Vet Brenda, the feline *runs away from Brenda*; in both cases the primate plays with the caregiver in question.

Extension functions with conflicting names

What happens when an extension function has the same name as a member function? The Worker class has a function work(): String and a private function rest(): String. We also have two extension functions with the same signatures, work and rest:

```kotlin
class Worker {

    fun work() = "*working hard*"

    private fun rest() = "*resting*"
}

fun Worker.work() = "*not working so hard*"

fun <T> Worker.work(t: T) = "*working on $t*"

fun Worker.rest() = "*playing video games*"
```

Having an extension function with the same signature as a member function isn't a compilation error, but a warning: Extension is shadowed by a member: public final fun work(): String. It is legal to declare a function with the same signature as a member function, but the member function always takes precedence; the extension function is therefore never invoked. This behavior changes when the member function is private: in that case, the extension function takes precedence. It is also possible to overload an existing member function with an extension function:

```kotlin
val worker = Worker()

println(worker.work())
println(worker.work("refactoring"))
println(worker.rest())
```

On execution, work() invokes the member function, while work(String) and rest() invoke the extension functions.

Extension functions for objects

In Kotlin, objects are a type, therefore they can have functions, including extension functions (among other things, such as extending interfaces). We can add a buildBridge extension function to the object Builder:

```kotlin
object Builder {
}

fun Builder.buildBridge() = "A shiny new bridge"
```

This includes companion objects. The class Designer has two inner objects, the companion object and the Desk object:

```kotlin
class Designer {

    companion object {
    }

    object Desk {
    }
}

fun Designer.Companion.fastPrototype() = "Prototype"

fun Designer.Desk.portfolio() = listOf("Project1", "Project2")
```

Calling these functions works like calling any normal object member function:

```kotlin
Designer.fastPrototype()
Designer.Desk.portfolio().forEach(::println)
```

So there you have it! You now know how to take advantage of extension functions in Kotlin. If you found this tutorial helpful and would like to learn more, head on over to purchase the full book, Functional Kotlin, by Mario Arias and Rivu Chakraborty.

Forget C and Java. Learn Kotlin: the next universal programming language
5 reasons to choose Kotlin over Java
Building chat application with Kotlin using Node.js, the powerful Server-side JavaScript platform


Behavior Scripting in C# and JavaScript for game developers

Packt Editorial Staff
16 Apr 2018
16 min read
The common idea about game behaviors - things like enemy AI, sequences of events, or the rules of a puzzle - is that they are expressed in a scripting language, probably in a simple top-to-bottom recipe form, without using objects or much branching. Behaviour scripts are often associated with an object instance in game code - expressed in an object-oriented language such as C++ or C# - which does the work. In today's post, we will introduce you to new classes and behavior scripts. The details of a new C# behavior and a new JavaScript behavior are also covered. We will further explore:

Wall attack
Declaring public variables
Assigning scripts to objects
Moving the camera

To take your first steps into programming, we will look at a simple example of the same functionality in both C# and JavaScript, the two main programming languages used by Unity developers. It is also possible to write Boo-based scripts, but these are rarely used except by those with existing experience in the language.

To follow the next steps, you may choose either JavaScript or C#, and then continue with your preferred language. To begin, click on the Create button on the Project panel, then choose either JavaScript or C# script, or simply click on the Add Component button on the Main Camera's Inspector panel. Your new script will be placed into the Project panel named NewBehaviourScript, and will show an icon of a page with either JavaScript or C# written on it. When selecting your new script, Unity offers a preview of what is already in the script in the Inspector view, with an accompanying Edit button that, when clicked, will launch the script in the default script editor, MonoDevelop. You can also launch a script in your script editor at any time by double-clicking on its icon in the Project panel.

New behaviour script or class

New scripts can be thought of as new classes in Unity terms. If you are new to programming, think of a class as a set of actions, properties, and other stored information that can be accessed under the heading of its name. For example, a class called Dog may contain properties such as color, breed, size, or gender, and have actions such as rollover or fetchStick. These properties can be described as variables, while the actions can be written in functions, also known as methods.

In this example, to refer to the breed variable, a property of the Dog class, we might refer to the class it is in, Dog, and use a period (full stop) to refer to this variable, in the following way:

```csharp
Dog.breed;
```

If we want to call a function within the Dog class, we might say, for example, the following:

```csharp
Dog.fetchStick();
```

We can also add arguments into functions - these aren't the everyday arguments we have with one another! Think of them as modifying the behavior of a function. For example, with our fetchStick function, we might build in an argument that defines how quickly our dog will fetch the stick. This might be called as follows:

```csharp
Dog.fetchStick(25);
```

While these are abstract examples, often it can help to transpose coding into commonplace examples in order to make sense of them. As we continue, think back to this example or come up with some examples of your own to help train yourself to understand classes of information and their properties. When you write a script in C# or JavaScript, you are writing a new class or classes with their own properties (variables) and instructions (functions) that you can call into play at the desired moment in your games.
What's inside a new C# behaviour

When you begin with a new C# script, Unity gives you the following code to get started:

```csharp
using UnityEngine;
using System.Collections;

public class NewBehaviourScript : MonoBehaviour {

    // Use this for initialization
    void Start () {
    }

    // Update is called once per frame
    void Update () {
    }
}
```

This begins with the two necessary references to the Unity engine itself:

```csharp
using UnityEngine;
using System.Collections;
```

It goes on to establish the class named after the script. With C#, you'll be required to name your scripts to match the class declared inside them. This is why you will see public class NewBehaviourScript : MonoBehaviour { at the beginning of a new C# document, as NewBehaviourScript is the default name that Unity gives to newly generated scripts. If you rename your script in the Project panel when it is created, Unity will rewrite the class name in your C# script.

Code in classes

When writing code, most of your functions, variables, and other scripting elements will be placed within the class of a script in C#. Within, in this context, means that the code must occur after the class declaration and before the corresponding closing } at the bottom of the script. So, unless told otherwise, while following the instructions, assume that your code should be placed within the class established in the script. In JavaScript, this is less relevant, as the entire script is the class; it is not explicitly established.

Basic functions

Unity as an engine has many of its own functions that can be used to call different features of the game engine, and it includes two important ones when you create a new script in C#. Functions (also known as methods) most often start with the void keyword in C#. This is the function's return type, which is the kind of data a function may result in. As most functions are simply there to carry out instructions rather than return information, you will often see void at the beginning of their declaration, which simply means that no data will be returned.

Some basic functions are explained as follows:

Start(): This is called when the scene first launches, so it is often used, as suggested in the generated code, for initialization. For example, you may have a core variable that must be set to 0 when the game scene begins, or perhaps a function that spawns your player character in the correct place at the start of a level.
Update(): This is called in every frame that the game runs, and is crucial for checking the state of various parts of your game during this time, as many different conditions of game objects may change while the game is running.

Variables in C#

To store information in a variable in C#, you will use the following syntax:

```csharp
typeOfData nameOfVariable = value;
```

Consider the following example:

```csharp
int currentScore = 5;
```

Another example would be:

```csharp
float currentVelocity = 5.86f;
```

Note that the examples here show numerical data, with int meaning integer, that is, a whole number, and float meaning floating point, that is, a number with a decimal place, which in C# requires the letter f to be placed at the end of the value. This syntax is somewhat different from JavaScript. Refer to the Variables in JavaScript section.

What's inside a new JavaScript behaviour?

While fulfilling the same functions as a C# file, a new empty JavaScript file shows you less, as the entire script itself is considered to be the class, and the empty space in the script is considered to be within the opening and closing of the class, as the class declaration itself is hidden.
You will also note that the lines using UnityEngine; and using System.Collections; are hidden in JavaScript, so in a new JavaScript file you will simply be shown the Update() function:

```javascript
function Update () {
}
```

You will note that in JavaScript you declare functions differently, using the keyword function before the name. You will also need to write declarations of variables and various other scripted elements with a slightly different syntax. We will look at examples of this as we progress.

Variables in JavaScript

The syntax for variables in JavaScript works as follows, and is always preceded by the prefix var, as shown:

```javascript
var variableName : TypeOfData = value;
```

For example:

```javascript
var currentScore : int = 0;
```

Another example is:

```javascript
var currentVelocity : float = 5.86;
```

As you must have noticed, the float value does not require the letter f following its value as it does in C#. As you compare scripts written in the two different languages later on, you will notice that C# often has stricter rules about how scripts are written, especially regarding explicitly stating the types of data being used.

Comments

In both C# and JavaScript in Unity, you can write comments using:

```javascript
// two forward slash symbols for a single-line comment
```

Another way of doing this would be:

```javascript
/* a forward slash and a star to open a multiline comment,
   and a star and a forward slash to close it */
```

You may write comments in the code to help you remember what each part does as you progress. Remember that because comments are not executed as code, you can write whatever you like, including pieces of code; as long as they are contained within a comment, they will never be treated as working code.

Wall attack

Now let's put some of your new scripting knowledge into action and turn our existing scene into an interactive gameplay prototype. In the Project panel in Unity, rename your newly created script Shooter by selecting it, pressing return (Mac) or F2 (Windows), and typing in the new name. If you are using C#, remember to ensure that the class declaration inside the script matches this script name:

```csharp
public class Shooter : MonoBehaviour {
```

As mentioned previously, JavaScript users will not need to do this.

To kick-start your knowledge of using scripting in Unity, we will write a script to control the camera and allow shooting of a projectile at the wall that we have built. To begin with, we will establish three variables:

bullet: This is a variable of type Rigidbody, as it will hold a reference to a physics-controlled object we will make.
power: This is a floating point number we will use to set the power of shooting.
moveSpeed: This is another floating point number we will use to define the speed of movement of the camera using the arrow keys.

These variables must be public member variables in order for them to display as adjustable settings in the Inspector. You'll see this in action very shortly!

Declaring public variables

Public variables are important to understand, as they allow you to create variables that will be accessible from other scripts - an important part of game development, as it allows for simpler inter-object communication. Public variables are also really useful because they appear as settings you can adjust visually in the Inspector once your script is attached to an object. Private variables are the opposite: designed to be accessible only within the scope of the script, class, or function they are defined within, they do not appear as settings in the Inspector.
C#

Before we begin, as we will not be using it, remove the Start() function from this script by deleting void Start () {}. To establish the required variables, put the following code snippet into your script after the opening of the class, shown as follows:

```csharp
using UnityEngine;
using System.Collections;

public class Shooter : MonoBehaviour {

    public Rigidbody bullet;
    public float power = 1500f;
    public float moveSpeed = 2f;

    void Update () {
    }
}
```

Note that in this example, the default explanatory comments and the Start() function have been removed in order to save space.

JavaScript

In order to establish public member variables in JavaScript, you simply need to ensure that your variables are declared outside of any existing function. This is usually done at the top of the script, so to declare the three variables we need, add the following to the top of your new Shooter script so that it looks like this:

```javascript
var bullet : Rigidbody;
var power : float = 1500;
var moveSpeed : float = 5;

function Update () {
}
```

Note that JavaScript (UnityScript) is much less declarative and needs less typing to start.

Assigning scripts to objects

In order for this script to be used within our game, it must be attached as a component of one of the game objects within the existing scene. Save your script by choosing File | Save from the top menu of your script editor and return to Unity.

There are several ways to assign a script to an object in Unity:

Drag it from the Project panel and drop it onto the name of an object in the Hierarchy panel.
Drag it from the Project panel and drop it onto the visual representation of the object in the Scene panel.
Select the object you wish to apply the script to, then drag and drop the script to empty space at the bottom of the Inspector view for that object.
Select the object you wish to apply the script to, then choose Component | Scripts | and the name of your script from the top menu.

The most common method is the first approach, and it is the most appropriate here, since trying to drag to the camera in the Scene View, for example, would be difficult, as the camera itself doesn't have a tangible surface to drag to. For this reason, drag your new Shooter script from the Project panel and drop it onto the name of Main Camera in the Hierarchy to assign it, and you should see your script appear as a new component, following the existing audio listener component. You will also see its three public variables, bullet, power, and moveSpeed, in the Inspector.

You can alternatively work directly in the Inspector: press the Add Component button and look for Shooter by typing in the search box. Note that this is only valid if you didn't already add the component the first way; in that case, the Shooter component will already be attached to the camera GameObject.

As you will see, Unity has taken the variable names and given them capital letters, and in the case of our moveSpeed variable, the capital letter in the middle of the phrase signifies the start of a new word, so the Inspector places a space between the two words when showing the public variable. You can also see here that the bullet variable is not yet set, but it is expecting an object to be assigned to it that has a Rigidbody attached - this is often referred to as being a Rigidbody object. Despite the fact that, in Unity, all objects in the scene can be referred to as game objects, when describing an object as a Rigidbody object in scripting, we will only be able to refer to properties and functions of the Rigidbody class.
This is not a problem, however; it simply makes our script more efficient than referring to the entire GameObject class. For more on this, take a look at the script reference documentation for both classes:

GameObject
Rigidbody

Beware that when adjusting values of public variables in the Inspector, any values changed will simply override those written in the script, rather than replacing them. Let's continue working on our script and add some interactivity; so, return to your script editor now.

Moving the camera

Next, we will make use of the moveSpeed variable combined with keyboard input in order to move the camera, effectively creating a primitive aiming of our shot, as we will use the camera as the point to shoot from. As we want to use the arrow keys on the keyboard, we first need to be aware of how to address them in code. Unity has many inputs that can be viewed and adjusted using the Input Manager; choose Edit | Project Settings | Input.

Two of the default settings for input are Horizontal and Vertical. These rely on an axis-based input that, when holding the Positive Button, builds to a value of 1, and when holding the Negative Button, builds to a value of -1. Releasing either button means that the input's value springs back to 0, as it would if using a sprung analog joystick on a gamepad.

As Input is also the name of a class, and all named elements in the Input Manager are axes or buttons, in scripting terms we can simply use:

```csharp
Input.GetAxis("Horizontal");
```

This retrieves the current value of the horizontal keys, that is, a value between -1 and 1, depending upon what the user is pressing. Let's put that into practice in our script now, using local variables to represent our axes. By doing this, we can modify the value of these variables later using multiplication, taking them from a maximum value of 1 to a higher number, allowing us to move the camera faster than 1 unit at a time. These variables are not something that we will ever need to set in the Inspector, as Unity is assigning values based on our key input. As such, they can be established as local variables.

Local, private, and public variables

Before we continue, let's take an overview of local, private, and public variables in order to cement your understanding:

Local variables: These are variables established inside a function; they will not be shown in the Inspector, and are only accessible to the function they are in.
Private variables: These are established outside a function and are therefore accessible to any function within their class. However, they are also not visible in the Inspector.
Public variables: These are established outside a function, are accessible to any function in their class and also to other scripts, and are visible for editing in the Inspector.

Local variables and receiving input

The local variables in C# and JavaScript are shown as follows:

C#

Here is the code for C#:

```csharp
void Update () {
    float h = Input.GetAxis("Horizontal") * Time.deltaTime * moveSpeed;
    float v = Input.GetAxis("Vertical") * Time.deltaTime * moveSpeed;
```

JavaScript

Here is the code for JavaScript:

```javascript
function Update () {
    var h : float = Input.GetAxis("Horizontal") * Time.deltaTime * moveSpeed;
    var v : float = Input.GetAxis("Vertical") * Time.deltaTime * moveSpeed;
```

The variables declared here, h for Horizontal and v for Vertical, could be named anything we like; it is simply quicker to write single letters.
Generally speaking, we would normally give these longer names, partly because some letters cannot be used as variable names, for example, x, y, and z, because they are used for coordinate values and are therefore reserved for use as such.

As these axes' values can be anything from -1 to 1, they are likely to be numbers with a decimal place, and as such we must declare them as floating point variables. They are then multiplied, using the * symbol, by Time.deltaTime, which effectively divides the value by the number of frames per second (deltaTime is the time it takes from one frame to the next, or the time taken since the Update() function last ran), so that the value adds up to a consistent amount per second, regardless of the frame rate.

The resulting value is then increased by multiplying it by the public variable we made earlier, moveSpeed. This means that although the values of h and v are local variables, we can still affect them by adjusting the public moveSpeed in the Inspector, as it is part of the equation that those variables represent. This is a common practice in scripting, as it takes advantage of publicly accessible settings combined with specific values generated by a function.

You read an excerpt from the book Unity 5.x Game Development Essentials, Third Edition, written by Tommaso Lintrami. Unity is the most popular game engine among indie developers, start-ups, and medium to large independent game development companies. This book is a complete exercise in game development covering environments, physics, sound, particles, and much more, to get you up and running with Unity rapidly.

Scripting Strategies
Unity 3.x Scripting - Character Controller versus Rigidbody


GitHub Universe 2019: GitHub for mobile, GitHub Archive Program and more announced amid protests against GitHub’s ICE contract

Vincy Davis
14 Nov 2019
4 min read
Yesterday, GitHub commenced its popular product conference, GitHub Universe 2019, in San Francisco. The two-day annual conference celebrates GitHub's 40+ million developers and their contributions to the open source community. Day 1 of the conference had many interesting announcements, like GitHub for mobile, the GitHub Archive Program, and more. Let's look at some of the major announcements at the GitHub Universe 2019 conference.

GitHub for mobile, iOS (beta)

GitHub for mobile is a beta app that aims to give users the flexibility to work and interact with their team wherever they are. It lets users share feedback on a design discussion or review code in a non-complex development environment. The native app will adapt to any screen size and will also work in dark mode, based on the device preference. Currently available only on iOS, the GitHub team has said that an Android version will follow soon.

https://twitter.com/italolelis/status/1194929030518255616
https://twitter.com/YashSharma___/status/1194899905552105472

GitHub Archive Program

"Our world is powered by open source software. It's a hidden cornerstone of our civilization and the shared heritage of all humanity. The mission of the GitHub Archive Program is to preserve it for generations to come," states the official GitHub blog. GitHub has partnered with the Stanford Libraries, the Long Now Foundation, the Internet Archive, the Software Heritage Foundation, Piql, Microsoft Research, and the Bodleian Library to preserve all the available open source code in the world. It will safeguard the data by storing multiple copies across various data formats and locations, including a "very-long-term archive" called the GitHub Arctic Code Vault, which is designed to last at least 1,000 years.

https://twitter.com/vithalreddy/status/1194846571835183104
https://twitter.com/sonicbw/status/1194680722856042499

Read More: GitHub Satellite 2019 focuses on community, security, and enterprise

Automating workflows from code to cloud

General availability of GitHub Actions

Last year, at the GitHub Universe conference, GitHub Actions was announced in beta. This year, GitHub has made it generally available to all users. In the past year, GitHub Actions has received contributions from the developers of AWS, Google, and others. Actions has now developed into a new standard for building and sharing automation for software development, including a CI/CD solution and native package management. GitHub has also announced the free use of self-hosted runners and artifact caching.

https://twitter.com/qmixi/status/1194379789483704320
https://twitter.com/inversemetric/status/1194668430290345984

General availability of GitHub Packages

In May this year, GitHub announced the beta version of the GitHub Package Registry, its new package management service. Later, in September, after gathering community feedback, GitHub announced that the service had gained proxy support for the primary npm registry. Since its launch, GitHub Packages has received over 30,000 unique packages that served the needs of over 10,000 organizations. Now, at GitHub Universe 2019, the GitHub team has announced the general availability of GitHub Packages, along with added support for using the GitHub Actions token.

https://twitter.com/Chris_L_Ayers/status/1194693253532020736

These were some of the major announcements from day 1 of the GitHub Universe 2019 conference; head over to GitHub's blog for more details of the event.
Tech workers protest against GitHub's ICE contract

Major product announcements aside, one thing that garnered a lot of attention at the GitHub Universe conference was the protest conducted by GitHub workers, along with the Tech Workers Coalition, opposing GitHub's $200,000 contract with Immigration and Customs Enforcement (ICE). Many high-profile speakers have dropped out of the GitHub Universe 2019 conference, and at least five GitHub employees have resigned from GitHub over its support for ICE.

https://twitter.com/lily_dart/status/1194216293668401152

Read More: Largest 'women in tech' conference, Grace Hopper Celebration, renounces Palantir as a sponsor due to concerns over its work with the ICE

Yesterday at the event, the protesting tech workers brought a giant cage to symbolize how ICE uses cages to detain migrant children.

https://twitter.com/githubbers/status/1194662876587233280

Tech workers around the world have extended their support to the protest against GitHub.

https://twitter.com/ConMijente/status/1194665524191318016
https://twitter.com/CoralineAda/status/1194695061717450752
https://twitter.com/maybekatz/status/1194683980877975552

GitHub along with Weights & Biases introduced CodeSearchNet challenge evaluation and CodeSearchNet Corpus
GitHub acquires Semmle to secure open-source supply chain; attains CVE Numbering Authority status
GitHub Package Registry gets proxy support for the npm registry
GitHub updates to Rails 6.0 with an incremental approach
GitHub now supports two-factor authentication with security keys using the WebAuthn API

Learning Dependency Injection (DI)

Packt
08 Mar 2018
15 min read
In this article by Sherwin John Calleja Tragura, author of the book Spring 5.0 Cookbook, we will learn about implementing the Spring container using XML and JavaConfig, and about managing beans in an XML-based container. In this article, you will learn how to:

Implement the Spring container using XML
Implement the Spring container using JavaConfig
Manage the beans in an XML-based container

Implementing the Spring container using XML

Let us begin with the creation of the Spring Web Project using the Maven plugin of our STS Eclipse 8.3. This web project will implement our first Spring 5.0 container using the XML-based technique. This is the most conventional but robust way of creating the Spring container. The container is where the objects are created, managed, wired together with their dependencies, and monitored from their initialization up to their destruction. This recipe will mainly highlight how to create an XML-based Spring container.

Getting ready

Create a Maven project ready for development using STS Eclipse 8.3. Be sure to have installed the correct JRE. Let us name the project ch02-xml.

How to do it…

After creating the project, certain Maven errors will be encountered. Fix the Maven issues of our ch02-xml project in order to use the XML-based Spring 5.0 container by performing the following steps:

Open the pom.xml of the project and add the following properties, which contain the Spring build version and Servlet container to utilize:

```xml
<properties>
    <spring.version>5.0.0.BUILD-SNAPSHOT</spring.version>
    <servlet.api.version>3.1.0</servlet.api.version>
</properties>
```

Add the following Spring 5 dependencies inside pom.xml. These dependencies are essential in providing us with the interfaces and classes to build our Spring container:

```xml
<dependencies>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-core</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-beans</artifactId>
        <version>${spring.version}</version>
    </dependency>
</dependencies>
```

It is required to add the following repositories, from which the Spring 5.0 dependencies in Step 2 will be downloaded:

```xml
<repositories>
    <repository>
        <id>spring-snapshots</id>
        <name>Spring Snapshots</name>
        <url>https://repo.spring.io/libs-snapshot</url>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </repository>
</repositories>
```

Then add the Maven plugin for deployment, but be sure the build recognizes web.xml as the deployment descriptor. This can be done by enabling <failOnMissingWebXml> or just deleting the <configuration> tag, as follows:

```xml
<plugin>
    <artifactId>maven-war-plugin</artifactId>
    <version>2.3</version>
</plugin>
```

Follow the Tomcat Maven plugin for deployment, as explained in Chapter 1.

After the Maven configuration details, check if there is a WEB-INF folder inside src/main/webapp. If there is none, create one. This is mandatory for this project, since we will be using a deployment descriptor (or web.xml).

Inside the WEB-INF folder, create a deployment descriptor or drop a web.xml template inside the src/main/webapp/WEB-INF directory.

Then, create an XML-based Spring container named ch02-beans.xml inside the ch02-xml/src/main/java/ directory.
The configuration file must contain the following namespaces and tags:

   <?xml version="1.0" encoding="UTF-8"?>
   <beans xmlns="http://www.springframework.org/schema/beans"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:context="http://www.springframework.org/schema/context"
      xsi:schemaLocation="http://www.springframework.org/schema/beans
         http://www.springframework.org/schema/beans/spring-beans.xsd
         http://www.springframework.org/schema/context
         http://www.springframework.org/schema/context/spring-context.xsd">
   </beans>

You can generate this file using the STS Eclipse Wizard (Ctrl-N), under the Spring module, with the Spring Bean Configuration File option.

9. Save all the files. Clean and build the Maven project. Do not deploy yet, because this is just a standalone project at the moment.

How it works…

This project just imported three major Spring 5.0 libraries, namely Spring-Core, Spring-Beans, and Spring-Context, because the major classes and interfaces for creating the container are found in these libraries. This shows that Spring, unlike other frameworks, does not need an entire load of libraries just to set up the initial platform. Spring can be perceived as a huge enterprise framework nowadays, but internally it is still lightweight.

The basic container that manages objects in Spring is provided by the org.springframework.beans.factory.BeanFactory interface and can only be found in the Spring-Beans module. Once additional features are needed, such as message resource handling, AOP capabilities, application-specific contexts, and listener implementation, the sub-interface of BeanFactory, namely the org.springframework.context.ApplicationContext interface, is used instead. This ApplicationContext, found in the Spring-Context module, is the one that provides an enterprise-specific container for all its applications, because it encompasses a larger scope of Spring components than the BeanFactory interface.

The container created, ch02-beans.xml, an ApplicationContext, is an XML-based configuration that contains XSD schemas from the three main libraries imported. These schemas have tag libraries and bean properties which are essential in managing the whole framework. But beware of runtime errors once libraries are removed from the dependencies, because using these tags is equivalent to using the libraries per se. (The final Spring Maven project directory structure is shown as a figure in the book.)

Implementing the Spring container using JavaConfig

Another option for implementing the Spring 5.0 container is through the use of Spring JavaConfig. This is a technique that uses pure Java classes to configure the framework's container. It eliminates the use of bulky and tedious XML metadata and also provides a type-safe and refactoring-free approach to configuring entities or collections of objects in the container. This recipe will showcase how to create the container using JavaConfig in a web.xml-less approach.

Getting ready

Create another Maven project and name the project ch02-jc. This STS Eclipse project will be using a Java class approach, including for its deployment descriptor.

How to do it…

1. To get rid of the usual Maven bugs, immediately open the pom.xml of ch02-jc and add <properties>, <dependencies>, and <repositories> equivalent to what was added in the Implementing the Spring container using XML recipe.

2. Next, get rid of web.xml. Since the Servlet 3.0 specification was implemented, servlet containers can support projects without web.xml. This is done by implementing the handler interface org.springframework.web.WebApplicationInitializer to programmatically configure the ServletContext.
3. Create a SpringWebInitializer class and override its onStartup() method, without any implementation yet (a completed sketch of this class appears at the end of this article):

   public class SpringWebInitializer implements WebApplicationInitializer {
      @Override
      public void onStartup(ServletContext container) throws ServletException {
      }
   }

4. The lines in the previous step will generate some errors until you add the following Maven dependency:

   <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-web</artifactId>
      <version>${spring.version}</version>
   </dependency>

5. In pom.xml, disable <failOnMissingWebXml>.

6. After the Maven details, create a class named BeanConfig, the ApplicationContext definition, bearing the annotation @Configuration at the top of it. The class must be inside the org.packt.starter.ioc.context package and must be an empty class at the moment:

   @Configuration
   public class BeanConfig { }

7. Save all the files, and clean and build the Maven project.

How it works…

The Maven project ch02-jc makes use of both JavaConfig and the ServletContainerInitializer, meaning there will be no XML configuration, from the servlet down to the Spring 5.0 containers. The BeanConfig class is the ApplicationContext of the project; its @Configuration annotation indicates that the class is used by JavaConfig as a source of bean definitions. This is handy compared to creating an XML-based configuration with lots of metadata.

On the other hand, ch02-jc implemented org.springframework.web.WebApplicationInitializer, which is a handler of org.springframework.web.SpringServletContainerInitializer, the framework's implementation of the servlet's ServletContainerInitializer. The SpringServletContainerInitializer notifies the WebApplicationInitializer implementations during the execution of their onStartup(ServletContext) methods with regard to the programmatic registration of filters, servlets, and listeners provided by the ServletContext. Eventually, the servlet container will acknowledge the status reported by SpringServletContainerInitializer, thus eliminating the use of web.xml.

On Maven's side, the plugin for deployment must be notified that the project will not use web.xml. This is done by setting <failOnMissingWebXml> to false inside its <configuration> tag. (The final Spring Web Project directory structure is shown as a figure in the book.)

Managing the beans in an XML-based container

Frameworks become popular because of the principles behind the architectures they are made of. Each framework is built from different design patterns that manage the creation and behavior of the objects it manages. This recipe will detail how Spring 5.0 manages the objects of applications and how it shares a set of methods and functions across the platform.

Getting ready

The two Maven projects previously created will be utilized to illustrate how Spring 5.0 loads objects into heap memory. We will also be utilizing the ApplicationContext rather than the BeanFactory container, in preparation for the next recipes involving more Spring components.

How to do it…

With our ch02-xml project, let us demonstrate how Spring loads objects using the XML-based ApplicationContext container:

1. Create a package layer, org.packt.starter.ioc.model, for our model classes. Our model classes will be typical Plain Old Java Objects (POJOs), for which the Spring 5.0 architecture is known.
2. Inside the newly created package, create the classes Employee and Department, which contain the following blueprints:

   public class Employee {
      private String firstName;
      private String lastName;
      private Date birthdate;
      private Integer age;
      private Double salary;
      private String position;
      private Department dept;

      public Employee() {
         System.out.println("an employee is created.");
      }

      public Employee(String firstName, String lastName, Date birthdate,
            Integer age, Double salary, String position, Department dept) {
         this.firstName = firstName;
         this.lastName = lastName;
         this.birthdate = birthdate;
         this.age = age;
         this.salary = salary;
         this.position = position;
         this.dept = dept;
         System.out.println("an employee is created.");
      }

      // getters and setters
   }

   public class Department {
      private Integer deptNo;
      private String deptName;

      public Department() {
         System.out.println("a department is created.");
      }

      // getters and setters
   }

3. Afterwards, open the ApplicationContext ch02-beans.xml. Register our first set of Employee and Department objects using the <bean> tag, as follows:

   <bean id="empRec1" class="org.packt.starter.ioc.model.Employee" />
   <bean id="dept1" class="org.packt.starter.ioc.model.Department" />

4. The beans in Step 3 contain private instance variables that have zero and null default values. To update them, our classes have mutators or setter methods that can be used to avoid a NullPointerException, which always happens when we immediately use empty objects. In Spring, calling these setters is tantamount to injecting data into the <bean>, similar to how the following objects are created:

   <bean id="empRec2" class="org.packt.starter.ioc.model.Employee">
      <property name="firstName"><value>Juan</value></property>
      <property name="lastName"><value>Luna</value></property>
      <property name="age"><value>70</value></property>
      <property name="birthdate"><value>October 28, 1945</value></property>
      <property name="position"><value>historian</value></property>
      <property name="salary"><value>150000</value></property>
      <property name="dept"><ref bean="dept2"/></property>
   </bean>

   <bean id="dept2" class="org.packt.starter.ioc.model.Department">
      <property name="deptNo"><value>13456</value></property>
      <property name="deptName"><value>History Department</value></property>
   </bean>

5. A <property> tag is equivalent to a setter definition accepting an actual value or an object reference. The name attribute defines the name of the setter minus the set prefix, converted to camel-case notation. The value attribute, or the <value> tag, pertains to supported Spring-type values (for example, int, double, float, Boolean, String). The ref attribute, or <ref>, provides a reference to another loaded <bean> in the container. Another way of writing the bean object empRec2 is through the use of the ref and value attributes, such as the following:

   <bean id="empRec3" class="org.packt.starter.ioc.model.Employee">
      <property name="firstName" value="Jose"/>
      <property name="lastName" value="Rizal"/>
      <property name="age" value="101"/>
      <property name="birthdate" value="June 19, 1950"/>
      <property name="position" value="scriber"/>
      <property name="salary" value="90000"/>
      <property name="dept" ref="dept3"/>
   </bean>

   <bean id="dept3" class="org.packt.starter.ioc.model.Department">
      <property name="deptNo" value="56748"/>
      <property name="deptName" value="Communication Department" />
   </bean>

6. Another way of updating the private instance variables of the model objects is to make use of the constructors.
Actual Spring data and object references can be inserted into the constructors through the metadata:

   <bean id="empRec5" class="org.packt.starter.ioc.model.Employee">
      <constructor-arg><value>Poly</value></constructor-arg>
      <constructor-arg><value>Mabini</value></constructor-arg>
      <constructor-arg><value>August 10, 1948</value></constructor-arg>
      <constructor-arg><value>67</value></constructor-arg>
      <constructor-arg><value>45000</value></constructor-arg>
      <constructor-arg><value>Linguist</value></constructor-arg>
      <constructor-arg><ref bean="dept3"></ref></constructor-arg>
   </bean>

7. After all the modifications, save ch02-beans.xml. Create a TestBeans class inside the src/test/java directory. This class will load the XML configuration resource into the ApplicationContext container through org.springframework.context.support.ClassPathXmlApplicationContext and fetch all the objects created through its getBean() method:

   public class TestBeans {
      public static void main(String args[]) {
         ApplicationContext context = new ClassPathXmlApplicationContext("ch02-beans.xml");
         System.out.println("application context loaded.");

         System.out.println("****The empRec1 bean****");
         Employee empRec1 = (Employee) context.getBean("empRec1");

         System.out.println("****The empRec2 bean****");
         Employee empRec2 = (Employee) context.getBean("empRec2");
         Department dept2 = empRec2.getDept();
         System.out.println("First Name: " + empRec2.getFirstName());
         System.out.println("Last Name: " + empRec2.getLastName());
         System.out.println("Birthdate: " + empRec2.getBirthdate());
         System.out.println("Salary: " + empRec2.getSalary());
         System.out.println("Dept. Name: " + dept2.getDeptName());

         System.out.println("****The empRec5 bean****");
         Employee empRec5 = context.getBean("empRec5", Employee.class);
         Department dept3 = empRec5.getDept();
         System.out.println("First Name: " + empRec5.getFirstName());
         System.out.println("Last Name: " + empRec5.getLastName());
         System.out.println("Dept. Name: " + dept3.getDeptName());
      }
   }

8. The expected output after running the main() thread will be:

   an employee is created.
   an employee is created.
   a department is created.
   an employee is created.
   a department is created.
   an employee is created.
   a department is created.
   application context loaded.
   ****The empRec1 bean****
   ****The empRec2 bean****
   First Name: Juan
   Last Name: Luna
   Birthdate: Sun Oct 28 00:00:00 CST 1945
   Salary: 150000.0
   Dept. Name: History Department
   ****The empRec5 bean****
   First Name: Poly
   Last Name: Mabini
   Dept. Name: Communication Department

How it works…

The principle behind creating <bean> objects in the container is called the Inversion of Control design pattern. In order to use the objects, their dependencies, and also their behavior, these must be placed within the framework per se. After registering them in the container, Spring will just take care of their instantiation and their availability to other objects. Developers can just "fetch" them if they want to include them in their software modules, as shown in the diagram in the book.

The IoC design pattern can be likened to the Hollywood Principle ("Don't call us, we'll call you!"), which is popular in most object-oriented programming languages. The framework does not care whether the developer needs the objects or not, because the lifespan of the objects lies in the framework's rules.
When it comes to setting new values or updating values of an object's private variables, IoC has an implementation which can be used for "injecting" new actual values or object references into the bean; it is popularly known as the Dependency Injection (DI) design pattern. This principle exposes all the bean's properties to the public through its setter methods or constructors. Injecting Spring values and object references into setter methods using the <property> tag, without knowing the implementation, is called the Method Injection type of DI. On the other hand, if we create the bean with initialized values injected through its constructor using <constructor-arg>, it is known as Constructor Injection.

To create the ApplicationContext container, we need to instantiate ClassPathXmlApplicationContext or FileSystemXmlApplicationContext, depending on the location of the XML definition file. Since the file is found in ch02-xml/src/main/java/, the ClassPathXmlApplicationContext implementation is the best option. This proves that the ApplicationContext is an object too, bearing all that XML metadata. It has several overloaded getBean() methods used to fetch all the objects loaded into it.

Summary

In this article we went over how to create an XML-based Spring container, how to create the container using JavaConfig in a web.xml-less approach, and how Spring 5.0 manages the objects of applications and shares a set of methods and functions across the platform.
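As a closing illustration, here is a minimal sketch of how the two empty classes from the JavaConfig recipe might eventually be filled in. The bean definitions below simply mirror the empRec2/dept2 beans from the XML recipe; the exact names, values, and wiring are illustrative assumptions on our part, not the book's next steps:

   import javax.servlet.ServletContext;
   import javax.servlet.ServletException;
   import org.springframework.context.annotation.Bean;
   import org.springframework.context.annotation.Configuration;
   import org.springframework.web.WebApplicationInitializer;
   import org.springframework.web.context.ContextLoaderListener;
   import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;

   @Configuration
   public class BeanConfig {
      // Hypothetical bean definitions mirroring the XML recipe's dept2/empRec2 beans.
      @Bean
      public Department dept1() {
         Department dept = new Department();
         dept.setDeptNo(13456);
         dept.setDeptName("History Department");
         return dept;
      }

      @Bean
      public Employee empRec1(Department dept1) {
         Employee emp = new Employee();
         emp.setFirstName("Juan");
         emp.setLastName("Luna");
         emp.setDept(dept1); // setter injection expressed in Java instead of <property>
         return emp;
      }
   }

   public class SpringWebInitializer implements WebApplicationInitializer {
      @Override
      public void onStartup(ServletContext container) throws ServletException {
         // Build an annotation-driven application context from BeanConfig and
         // attach it to the servlet context through a ContextLoaderListener.
         AnnotationConfigWebApplicationContext context = new AnnotationConfigWebApplicationContext();
         context.register(BeanConfig.class);
         container.addListener(new ContextLoaderListener(context));
      }
   }

With something like this in place, context.getBean(Employee.class) would resolve the same kind of fully wired object that the XML container produced, with @Bean method parameters playing the role of <ref>.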

Microsoft announces the general availability of Live Share and brings it to Visual Studio 2019

Bhagyashree R
03 Apr 2019
2 min read
Microsoft yesterday announced that Live Share is now generally available and included in Visual Studio 2019. This release comes with a lot of updates based on the feedback the team has received since the public preview of Live Share started, including a read-only mode, support for C++ and Python, and more.

What is Live Share?

Microsoft first introduced Live Share at Connect 2017 and launched its public preview in May 2018. It is a tool that enables you to collaborate with your team on the same codebase without needing to synchronize code or to configure the same development tools, settings, or environment. You can edit and debug your code with others in real time, regardless of what programming languages you are using or the type of app you are building. It allows you to do a bunch of different things, like instantly sharing your project with a teammate, and sharing debugging sessions, terminal instances, localhost web apps, voice calls, and more.

With Live Share, you do not have to leave the comfort of your favorite tools. You can take advantage of collaboration while retaining your personal editor preferences. It also gives each developer their own cursor, to enable seamless transitions between following one another.

What's new in Live Share?

This release includes features like a read-only mode, support for more languages like C++ and Python, and the ability for guests to start debugging sessions. Now, you can use Live Share while pair programming, conducting code reviews, giving lectures and presenting to students and colleagues, or even mob programming during hackathons.

This release also comes with support for a few third-party extensions to improve the overall experience when working with Live Share. The two extensions are OzCode and CodeStream. OzCode offers a suite of visualizations, like datatips to see how items are passed through a LINQ query, and a heads-up display to show how a set of boolean expressions evaluates. With CodeStream, you can create discussions about your codebase, which serve as an integrated chat feature within a Live Share session.

To read more about the updates in Live Share, check out the official announcement.

Microsoft brings PostgreSQL extension and SQL Notebooks functionality to Azure Data Studio

Microsoft open-sources Project Zipline, its data compression algorithm and hardware for the cloud

Microsoft announces Game stack with Xbox Live integration to Android and iOS

Java Refactoring in NetBeans

Packt
08 Jun 2011
7 min read
NetBeans IDE 7 Cookbook

Over 70 highly focused practical recipes to maximize your output with NetBeans

Introduction

Be warned that many of the refactoring techniques presented in this article might break some code. NetBeans, and other IDEs for that matter, make it easier to revert changes, but of course be wary of things going wrong. With that in mind, let's dig in.

Renaming elements

This recipe focuses on how the IDE handles the renaming of all elements of a project, be it the project itself, classes, methods, variables, or packages.

How to do it...

Let's create the code to be renamed:

1. Create a new project; this can be achieved by either clicking File and then New Project or pressing Ctrl+Shift+N.
2. In the New Project window, choose Java on the Categories side, and on the Projects side select Java Application. Then click Next.
3. Under Name and Location, name the project RenameElements and click Finish.
4. With the project created, we will need to clear the RenameElements.java class of the main method and insert the following code:

   package renameelements;

   import java.io.File;

   public class RenameElements {
      private void printFiles(String string) {
         File file = new File(string);
         if (file.isFile()) {
            System.out.println(file.getPath());
         } else if (file.isDirectory()) {
            for (String directory : file.list())
               printFiles(string + file.separator + directory);
         }
         if (!file.exists())
            System.out.println(string + " does not exist.");
      }
   }

5. The next step is to rename the package, so place the cursor on top of the package name, renameelements, and press Ctrl+R. A Rename dialog pops up with the package name. Type util under New Name and click on Refactor.
6. Our class contains several variables we can rename. Place the cursor on top of the String parameter named string, press Ctrl+R, type path, and press Enter.
7. Let's rename the other variables: rename file into filePath.

To rename methods, perform the steps below:

1. Place the cursor on top of the method declaration, printFiles, right-click it, then select Refactor and Rename....
2. In the Rename Method dialog, under New Name, enter recursiveFilePrinting and press Refactor.

Then let's rename classes:

1. To rename a class, navigate to the Projects window and press Ctrl+R on the RenameElements.java file.
2. In the Rename Class dialog, enter FileManipulator and press Enter.

And finally, renaming an entire project:

1. Navigate to the Projects window, right-click on the project name, RenameElements, and choose Rename....
2. Under Project Name, enter FileSystem and tick Also Rename Project Folder; after that, click on Rename.

How it works...

Renaming a project works a bit differently from renaming a variable, since in this action NetBeans needs to rename the folder where the project is placed. The Ctrl+R shortcut is not enough in itself, so NetBeans shows the Rename Project dialog. This emphasizes to the developer that something deeper is happening.

When renaming a project, NetBeans gives the developer the possibility of renaming the folder where the project is contained to the same name as the project. This is a good practice and, more often than not, is followed.

Moving elements

NetBeans enables the developer to easily move classes around different projects and packages. No more breaking compatibility when moving those classes around, since all of it is seamlessly handled by the IDE.

Getting ready

For this recipe we will need a Java project and a Java class so we can exemplify how moving elements really works. The existing code, created in the previous recipe, is going to be enough.
Also, you can try doing this with your own code, since moving classes is not such a complicated step that it can't be undone.

Let's create a project:

1. Create a new project, which can be achieved either by clicking File and then New Project or pressing Ctrl+Shift+N.
2. In the New Project window, choose Java on the Categories side and Java Application on the Projects side, then click Next.
3. Under Name and Location, name the project MovingElements and click Finish.
4. Now right-click on the movingelements package, select New... and Java Class....
5. In the New Java Class dialog, enter the class name as Person. Leave all the other fields with their default values and click Finish.

How to do it...

1. Place the cursor inside Person.java and press Ctrl+M.
2. Select a working project from the Project field.
3. Select Source Packages in the Location field.
4. Under the To Package field, enter classextraction.

How it works...

When clicking the Refactor button, the class is removed from the current project and placed in the project that was selected in the dialog. The package in that class is then updated to match.

Extracting a superclass

Extracting superclasses enables NetBeans to add different levels of hierarchy even after the code is written. Usually, requirements change in the middle of development, and rewriting classes to support inheritance would be quite complicated and time-consuming. NetBeans enables the developer to create those superclasses in a few clicks and, by understanding how this mechanism works, even to create superclasses that extend other superclasses.

Getting ready

We will need to create a project based on the Getting ready section of the previous recipe, since it is very similar. The only change from the previous recipe is that this recipe's project name will be SuperClassExtraction.

After project creation:

1. Right-click on the superclassextraction package, select New... and Java Class....
2. In the New Java Class dialog, enter the class name as DataAnalyzer. Leave all the other fields with their default values and click Finish.
3. Replace the entire content of DataAnalyzer.java with the following code:

   package superclassextraction;

   import java.util.ArrayList;

   public class DataAnalyzer {
      ArrayList<String> data;
      static final boolean CORRECT = true;
      static final boolean INCORRECT = false;

      private void fetchData() {
         //code
      }

      void saveData() {
      }

      public boolean parseData() {
         return CORRECT;
      }

      public String analyzeData(ArrayList<String> data, int offset) {
         //code
         return "";
      }
   }

Now let's extract our superclass.

How to do it...

1. Right-click inside DataAnalyzer.java, select Refactor and Extract Superclass....
2. When the Extract Superclass dialog appears, enter the Superclass Name as Analyzer.
3. Under Members to Extract, select all members, but leave saveData out.
4. Under the Make Abstract column, select analyzeData() and leave parseData(), saveData(), and fetchData() out. Then click Refactor.

How it works...

When the Refactor button is pressed, NetBeans copies the marked methods from DataAnalyzer.java and re-creates them in the superclass. NetBeans deals intelligently with methods marked as abstract. The abstract methods are moved up in the hierarchy and the implementation is left in the concrete class. In our example, analyzeData is moved to the abstract class but marked as abstract; the real implementation is then left in DataAnalyzer. NetBeans also supports the moving of fields, in our case the CORRECT and INCORRECT fields.
The following is the code in DataAnalyzer.java:

   public class DataAnalyzer extends Analyzer {
      public void saveData() {
         //code
      }

      public String analyzeData(ArrayList<String> data, int offset) {
         //code
         return "";
      }
   }

The following is the code in Analyzer.java:

   public abstract class Analyzer {
      static final boolean CORRECT = true;
      static final boolean INCORRECT = false;
      ArrayList<String> data;

      public Analyzer() {
      }

      public abstract String analyzeData(ArrayList<String> data, int offset);

      public void fetchData() {
         //code
      }

      public boolean parseData() {
         //code
         return DataAnalyzer.CORRECT;
      }
   }

There's more...

Let's learn how to implement parent class methods.

Implementing parent class methods

Let's add a method to the parent class:

1. Open Analyzer.java and enter the following code:

   public void clearData() {
      data.clear();
   }

2. Save the file.
3. Open DataAnalyzer.java, press Alt+Insert and select Override Method....
4. In the Generate Override Methods dialog, select the clearData() option and click Generate.
5. NetBeans will then override the method and add the implementation to DataAnalyzer.java:

   @Override
   public void clearData() {
      super.clearData();
   }
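To see what the extraction buys us, here is a short, hedged sketch of client code programming against the new Analyzer abstraction rather than the concrete DataAnalyzer. The client class and sample data below are illustrative assumptions, not part of the recipe:

   package superclassextraction;

   import java.util.ArrayList;

   public class AnalyzerClient {
      public static void main(String[] args) {
         // Any Analyzer subtype can be swapped in here without changing
         // the rest of the client code; that is the point of the extraction.
         Analyzer analyzer = new DataAnalyzer();

         ArrayList<String> data = new ArrayList<>();
         data.add("sample");

         if (analyzer.parseData()) {
            // Dispatches to the concrete analyzeData left in DataAnalyzer.
            System.out.println(analyzer.analyzeData(data, 0));
         }
      }
   }

Because analyzeData is abstract in Analyzer, the compiler now guarantees that every future subclass supplies its own implementation, while shared behavior like parseData stays in one place.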

Cloud Native Applications

Packt
09 Feb 2017
5 min read
In this article by Ranga Rao Karanam, the author of the book Mastering Spring, we will see what Cloud Native applications and the Twelve Factor App are.

(For more resources related to this topic, see here.)

Cloud Native applications

Cloud is disrupting the world. A number of possibilities emerge that were never possible before. Organizations are able to provision computing, network, and storage devices on demand. This has high potential to reduce costs in a number of industries. Consider the retail industry, where there is high demand in pockets (Black Friday, the holiday season, and so on). Why should retailers pay for hardware all year round when they could provision it on demand?

While we would like to benefit from the possibilities of the cloud, these possibilities are limited by the architecture and the nature of applications. How do we build applications that can be easily deployed on the cloud? That's where Cloud Native applications come into the picture. Cloud Native applications are those that can easily be deployed on the cloud. These applications share a few common characteristics. We will begin by looking at the Twelve Factor App - a combination of common patterns among Cloud Native applications.

Twelve Factor App

The Twelve Factor App evolved from the experiences of engineers at Heroku. It is a list of patterns that are typically used in Cloud Native application architectures. It is important to note that an App here refers to a single deployable unit. Essentially, every microservice is an App (because each microservice is independently deployable).

One codebase

Each App has one codebase in revision control. There can be multiple environments where the App can be deployed; however, all these environments use code from a single codebase. An example of an anti-pattern is building a deployable from multiple codebases.

Dependencies

Explicitly declare and isolate dependencies. Typical Java applications use build management tools like Maven and Gradle to isolate and track dependencies.

Config

All applications have configuration that varies from one environment to another. Configuration is typically littered across multiple locations - application code, property files, databases, environment variables, Java Naming and Directory Interface (JNDI), and system variables are a few examples. A Twelve Factor App should store config in the environment. While environment variables are recommended for managing configuration in a Twelve Factor App, other alternatives, like having a centralized repository for application configuration, should be considered for more complex systems. Irrespective of the mechanism used, we recommend managing configuration outside application code (independent of the application deployable unit), and using one standardized way of configuration. A minimal sketch of environment-based configuration appears at the end of this article.

Backing services

Typically, applications depend on other services being available - data stores and external services, among others. A Twelve Factor App treats backing services as attached resources. A backing service is typically declared via external configuration. Loose coupling to a backing service has many advantages, including the ability to gracefully handle an outage of the backing service.

Build, release, run

Strictly separate the build and run stages:

Build: Creates an executable bundle (ear, war, or jar) from code and dependencies that can be deployed to multiple environments.
Release: Combines the executable bundle with environment-specific configuration to deploy in an environment.

Run: Runs the application in an execution environment using a specific release.

An anti-pattern is to build separate executable bundles specific to each environment.

Stateless

A Twelve Factor App does not have state. All data that it needs is stored in a persistent store. An anti-pattern is a sticky session.

Port binding

A Twelve Factor App exposes all services using port binding. While it is possible to have other mechanisms to expose services, these mechanisms are implementation dependent. Port binding gives full control over receiving and handling messages, irrespective of where an application is deployed.

Concurrency

A Twelve Factor App is able to achieve more concurrency by scaling out horizontally. Scaling vertically has its limits; scaling out horizontally provides opportunities to expand without limits.

Disposability

A Twelve Factor App should promote elastic scaling. Hence, its processes should be disposable: they can be started and stopped when needed. A Twelve Factor App should:

- Have minimal start-up time. Long start-up times mean a long delay before an application can take requests.
- Shut down gracefully.
- Handle hardware failures gracefully.

Environment parity

All the environments - development, test, staging, and production - should be similar. They should use the same processes and tools. With continuous deployment, they should have similar code very frequently. This makes finding and fixing problems easier.

Logs as event streams

Visibility is critical to a Twelve Factor App. Since applications are deployed on the cloud and are automatically scaled, it is important to have centralized visibility into what's happening across the different instances of the applications. Treating all logs as streams enables routing of the log stream to different destinations for viewing and archival. This stream can be used to debug issues, perform analytics, and create alerting systems based on error patterns.

No distinction of admin processes

Twelve Factor Apps treat administrative tasks (migrations, scripts) the same as normal application processes.

Summary

This article explained Cloud Native applications and the Twelve Factor App.

Resources for Article:

Further resources on this subject:

Cloud and Async Communication [article]
Setting up of Software Infrastructure on the Cloud [article]
Integrating Accumulo into Various Cloud Platforms [article]
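As promised in the Config section, here is a minimal sketch, in Java, of reading environment-specific settings from environment variables rather than from code or property files. The variable names (DB_URL, HTTP_PORT) and the fallback port are illustrative assumptions, not prescriptions from the book:

   // A minimal sketch of Factor III (Config): environment-specific settings
   // come from environment variables instead of being hard-coded.
   public class AppConfig {

      // DB_URL is a hypothetical variable name for a backing-service address.
      public static String databaseUrl() {
         return require("DB_URL");
      }

      // Port binding (Factor VII): the port itself is configuration.
      public static int httpPort() {
         return Integer.parseInt(System.getenv().getOrDefault("HTTP_PORT", "8080"));
      }

      private static String require(String name) {
         String value = System.getenv(name);
         if (value == null || value.isEmpty()) {
            // Failing fast makes a misconfigured release visible at startup.
            throw new IllegalStateException("Missing required environment variable: " + name);
         }
         return value;
      }
   }

A deployment can then vary the database URL or port per environment without rebuilding the bundle, which is exactly the build/release separation described above.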
The new WebSocket Inspector will be released in Firefox 71

Fatema Patrawala
17 Oct 2019
4 min read
On Tuesday, the Firefox DevTools team announced that the new WebSocket (WS) inspector will be available in Firefox 71. It is currently ready for developers to use in Firefox Developer Edition.

The WebSocket API is used to create a persistent connection between a client and a server. Because the API sends and receives data at any time, it is used mainly in applications requiring real-time communication. Although it is possible to work directly with the WS API, some existing libraries come in handy and help save time. These libraries can help with connection failures, proxies, authentication and authorization, scalability, and much more. The WS inspector in Firefox DevTools currently supports Socket.IO and SockJS, and more support is still a work in progress.

Key features included in the Firefox WebSocket Inspector

1. The WebSocket Inspector is part of the existing Network panel UI in DevTools. It was already possible to filter the content for opened WS connections in the panel, but now you can also see the actual data transferred through WS frames.
2. The WS UI now offers a fresh new Messages panel that can be used to inspect WS frames sent and received through the selected WS connection.
3. There are Data and Time columns visible by default, and you can customize the interface to see more columns by right-clicking on the header.
4. The WS inspector currently supports the following WS protocols: plain JSON, Socket.IO, and SockJS (SignalR and WAMP will be supported soon).
5. You can use the pause/resume button in the Network panel toolbar to stop intercepting WS traffic.

The Firefox team is still working on a few things for this release, for example a binary payload viewer, indicating closed connections, more protocols like SignalR and WAMP, exporting WS frames, and more.

For developers, this is a major improvement, and the community is really happy with this news. One of them comments on Reddit, "Finally! Have been stuck rolling with Chrome whenever I'm debugging websocket issues until now, because it's just so damn useful to see the exact messages sent and received."

Another user commented, "This came at the most perfect time... trying to interface with a Socket.IO server from a Flutter app is difficult without tools to really look at the internals and see what’s going on"

Some of them also feel that with such improvements Firefox will soon challenge the current Chromium dominance. The comment reads, "I hope that in improving its dev tooling with things like WS inspection, Firefox starts to turn the tide from the Chromium's current dominance. Pleasing webdevs seems to be the key to winning browser wars. The general pattern is, the devs switch to their preferred browser. When building sites, they do all their build testing against their favourite browser, and only make sure it functions on other browsers (however poorly) as an afterthought. Then everyone else switches to suit, because it's a better experience. It happened when IE was dominant (partly becuse of dodgy business practices, but also partly because ActiveX was more powerful than early JS). But then Firefox was faster and had [better] devtools and add-ons, so the devs switched to Firefox and everyone followed suit. Then Chrome came onto the scene as a faster browser with even better devtools, and now Chromium+Forks is over three quarters of the browser market share. A browser monopoly is bad for the web ecosystem, no matter what browser happens to be dominant."

To know more about this news, check out the official announcement on the Firefox blog.
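As a reminder of the API shape being inspected - a persistent connection over which text frames flow in both directions - here is a minimal, hedged client sketch. It uses Java 11's built-in java.net.http.WebSocket rather than the browser API, and the echo endpoint URL is a placeholder assumption, not a real service:

   import java.net.URI;
   import java.net.http.HttpClient;
   import java.net.http.WebSocket;
   import java.util.concurrent.CompletionStage;

   public class WsDemo {
      public static void main(String[] args) throws Exception {
         WebSocket.Listener listener = new WebSocket.Listener() {
            @Override
            public CompletionStage<?> onText(WebSocket ws, CharSequence data, boolean last) {
               System.out.println("received frame: " + data);
               ws.request(1); // ask the implementation for the next message
               return null;
            }
         };

         // Open the connection; the URL below is a hypothetical echo endpoint.
         WebSocket ws = HttpClient.newHttpClient()
               .newWebSocketBuilder()
               .buildAsync(URI.create("wss://example.org/echo"), listener)
               .join();

         ws.sendText("hello", true); // one complete text frame
         Thread.sleep(2000);         // crude wait for the echo, sufficient for a sketch
         ws.sendClose(WebSocket.NORMAL_CLOSURE, "done").join();
      }
   }

The frames this client exchanges are exactly the kind of payloads the new Messages panel surfaces when the connection is opened by a page script instead.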
Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users Cloudflare and Google Chrome add HTTP/3 and QUIC support; Mozilla Firefox soon to follow suit Mozilla brings back Firefox’s Test Pilot Program with the introduction of Firefox Private Network Beta  

7 things Java programmers need to watch for in 2019

Prasad Ramesh
24 Jan 2019
7 min read
Java is one of the most popular and widely used programming languages in the world. Its dominance of the TIOBE index ranking is mostly unmatched, having held the number 1 position for almost 20 years. Although Java's dominance is unlikely to waver over the next 12 months, there are many important issues and announcements that will demand the attention of Java developers. So, get ready for 2019 with this list of key things in the Java world to watch out for.

#1 Commercial Java SE users will now need a license

Perhaps the most important change for Java in 2019 is that commercial users will have to pay a license fee to use Java SE from February. This move comes as Oracle decided to change the support model for the Java language. The change currently affects Java SE 8, which is an LTS release with premier and extended support up to March 2022 and March 2025 respectively. For individual users, however, support and updates will continue until December 2020. The recently released Java SE 11 will also have long-term support, with five years of premier and eight years of extended support from the release date.

#2 The Java 12 release in March 2019

Since Oracle changed their support model, non-LTS version releases will be bi-yearly and probably won't contain many major changes. JDK 12 is non-LTS; that is not to say that the changes in it are trivial, as it comes with its own set of new features. It will be generally available in March this year and supported until September, which is when Java 13 will be released. Java 12 will have a couple of new features: some of them are approved to ship in its March release and some are under discussion.

#3 Java 13 release slated for September 2019, with early access out now

So far, there is very little information about Java 13. All we really know at the moment is that it's due to be released in September 2019. Like Java 12, Java 13 will be a non-LTS release. However, if you want an early insight, there is an early access build available to test right now. Some of the JEPs (JDK Enhancement Proposals) in the next section may be set to be featured in Java 13, but that's just speculation.

https://twitter.com/OpenJDK/status/1082200155854639104

#4 A bunch of new features in Java in 2019

Even though the major long-term support version of Java, Java 11, was released last year, the releases this year also have some noteworthy new features in store. Let's take a look at what the two releases this year might have.

Confirmed candidates for Java 12

- A new low-pause-time garbage collector called Shenandoah is added to cause minimal interruption when a program is running, matching modern computing resources. The pause time will be the same irrespective of the heap size, which is achieved by reducing GC pause times.
- The Microbenchmark Suite feature will make it easier for developers to run existing testing benchmarks or create new ones.
- Revamped switch statements should help simplify the process of writing code. It essentially means the switch statement can also be used as an expression (a short sketch of the new syntax appears at the end of this article).
- The JVM Constants API will, the OpenJDK website explains, "introduce a new API to model nominal descriptions of key class-file and run-time artifacts".
- Java 12 integrates a single AArch64 port, instead of two.
- Default CDS archives.
- G1 mixed collections.

Other features that may not be out with Java 12

- Raw string literals will be added to Java.
- A Packaging Tool, designed to make it easier to install and run a self-contained Java application on a native platform.
- Limit Speculative Execution, to help both developers and operations engineers more effectively secure applications against speculative-execution vulnerabilities.

#5 More contributions and features with OpenJDK

OpenJDK is an open source implementation of Java Standard Edition (Java SE) which has contributions from both Oracle and the open-source community. As of now, the binaries of OpenJDK are available for the newest LTS release, Java 11. Even the life cycles of OpenJDK 7 and 8 have been extended, to June 2020 and June 2023 respectively. This suggests that Oracle is interested in the idea of open source and community participation. And why would it not be? Many valuable contributions come from the open source community; Microsoft, for example, seems to have benefitted from open sourcing through the incoming submissions. Although Oracle will not support these versions after six months from the initial release, Red Hat will be extending support.

As the chief architect of the Java platform, Mark Reinhold, said, stewards are the true leaders who can shape what Java should be as a language. These stewards can propose new JEPs and bring OpenJDK problems to notice, leading to more JEPs, and contribute to the language overall.

#6 Mobile and machine learning job opportunities

In the mobile ecosystem, especially Android, Java is still the most widely used language. Yes, there's Kotlin, but it is still relatively new, and many developers are yet to adopt it. According to an estimate by Indeed, the average salary of a Java developer is about $100K in the U.S. With the Android ecosystem growing rapidly over the last decade, it's not hard to see what's driving Java's value.

But Java - and the broader Java ecosystem - are about much more than mobile. Although Java's importance in enterprise application development is well known, it's also used in machine learning and artificial intelligence. Even if Python is arguably the most used language in this area, Java does have its own set of libraries and is used a lot in enterprise environments. Deeplearning4j, Neuroph, Weka, OpenNLP, RapidMiner, and RL4J are some of the popular Java libraries in artificial intelligence.

#7 Java conferences in 2019

Now that we've talked about the language, possible releases, and new features, let's take a look at the conferences that are going to take place in 2019. Conferences are a good medium for hearing top professionals present and speak, and for programmers to socialize. Even if you can't attend, they are important fixtures in the calendar for anyone interested in following releases and debates in Java. Here are some of the major Java conferences in 2019 worth checking out:

- JAX is a Java architecture and software innovation conference. It will be held in Mainz, Germany, May 6-10 this year, with the Expo running from May 7 to 9. Other than Java, topics like agile, cloud, Kubernetes, DevOps, microservices, and machine learning are also part of this event. They're offering discounts on passes till February 14.
- JBCNConf is happening in Barcelona, Spain, from May 27. It will be a three-day conference with talks from notable Java champions. The focus of the conference is on Java, the JVM, and open-source technologies.
- Jfokus is a developer-centric conference taking place in Stockholm, Sweden. It will be a three-day event from February 4-6. Speakers include the Java language architect Brian Goetz from Oracle and many other notable experts. The conference will cover Java, of course, plus frontend and web, cloud and DevOps, IoT and AI, and future trends.
- One of the biggest conferences is JavaZone, which attracts thousands of visitors and hundreds of speakers and will be 18 years old this year. It is usually held in Oslo, Norway, in the month of September. Their website for 2019 is not active at the time of writing, but you can check out last year's website.
- Javaland will feature lectures, training, and community activities. Held in Bruehl, Germany, from March 19 to 21, attendees can also exhibit at this conference.

If you're working in or around Java this year, there's clearly a lot to look forward to - as well as a few unanswered questions about the evolution of the language in the future. While these changes might not impact the way you work in the immediate term, keeping on top of what's happening and what key figures are saying will set you up nicely for the future.

4 key findings from The State of JavaScript 2018 developer survey

Netflix adopts Spring Boot as its core Java framework

Java 11 is here with TLS 1.3, Unicode 11, and more updates
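As promised in section #4, here is a minimal sketch of the revamped switch, based on the syntax proposed for Java 12's preview of switch expressions (JEP 325). It would need to be compiled with --enable-preview on a JDK 12 build, and the enum and values below are the illustrative kind used in the JEP itself:

   public class SwitchDemo {
      enum Day { MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY }

      static int letters(Day day) {
         // switch used as an expression: arrow labels, no fall-through,
         // and a value yielded directly from each case.
         return switch (day) {
            case MONDAY, FRIDAY, SUNDAY -> 6;
            case TUESDAY                -> 7;
            case THURSDAY, SATURDAY     -> 8;
            case WEDNESDAY              -> 9;
         };
      }

      public static void main(String[] args) {
         System.out.println(letters(Day.WEDNESDAY)); // prints 9
      }
   }

Because the switch is an expression over an enum, the compiler checks that every constant is covered, removing the need for a default branch and the classic forgotten-break bug.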

Cross-validation in R for predictive models

Pravin Dhandre
17 Apr 2018
8 min read
In today's tutorial, we will efficiently train our first predictive model, using cross-validation in R as the basis of our modeling process, and we will build the corresponding confusion matrix. Most of the functionality comes from the excellent caret package; the caret package has many more features than we will explore in this tutorial.

Before moving on to the training tutorial, let's understand what a confusion matrix is. A confusion matrix is a summary of prediction results on a classification problem. The numbers of correct and incorrect predictions are summarized with count values and broken down by each class. This is the key to the confusion matrix: it shows the ways in which your classification model is confused when it makes predictions. It gives you insight not only into the errors being made by your classifier but, more importantly, into the types of errors that are being made.

Training our first predictive model

Following best practices, we will use cross-validation (CV) as the basis of our modeling process. Using CV we can create estimates of how well our model will do with unseen data. CV is powerful, but the downside is that it requires more processing and therefore more time. If you can take the computational complexity, you should definitely take advantage of it in your projects.

Going into the mathematics behind CV is outside the scope of this tutorial. If interested, you can find more information on cross-validation on Wikipedia. The basic idea is that the training data will be split into various parts, and each of these parts will be taken out of the rest of the training data one at a time, keeping all the remaining parts together. The parts that are kept together will be used to train the model, while the part that was taken out will be used for testing, and this is repeated by rotating the parts such that every part is taken out once. This allows you to test the training procedure more thoroughly, before doing the final testing with the testing data.

We use the trainControl() function to set up our repeated CV mechanism with five splits and two repeats. This object will be passed to our predictive models, created with the caret package, to automatically apply this control mechanism within them:

   cv.control <- trainControl(method = "repeatedcv", number = 5, repeats = 2)

Our predictive model pick for this example is Random Forests (RF). We will very briefly explain what RF are, but the interested reader is encouraged to look into James, Witten, Hastie, and Tibshirani's excellent "Statistical Learning" (Springer, 2013). RF are a non-linear model used to generate predictions. A tree is a structure that provides a clear path from inputs to specific outputs through a branching model. In predictive modeling, trees are used to find limited input-space areas that perform well when providing predictions. RF create many such trees and use a mechanism to aggregate the predictions provided by these trees into a single prediction. They are a very powerful and popular machine learning model.

Let's have a look at the random forests example:

Random forests aggregate trees

To train our model, we use the train() function, passing a formula that signals R to use MULT_PURCHASES as the dependent variable and everything else (~ .) as the independent variables, which are the token frequencies.
It also specifies the data, the method ("rf" stands for random forests), the control mechanism we just created, and the number of tuning scenarios to use:

   model.1 <- train(
      MULT_PURCHASES ~ .,
      data = train.dfm.df,
      method = "rf",
      trControl = cv.control,
      tuneLength = 5
   )

Improving speed with parallelization

If you actually executed the previous code on your computer before reading this, you may have found that it took a long time to finish (8.41 minutes in our case). As we mentioned earlier, text analysis suffers from very high-dimensional structures, which take a long time to process. Furthermore, using CV runs will take a long time. To cut down on the total execution time, use the doParallel package to allow multi-core computers to do the training in parallel and substantially cut down on time.

We proceed to create the train_model() function, which takes the data and the control mechanism as parameters. It then makes a cluster object with the makeCluster() function, with a number of available cores (processors) equal to the number of cores in the computer, detected with the detectCores() function. Note that if you're planning on using your computer to do other tasks while you train your models, you should leave one or two cores free to avoid choking your system (you can use makeCluster(detectCores() - 2) to accomplish this). After that, we start our time-measuring mechanism, train our model, print the total time, stop the cluster, and return the resulting model:

   train_model <- function(data, cv.control) {
      cluster <- makeCluster(detectCores())
      registerDoParallel(cluster)
      start.time <- Sys.time()
      model <- train(
         MULT_PURCHASES ~ .,
         data = data,
         method = "rf",
         trControl = cv.control,
         tuneLength = 5
      )
      print(Sys.time() - start.time)
      stopCluster(cluster)
      return(model)
   }

Now we can retrain the same model much faster. The time reduction will depend on your computer's available resources. In the case of an 8-core system with 32 GB of memory available, the total time was 3.34 minutes instead of the previous 8.41 minutes, which implies that with parallelization it only took 39% of the original time. Not bad, right? Let's have a look at how the model is trained:

   model.1 <- train_model(train.dfm.df, cv.control)

Computing predictive accuracy and confusion matrices

Now that we have our trained model, we can see its results and ask it to compute some predictive accuracy metrics. We start by simply printing the object we get back from the train() function. As can be seen, we have some useful metadata, but what we are concerned with right now is the predictive accuracy, shown in the Accuracy column. From the five values we told the function to use as testing scenarios, the best model was reached when we used 356 out of the 2,007 available features (tokens). In that case, our predictive accuracy was 65.36%.

If we take into account the fact that the proportions in our data were around 63% of cases with multiple purchases, we have made an improvement: if we just guessed the class with the most observations (MULT_PURCHASES being true) for all observations, we would only have 63% accuracy, but using our model we were able to improve this toward 65%. Keep in mind that this is a randomized process, and the results will be different every time you train these models.
That's why we want repeated CV, as well as various testing scenarios, to make sure that our results are robust:

   model.1
   #> Random Forest
   #>
   #> 212 samples
   #> 2007 predictors
   #> 2 classes: 'FALSE', 'TRUE'
   #>
   #> No pre-processing
   #> Resampling: Cross-Validated (5 fold, repeated 2 times)
   #> Summary of sample sizes: 170, 169, 170, 169, 170, 169, ...
   #> Resampling results across tuning parameters:
   #>
   #>   mtry  Accuracy   Kappa
   #>      2  0.6368771  0.00000000
   #>     11  0.6439092  0.03436849
   #>     63  0.6462901  0.07827322
   #>    356  0.6536545  0.16160573
   #>   2006  0.6512735  0.16892126
   #>
   #> Accuracy was used to select the optimal model using the largest value.
   #> The final value used for the model was mtry = 356.

To create a confusion matrix, we can use the confusionMatrix() function and send it the model's predictions first and the real values second. This will not only create the confusion matrix for us, but also compute some useful metrics such as sensitivity and specificity. We won't go deep into what these metrics mean or how to interpret them, since that's outside the scope of this tutorial, but we highly encourage the reader to study them using the resources cited in this tutorial:

   confusionMatrix(model.1$finalModel$predicted, train$MULT_PURCHASES)
   #> Confusion Matrix and Statistics
   #>
   #>           Reference
   #> Prediction FALSE TRUE
   #>      FALSE    18   19
   #>      TRUE     59  116
   #>
   #>                Accuracy : 0.6321
   #>                  95% CI : (0.5633, 0.6971)
   #>     No Information Rate : 0.6368
   #>     P-Value [Acc > NIR] : 0.5872
   #>
   #>                   Kappa : 0.1047
   #>  Mcnemar's Test P-Value : 1.006e-05
   #>
   #>             Sensitivity : 0.23377
   #>             Specificity : 0.85926
   #>          Pos Pred Value : 0.48649
   #>          Neg Pred Value : 0.66286
   #>              Prevalence : 0.36321
   #>          Detection Rate : 0.08491
   #>    Detection Prevalence : 0.17453
   #>       Balanced Accuracy : 0.54651
   #>
   #>        'Positive' Class : FALSE

You read an excerpt from R Programming By Example, authored by Omar Trejo Navarro. The book gets you familiar with R's fundamentals and its advanced features, giving you hands-on experience with R's cutting-edge tools for software development.

Getting Started with Predictive Analytics

Here's how you can handle the bias-variance trade-off in your ML models
What’s new in IntelliJ IDEA 2018.2

Sugandha Lahoti
26 Jul 2018
4 min read
JetBrains has released the second version of their popular IDE for this year. IntelliJ IDEA 2018.2 is full of changes, including support for Java 11 and updates to the editor, user interface, JVM debugger, Gradle, and more. Let's have a quick look at the different features and updates.

Updates to Java

- IntelliJ IDEA 2018.2 brings support for the upcoming Java 11. The IDE now supports the local-variable syntax for lambda parameters according to JEP 323 (a short sketch of this syntax appears at the end of this article).
- Dataflow information can now be viewed in the editor.
- Quick Documentation can now be configured to pop up together with autocompletion.
- Extract Method has a new preview panel to check the results of the refactoring before actual changes are made.
- The @Contract annotation adds new return values: new, this, and paramX.
- The IDE has also updated its inspections and intention actions, including a smarter Join Lines action and Stream API support, among many others.

Improvements to the editor

- IntelliJ IDEA now underlines reassigned local variables and reassigned parameters by default.
- While typing, users can use Tab to navigate outside closing brackets or closing quotes.
- for or while keywords are highlighted when the caret is placed on the corresponding break or continue keyword.

Changes to the user interface

- IntelliJ IDEA 2018.2 comes with support for the MacBook Touch Bar.
- Users can now use dark window headers on macOS.
- There are new, cleaner and simpler icons on the IDE toolbar and tool windows for better readability.
- The IntelliJ theme on Linux has been updated to look more modern.

Updates to the version control system

- The updated Files Merged with Conflicts dialog displays Git branch names and adds a new Group files by directory option.
- Users can now open several Log tabs in the Version Control tool window.
- The IDE now displays favorite branches in the Branch filter on the Log tab.
- While using the Commit and Push action, users can either skip the Push dialog completely or show the dialog only when pushing to protected branches.
- The IDE also adds support for configuring multiple GitHub accounts.

Improvements in the JVM debugger

- IntelliJ IDEA 2018.2 includes several new breakpoint intention actions for debugging Java projects.
- Users now have the ability to filter a breakpoint hit by the caller method.

Changes to Gradle

- Included buildSrc Gradle projects are now discovered automatically.
- Users can now debug Gradle DSL blocks.

Updates to Kotlin

- The Kotlin plugin bundled with the IDE has been updated to v1.2.51.
- Users can now run Kotlin Script scratch files and see the results right inside the editor.
- An intention to convert end-of-line comments into block comments, and vice versa, has been added.
- New coroutine inspections and intentions have been added.

Improvements in the Scala plugin

- The Scala plugin can show implicits right in the editor, and can even show places where implicits are not found.
- The Scalafmt formatter has been integrated.
- Semantic highlighting has been updated, and auto-completion for pattern matching has been improved.

JavaScript and TypeScript changes

- The new Extract React Component refactoring can be used to break a component into two.
- A new intention to convert React class components into functional components has been added.
- New features can be added to an Angular app using the integration with ng add.
- New JavaScript and TypeScript intentions: Implement interface, Create derived class, Implement members of an interface or abstract class, Generate cases for 'switch', and Iterate with 'for..of'.
Improvements to Editor

- IntelliJ IDEA now underlines reassigned local variables and reassigned parameters by default.
- While typing, users can press Tab to navigate outside the closing brackets or closing quotes.
- The for or while keyword is highlighted when the caret is placed on the corresponding break or continue keyword.

Changes to User Interface

- IntelliJ IDEA 2018.2 comes with support for the MacBook Touch Bar.
- Users can now use dark window headers on macOS.
- There are new, cleaner, and simpler icons on the IDE toolbar and tool windows for better readability.
- The IntelliJ theme on Linux has been updated to look more modern.

Updates to the Version Control System

- The updated Files Merged with Conflicts dialog displays Git branch names and adds a new Group files by directory option.
- Users can now open several Log tabs in the Version Control tool window.
- The IDE now displays favorite branches in the Branch filter on the Log tab.
- While using the Commit and Push action, users can either skip the Push dialog completely or show it only when pushing to protected branches.
- The IDE also adds support for configuring multiple GitHub accounts.

Improvements in the JVM debugger

- IntelliJ IDEA 2018.2 includes several new breakpoint intention actions for debugging Java projects.
- Users can now filter a breakpoint hit by the caller method.

Changes to Gradle

- Included buildSrc Gradle projects are now discovered automatically.
- Users can now debug Gradle DSL blocks.

Updates to Kotlin

- The Kotlin plugin bundled with the IDE has been updated to v1.2.51.
- Users can now run Kotlin Script scratch files and see the results right inside the editor.
- An intention to convert end-of-line comments into block comments, and vice versa, has been added.
- New coroutine inspections and intentions have been added.

Improvements in the Scala plugin

- The Scala plugin can show implicits right in the editor and can even show places where implicits are not found.
- The Scalafmt formatter has been integrated.
- Semantic highlighting has been updated, and auto-completion for pattern matching has been improved.

JavaScript & TypeScript changes

- The new Extract React component refactoring can be used to break a component into two.
- A new intention to convert React class components into functional components has been added.
- New features can be added to an Angular app using the integration with ng add.
- New JavaScript and TypeScript intentions: Implement interface, Create derived class, Implement members of an interface or abstract class, Generate cases for 'switch', and Iterate with 'for..of'.
- A new Code Coverage feature helps find unused code in client-side apps.

These are just a select few updates from the IntelliJ IDEA 2018.2 release. A complete list of all the changes can be found in the release notes, and the JetBrains blog offers a concise version.

How to set up the Scala Plugin in IntelliJ IDE [Tutorial]
Eclipse IDE's Photon release will support Rust
GitLab open sources its Web IDE in GitLab 10.7

Red Hat announces CentOS Stream, a “developer-forward distribution” jointly with the CentOS Project

Savia Lobo
25 Sep 2019
3 min read
On September 24, just after the much-awaited CentOS 8 was released, the Red Hat community, in agreement with the CentOS Project, announced a new model in the CentOS Linux community called CentOS Stream.

CentOS Stream is an upstream development platform for ecosystem developers. It is a single, continuous stream of content, with updates several times daily, encompassing the latest and greatest from the RHEL codebase. It also offers a view into what the next version of RHEL will look like, available to a much broader community than a beta or "preview" release.

Chris Wright, Red Hat's CTO, says CentOS Stream is "a developer-forward distribution that aims to help community members, Red Hat partners, and others take full advantage of open source innovation within a more stable and predictable Linux ecosystem. It is a parallel distribution to existing CentOS."

With previous CentOS releases, developers had no advance view of what was coming in RHEL. Because CentOS Stream sits between the Fedora Project and RHEL in the RHEL development process, it provides a "rolling preview" of future RHEL kernels and features. This enables developers to stay one or two steps ahead of what's coming in RHEL.

"CentOS Stream is parallel to existing CentOS builds; this means that nothing changes for current users of CentOS Linux and services, even those that begin to explore the newly-released CentOS 8. We encourage interested users that want to be more tightly involved in driving the future of enterprise Linux, however, to transition to CentOS Stream as the new 'pace-setting' distribution," the Red Hat blog states.

CentOS Stream is part of Red Hat's broader focus on engaging with communities and developers in a way that better aligns with the modern IT world.

A user on Hacker News commented, "I like it, at least in theory. I develop some industrial software that runs on RHEL so being able to run somewhat similar distribution on my machine would be convenient. I tried running CentOS but it was too frustrating and limiting to deal with all the outdated packages on a dev machine. I suppose it will also be good for devs who just like the RHEL environment but don't need a super stable, outdated packages."

Another user commented, "I wonder what future Fedora will have if this new CentOS Stream will be stable enough for developer daily driver. 6 month release cycle of Fedora always felt awkwardly in-between, not having the stability of lts nor the continuity of rolling. I guess lot depends on details on how the packages flow to CentOS Stream, do they come from released Fedora versions or rawhide etc."

To know more about CentOS Stream in detail, read Red Hat's official blog post.

After RHEL 8 release, users awaiting the release of CentOS 8
After Red Hat, Homebrew removes MongoDB from core formulas due to its Server Side license adoption
Red Hat announces the general availability of Red Hat OpenShift Service Mesh
Introducing ESPRESSO, an open-source, PyTorch based, end-to-end neural automatic speech recognition (ASR) toolkit for distributed training across GPUs
.NET Core 3.0 is now available with C# 8, F# 4.7, ASP.NET Core 3.0 and general availability of EF Core 3.0 and EF 6.3