
How-To Tutorials - Programming

1083 Articles

18 people in tech every programmer and software engineer needs to follow in 2019

Richard Gall
02 Jan 2019
9 min read
After a tumultuous 2018 in tech, it's vital that you surround yourself with a variety of opinions and experiences in 2019 if you're to understand what the hell is going on. While there are thousands of incredible people working in tech, I've decided to make life a little easier for you by bringing together 18 of the best people from across the industry to follow on Twitter. From engineers at Microsoft and AWS to researchers and journalists, this list is by no means comprehensive, but it does give you a wide range of people that have been influential, interesting, and important in 2018.

(A few of) the best people in tech on Twitter

April Wensel (@aprilwensel)
April Wensel is the founder of Compassionate Coding, an organization that aims to bring emotional intelligence and ethics into the tech industry. In April 2018 Wensel wrote an essay arguing that "it's time to retire RTFM" (read the fucking manual). The essay was well received by many in the tech community tired of a culture of ostensibly caustic machismo, and it played a part in making conversations around community accessibility an important part of 2018. Watch her keynote at NodeJS Interactive: https://www.youtube.com/watch?v=HPFuHS6aPhw

Liz Fong-Jones (@lizthegrey)
Liz Fong-Jones is an SRE and Dev Advocate at Google Cloud Platform, but over the last couple of years she has become an important figure within tech activism. First helping to create the NeverAgain pledge in response to the election of Donald Trump in 2016, then helping to bring to light Google's fraught internal struggle over diversity, Fong-Jones has effectively laid the foundations for the mainstream appearance of tech activism in 2018. In an interview with Fast Company, Fong-Jones says she has accepted her role as a spokesperson for the movement that has emerged, but she's committed to "equipping other employees to fight for change in their workplaces–whether at Google or not–so that I'm not a single point of failure."

Ana Medina (@Ana_M_Medina)
Ana Medina is a chaos engineer at Gremlin. Since moving to the chaos engineering platform from Uber (where she was part of the class action lawsuit against the company), Medina has played an important part in explaining what chaos engineering looks like in practice all around the world. But she is also an important voice in discussions around diversity and mental health in the tech industry - if you get a chance to hear her talk, make sure you take it, and if you don't, you've still got Twitter...

Sarah Drasner (@sarah_edo)
Sarah Drasner does everything. She's a Developer Advocate at Microsoft, part of the VueJS core development team, organizer behind Concatenate (a free conference for Nigerian developers), and an author too. https://twitter.com/sarah_edo/status/1079400115196944384 Although Drasner specializes in front end development and JavaScript, she's a great person to follow on Twitter for her broad insights on how we learn and evolve as software developers. Do yourself a favour and follow her.

Mark Imbriaco (@markimbriaco)
Mark Imbriaco is the technical director at Epic Games. Given the company's truly, er, epic year thanks to Fortnite, Imbriaco can offer an insight on how one of the most important and influential technology companies on the planet is thinking.

Corey Quinn (@QuinnyPig)
Corey Quinn is an AWS expert.
As the brain behind the Last Week in AWS newsletter and the voice behind the Screaming in the Cloud podcast (possibly the best cloud computing podcast on the planet), he is without a doubt the go-to person if you want to know what really matters in cloud. The range of guests that Quinn gets on the podcast is really impressive, and sums up his online persona: open, engaged, and always interesting.

Yasmine Evjen (@YasmineEvjen)
Yasmine Evjen is a Design Advocate at Google. That means that she is not only one of the minds behind Material Design, she is also someone that is helping to demonstrate the importance of human centered design around the world. She also presents Centered, a web series by the Google Design team about the ways human centered design is used for a range of applications. If you haven't seen it, it's well worth a watch. https://www.youtube.com/watch?v=cPBXjtpGuSA&list=PLJ21zHI2TNh-pgTlTpaW9kbnqAAVJgB0R&index=5&t=0s

Suz Hinton (@noopkat)
Suz Hinton works on IoT programs at Microsoft. That's interesting in itself, but when she's not developing fully connected smart homes (possibly), Hinton also streams code tutorials on Twitch (also as noopkat).

Chris Short (@ChrisShort)
If you want to get the lowdown on all things DevOps, you could do a lot worse than Chris Short. He boasts outstanding credentials - he's a CNCF ambassador and has experience with Red Hat and Ansible - but more important is the quality of his insights. A great place to begin is with DevOpsish, a newsletter Short produces, which features some really valuable discussions on the biggest issues and talking points in the field.

Dan Abramov (@dan_abramov)
Dan Abramov is one of the key figures behind ReactJS. Along with @sophiebits, @jordwalke, and @sebmarkbage, Abramov is quite literally helping to define front end development as we know it. If you're a JavaScript developer, or simply have any kind of passing interest in how we'll be building front ends over the next decade, he is an essential voice to have on your timeline. As you'd expect from someone that has helped put together one of the most popular JavaScript libraries in the world, Dan is very good at articulating some of the biggest challenges we face as developers and can provide useful insights on how to approach problems you might face, whether day to day or career changing.

Emma Wedekind (@EmmaWedekind)
As well as working at GoTo Meeting, Emma Wedekind is the founder of Coding Coach, a platform that connects developers to mentors to help them develop new skills. This experience makes Wedekind an important authority on developer learning. And at a time when deciding what to learn and how to do it can feel like such a challenging and complex process, surrounding yourself with people taking those issues seriously can be immensely valuable.

Jason Lengstorf (@jlengstorf)
Jason Lengstorf is a Developer Advocate at GatsbyJS (a cool project that makes it easier to build projects with React). His writing - on Twitter and elsewhere - is incredibly good at helping you discover new ways of working and approaching problems.

Bridget Kromhout (@bridgetkromhout)
Bridget Kromhout is another essential voice in cloud and DevOps. Currently working at Microsoft as Principal Cloud Advocate, Bridget also organizes DevOps Days and presents the Arrested DevOps podcast with Matty Stratton and Trevor Hess. Follow Bridget for her perspective on DevOps, as well as her experience in DevRel.
Ryan Burgess (@burgessdryan)
Netflix hasn't faced the scrutiny of many of its fellow tech giants this year, which means it's easy to forget the extent to which the company is at the cutting edge of technological innovation. This is why it's well worth following Ryan Burgess - as an engineering manager he's well placed to provide an insight on how the company is evolving from a tech perspective. His talk at Real World React on A/B testing user experiences is well worth watching: https://youtu.be/TmhJN6rdm28

Anil Dash (@anildash)
Okay, so chances are you probably already follow Anil Dash - he does have half a million followers, after all - but if you don't, you most definitely should. Dash is a key figure in new media and digital culture, but he's not just another thought leader: he's someone that actually understands what it takes to build this stuff. As CEO of Glitch, a platform for building (and 'remixing') cool apps, he's having an impact on the way developers work and collaborate. Six years ago, Dash wrote an essay called 'The Web We Lost'. In it, he laments how the web was becoming colonized by a handful of companies who built the key platforms on which we communicate and engage with one another online. Today, after a year of protest and controversy, Dash's argument is as salient as ever - it's one of the reasons it's vital that we listen to him.

Jessie Frazelle (@jessfraz)
Jessie Frazelle is a bit of a superstar. Which shouldn't really be that surprising - she's someone that seems to have a natural ability to pull things apart and put them back together again and have the most fun imaginable while doing it. Formerly part of the core Docker team, Frazelle now works at GitHub, where her knowledge and expertise are helping to develop the next Microsoft-tinged chapter in GitHub's history. I was lucky enough to see Jessie speak at ChaosConf in September - check out her talk: https://youtu.be/1hhVS4pdrrk

Rachel Coldicutt (@rachelcoldicutt)
Rachel Coldicutt is the CEO of Doteveryone, a think tank based in the U.K. that champions responsible tech. If you're interested in how technology interacts with other aspects of society and culture, as well as how it is impacting and being impacted by policymakers, Coldicutt is a vital person to follow.

Kelsey Hightower (@kelseyhightower)
Kelsey Hightower is another superstar in the tech world - when he talks, you need to listen. Hightower currently works at Google Cloud, but he spends a lot of time at conferences evangelizing for more effective cloud native development. https://twitter.com/mattrickard/status/1073285888191258624 If you're interested in anything infrastructure or cloud related, you need to follow Kelsey Hightower.

Who did I miss?
That's just a list of a few people in tech I think you should follow in 2019 - but who did I miss? Which accounts are essential? What podcasts and newsletters should we subscribe to?


Apollo 11 source code: A small step for a woman, and a huge leap for 'software engineering'

Sugandha Lahoti
19 Jul 2018
5 min read
Yesterday, Reddit saw an explosion of discussion around the original Apollo 11 Guidance Computer (AGC) source code. The code in its entirety was uploaded to GitHub two years ago, thanks to former NASA intern Chris Garry, and judging by the timestamps on the files in the repo, it seems to have undergone significant updates again this week. This is a project that will always hold a special place for software professionals around the world. This is the project that made 'software engineering' a real discipline.

What was the AGC and why did it matter for Apollo 11?
The AGC was a digital computer produced for the Apollo program, installed on board the Apollo 11 Command Module (CM) and Lunar Module (LM). The AGC code is also referred to as 'COLOSSUS 2A' and was written in AGC assembly language and stored on rope memory. On any given Apollo mission, there were two AGCs, one for the CM and one for the LM. The two AGCs were identical and interchangeable. However, their software differed, as the LM and the CM performed different tasks pertaining to the spacecraft. The CM launched the three astronauts to the moon, and back again. The LM handled the landing of two of the astronauts on the moon while the third astronaut remained in the CM, in orbit around the moon.

The woman who coined the term 'software engineering'
The AGC code was brought to life by Margaret Hamilton, director of software engineering for the project. In the male-dominated world of tech and engineering of that time, Margaret was an exception. She led a team credited with developing the software for Apollo and Skylab, keeping her head high even through backlash. "People used to say to me, 'How can you leave your daughter? How can you do this?'" She went on to become the founder and CEO of Hamilton Technologies, Inc. and was awarded the Presidential Medal of Freedom in 2016. Hamilton is considered one of the pioneers of software engineering, credited with coining the term "software engineering" itself. She first started using the term during the early Apollo missions, wanting to give software the same legitimacy as other disciplines. At the time it was not taken seriously, but software engineering has since become an IEEE standard.

What can we learn from the AGC code developers?
Understandably, the AGC's specifications and processing power are very modest compared to the technology of today. Some still wittily call it a calculator instead of a computer. Others say that the CPU in a microwave oven is probably more powerful than an AGC. Despite being a very basic technology in terms of processing power and speed, the Apollo 11 spacecraft was able to complete the first ever manned mission to the moon and back. This is not just a huge testament to the original programming team's ingenuity and resourcefulness but also to their grit and meticulousness. One would think such a bunch produced serious (boring) code with flawless execution. Read between the (code) lines and you see a team that just enjoyed every moment of writing the code, with quirky naming conventions and humorous notes inside the comments.

Back to the present
Ever since the code was uploaded to GitHub two years ago, coders and software programmers all over the world have been dissecting it, particularly interested in the quirky English descriptions in the code's comments. People on Reddit are calling the code files real programming that doesn't rely on APIs to do the heavy lifting.
People are also loving the naming convention of the source code files and their programs, which were 1960s-inspired light-hearted jokes. For example, there is BURN_BABY_BURN--MASTER IGNITION ROUTINE for the master ignition routine, and PINBALL_GAME_BUTTONS_AND_LIGHTS.agc for keyboard and display code. Even the programs are quirky. The LUNAR_LANDING_GUIDANCE_EQUATIONS.s file ended up keeping two supposedly temporary lines of code permanently. You can read more such interesting Reddit comments. However, a point worth noting is that Margaret and, by extension, women in tech are conspicuously missing from this rich discussion. We can start seeing real change only when discussion forums start including various facets of the tool or tech under discussion. The people behind the tech are an important facet, and more so when they are in the minority. You can also read The Apollo Guidance Computer: Architecture and Operation for the inside scoop on how the AGC functioned and the kinds of design decisions and software choices the programmers had to make based on the features and limitations of the AGC, among other insights. The GitHub repo for the original Apollo 11 source code also contains material for further reading.

Read next:
Is space the final frontier for AI? NASA to reveal Kepler's latest AI backed discovery
NASA's Kepler discovers a new exoplanet using Google's Machine Learning
Meet CIMON, the first AI robot to join the astronauts aboard ISS


Putting the Function in Functional Programming

Packt
22 Sep 2015
27 min read
In this article by Richard Reese, the author of the book Learning Java Functional Programming, we will cover lambda expressions in more depth. We will explain how they satisfy the mathematical definition of a function and how we can use them in supporting Java applications. In this article, you will cover several topics, including:
Lambda expression syntax and type inference
High-order, pure, and first-class functions
Referential transparency
Closure and currying
(For more resources related to this topic, see here.)
Our discussions cover high-order functions, first-class functions, and pure functions. Also examined are the concepts of referential transparency, closure, and currying. Examples of nonfunctional approaches are followed by their functional equivalent where practical.

Lambda expressions usage
A lambda expression can be used in many different situations, including:
Assigned to a variable
Passed as a parameter
Returned from a function or method
We will demonstrate how each of these is accomplished and then elaborate on the use of functional interfaces. Consider the forEach method supported by several classes and interfaces, including the List interface. In the following example, a List is created and the forEach method is executed against it. The forEach method expects an object that implements the Consumer interface. This will display the three cartoon character names:
List<String> list = Arrays.asList("Huey", "Duey", "Luey");
list.forEach(/* Implementation of Consumer Interface*/);
More specifically, the forEach method expects an object that implements the accept method, the interface's single abstract method. This method's signature is as follows:
void accept(T t)
The interface also has a default method, andThen, which is passed and returns an instance of the Consumer interface. We can use any of three different approaches for implementing the functionality of the accept method:
Use an instance of a class that implements the Consumer interface
Use an anonymous inner class
Use a lambda expression
We will demonstrate each approach so that it will be clear how each technique works and why lambda expressions will often result in a better solution. We will start with the declaration of a class that implements the Consumer interface as shown next:
public class ConsumerImpl<T> implements Consumer<T> {
    @Override
    public void accept(T t) {
        System.out.println(t);
    }
}
We can then use it as the argument of the forEach method:
list.forEach(new ConsumerImpl<>());
Using an explicit class allows us to reuse the class or its objects whenever an instance is needed. The second approach uses an anonymous inner class as shown here:
list.forEach(new Consumer<String>() {
    @Override
    public void accept(String t) {
        System.out.println(t);
    }
});
This was a fairly common approach used prior to Java 8. It avoids having to explicitly declare and instantiate a class that implements the Consumer interface. A simple statement that uses a lambda expression is shown next:
list.forEach(t -> System.out.println(t));
The lambda expression accepts a single argument and returns void. This matches the signature of the Consumer interface's accept method. Java 8 is able to automatically perform this matching process. This latter technique obviously uses less code, making it more succinct than the other solutions.
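For readers who want to run the comparison end to end, here is a minimal, self-contained sketch (assembled by us from the snippets above, not taken verbatim from the book) that puts the three approaches into a single runnable class; the names mirror those used in the text:
import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

public class ForEachDemo {
    // Explicit class implementing Consumer, as declared in the text
    static class ConsumerImpl<T> implements Consumer<T> {
        @Override
        public void accept(T t) {
            System.out.println(t);
        }
    }

    public static void main(String[] args) {
        List<String> list = Arrays.asList("Huey", "Duey", "Luey");

        // Approach 1: an instance of an explicit class
        list.forEach(new ConsumerImpl<>());

        // Approach 2: an anonymous inner class
        list.forEach(new Consumer<String>() {
            @Override
            public void accept(String t) {
                System.out.println(t);
            }
        });

        // Approach 3: a lambda expression
        list.forEach(t -> System.out.println(t));
    }
}
Each approach prints the same three names, so the output is the list repeated three times.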
If we desire to reuse this lambda expression elsewhere, we could have assigned it to a variable first and then used it in the forEach method as shown here:
Consumer<String> consumer = t -> System.out.println(t);
list.forEach(consumer);
Anywhere a functional interface is expected, we can use a lambda expression. Thus, the availability of a large number of functional interfaces will enable the frequent use of lambda expressions and programs that exhibit a functional style of programming. While developers can define their own functional interfaces, which we will do shortly, Java 8 has added a large number of functional interfaces designed to support common operations. Most of these are found in the java.util.function package. We will use several of these throughout the book and will elaborate on their purpose, definition, and use as we encounter them.

Functional programming concepts in Java
In this section, we will examine the underlying concept of functions and how they are implemented in Java 8. This includes high-order, first-class, and pure functions. A first-class function is a function that can be used where other first-class entities can be used. These types of entities include primitive data types and objects. Typically, they can be passed to and returned from functions and methods. In addition, they can be assigned to variables. A high-order function either takes another function as an argument or returns a function as the return value. Languages that support this type of function are more flexible. They allow a more natural flow and composition of operations. Pure functions have no side effects. The function does not modify nonlocal variables and does not perform I/O.

High-order functions
We will demonstrate the creation and use of a high-order function using an imperative and a functional approach to convert the letters of a string to lowercase. The next code sequence reuses the list variable, developed in the previous section, to illustrate the imperative approach. The for-each statement iterates through each element of the list using the String class' toLowerCase method to perform the conversion:
for(String element : list) {
    System.out.println(element.toLowerCase());
}
The output will be each name in the list displayed in lowercase, each on a separate line. To demonstrate the use of a high-order function, we will create a function called processString, which is passed a function as its first parameter and then applies this function to its second parameter as shown next:
public String processString(Function<String,String> operation, String target) {
    return operation.apply(target);
}
The function passed will be an instance of the java.util.function package's Function interface. This interface possesses an apply method that is passed one data type and returns a potentially different data type. With our definition, it is passed a String and returns a String. In the next code sequence, a lambda expression using the toLowerCase method is passed to the processString method. As you may remember, the forEach method accepts a lambda expression, which matches the Consumer interface's accept method. The lambda expression passed to the processString method matches the Function interface's apply method. The output is the same as produced by the equivalent imperative implementation.
list.forEach(s -> System.out.println(processString(t -> t.toLowerCase(), s)));
We could have also used a method reference as shown next:
list.forEach(s -> System.out.println(processString(String::toLowerCase, s)));
The use of the high-order function may initially seem to be a bit convoluted. We needed to create the processString function and then pass either a lambda expression or a method reference to perform the conversion. While this is true, the benefit of this approach is flexibility. If we needed to perform a different string operation other than converting the target string to lowercase, we would need to essentially duplicate the imperative code and replace toLowerCase with a new method such as toUpperCase. However, with the functional approach, all we need to do is replace the method used as shown next:
list.forEach(s -> System.out.println(processString(t -> t.toUpperCase(), s)));
This is simpler and more flexible. A lambda expression can also be passed to another lambda expression. Let's consider another example where high-order functions can be useful. Suppose we need to convert a list of one type into a list of a different type. We might have a list of strings that we wish to convert to their integer equivalents. We might want to perform a simple conversion, or perhaps we might want to double the integer value. We will use the following lists:
List<String> numberString = Arrays.asList("12", "34", "82");
List<Integer> numbers = new ArrayList<>();
List<Integer> doubleNumbers = new ArrayList<>();
The following code sequence uses an iterative approach to convert the string list into an integer list:
for (String num : numberString) {
    numbers.add(Integer.parseInt(num));
}
The next sequence uses a stream to perform the same conversion:
numbers.clear();
numberString
    .stream()
    .forEach(s -> numbers.add(Integer.parseInt(s)));
There is not a lot of difference between these two approaches, at least from a number-of-lines perspective. However, the iterative solution will only work for the two lists: numberString and numbers. To avoid this, we could have written the conversion routine as a method. We could also use lambda expressions to perform the same conversion. The following two lambda expressions will convert a string list to an integer list, and a string list to an integer list where each integer has been doubled:
Function<List<String>, List<Integer>> singleFunction = s -> {
    s.stream()
        .forEach(t -> numbers.add(Integer.parseInt(t)));
    return numbers;
};
Function<List<String>, List<Integer>> doubleFunction = s -> {
    s.stream()
        .forEach(t -> doubleNumbers.add(Integer.parseInt(t) * 2));
    return doubleNumbers;
};
We can apply these two functions as shown here:
numbers.clear();
System.out.println(singleFunction.apply(numberString));
System.out.println(doubleFunction.apply(numberString));
The output follows:
[12, 34, 82]
[24, 68, 164]
However, the real power comes from passing these functions to other functions. In the next code sequence, a stream is created from a list that contains a single element: the numberString list. The map method expects a Function interface instance. Here, we use the doubleFunction function. The list of strings is converted to integers and then doubled. The resulting list is displayed:
Arrays.asList(numberString).stream()
    .map(doubleFunction)
    .forEach(s -> System.out.println(s));
The output follows:
[24, 68, 164]
We passed a function to a method. We could easily pass other functions to achieve different outputs.
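As a small illustration of that flexibility, here is a hedged sketch (our own helper, not from the book) of a generic conversion method whose logic is supplied entirely by the caller, so the same method serves both the plain and the doubled conversion without relying on shared lists:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class ConvertListDemo {
    // The mapper argument decides how each element is converted
    static <T, R> List<R> convert(List<T> source, Function<T, R> mapper) {
        List<R> result = new ArrayList<>();
        source.forEach(item -> result.add(mapper.apply(item)));
        return result;
    }

    public static void main(String[] args) {
        List<String> numberString = Arrays.asList("12", "34", "82");
        System.out.println(convert(numberString, Integer::parseInt));            // [12, 34, 82]
        System.out.println(convert(numberString, s -> Integer.parseInt(s) * 2)); // [24, 68, 164]
    }
}
Because the result list is created inside the helper, the caller does not need to declare and clear the numbers and doubleNumbers lists used above.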
Returning a function
When a value is returned from a function or method, it is intended to be used elsewhere in the application. Sometimes, the return value is used to determine how subsequent computations should proceed. To illustrate how returning a function can be useful, let's consider a problem where we need to calculate the pay of an employee based on the number of hours worked, the pay rate, and the employee type. To facilitate the example, start with an enumeration representing the employee type:
enum EmployeeType {Hourly, Salary, Sales};
The next method illustrates one way of calculating the pay using an imperative approach. A more complex set of computations could be used, but these will suffice for our needs:
public float calculatePay(int hoursWorked, float payRate, EmployeeType type) {
    switch (type) {
        case Hourly:
            return hoursWorked * payRate;
        case Salary:
            return 40 * payRate;
        case Sales:
            return 500.0f + 0.15f * payRate;
        default:
            return 0.0f;
    }
}
If we assume a 7 day workweek, then the next code sequence shows an imperative way of calculating the total number of hours worked:
int hoursWorked[] = {8, 12, 8, 6, 6, 5, 6, 0};
int totalHoursWorked = 0;
for (int hour : hoursWorked) {
    totalHoursWorked += hour;
}
Alternatively, we could have used a stream to perform the same operation as shown next. The Arrays class's stream method accepts an array of integers and converts it into a Stream object. The sum method is applied fluently, returning the number of hours worked:
totalHoursWorked = Arrays.stream(hoursWorked).sum();
The latter approach is simpler and easier to read. To calculate and display the pay, we can use the following statement which, when executed, will return 803.25:
System.out.println(calculatePay(totalHoursWorked, 15.75f, EmployeeType.Hourly));
The functional approach is shown next. A calculatePayFunction method is created that is passed the employee type and returns a lambda expression. This will compute the pay based on the number of hours worked and the pay rate. This lambda expression is based on the BiFunction interface. It has an apply method that takes two arguments and returns a value. Each of the parameters and the return type can be of different data types. It is similar to the Function interface's apply method, except that it is passed two arguments instead of one. The calculatePayFunction method is shown next. It is similar to the imperative calculatePay method, but returns a lambda expression:
public BiFunction<Integer, Float, Float> calculatePayFunction(EmployeeType type) {
    switch (type) {
        case Hourly:
            return (hours, payRate) -> hours * payRate;
        case Salary:
            return (hours, payRate) -> 40 * payRate;
        case Sales:
            return (hours, payRate) -> 500f + 0.15f * payRate;
        default:
            return null;
    }
}
It can be invoked as shown next:
System.out.println(calculatePayFunction(EmployeeType.Hourly).apply(totalHoursWorked, 15.75f));
When executed, it will produce the same output as the imperative solution. The advantage of this approach is that the lambda expression can be passed around and executed in different contexts.
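To make that "passed around" point concrete, here is a hedged sketch (our own variation, not from the book) in which the same pay rules live in a map keyed by employee type rather than a switch statement, so a rule can be looked up, replaced, or handed to other code at runtime:
import java.util.EnumMap;
import java.util.Map;
import java.util.function.BiFunction;

public class PayFunctions {
    enum EmployeeType { Hourly, Salary, Sales }

    // The same rules as calculatePayFunction, stored as data instead of a switch
    static final Map<EmployeeType, BiFunction<Integer, Float, Float>> PAY_RULES =
            new EnumMap<>(EmployeeType.class);

    static {
        PAY_RULES.put(EmployeeType.Hourly, (hours, rate) -> hours * rate);
        PAY_RULES.put(EmployeeType.Salary, (hours, rate) -> 40 * rate);
        PAY_RULES.put(EmployeeType.Sales,  (hours, rate) -> 500.0f + 0.15f * rate);
    }

    public static void main(String[] args) {
        // Looking up a rule plays the same role as calling calculatePayFunction(EmployeeType.Hourly)
        System.out.println(PAY_RULES.get(EmployeeType.Hourly).apply(51, 15.75f)); // 803.25
    }
}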
First-class functions
To demonstrate first-class functions, we use lambda expressions. Assigning a lambda expression, or method reference, to a variable can be done in Java 8. Simply declare a variable of the appropriate function type and use the assignment operator to do the assignment. In the following statements, a reference variable to the previously defined BiFunction-based lambda expression is declared along with the number of hours worked:
BiFunction<Integer, Float, Float> calculateFunction;
int hoursWorked = 51;
We can easily assign a lambda expression to this variable. Here, we use the lambda expression returned from the calculatePayFunction method:
calculateFunction = calculatePayFunction(EmployeeType.Hourly);
The reference variable can then be used as shown in this statement:
System.out.println(calculateFunction.apply(hoursWorked, 15.75f));
It produces the same output as before. One shortcoming of the way an hourly employee's pay is computed is that overtime pay is not handled. We could add this functionality to the calculatePayFunction method. However, to further illustrate the use of reference variables, we will assign one of two lambda expressions to the calculateFunction variable based on the number of hours worked as shown here:
if (hoursWorked <= 40) {
    calculateFunction = (hours, payRate) -> 40 * payRate;
} else {
    calculateFunction = (hours, payRate) -> hours*payRate + (hours-40)*1.5f*payRate;
}
When the expression is evaluated as shown next, it returns a value of 1063.125:
System.out.println(calculateFunction.apply(hoursWorked, 15.75f));
Let's rework the example developed in the High-order functions section, where we used lambda expressions to display the lowercase values of a list of strings. Part of the code has been duplicated here for your convenience:
list.forEach(s -> System.out.println(processString(t -> t.toLowerCase(), s)));
Instead, we will use variables to hold the lambda expressions for the Function and Consumer interfaces as shown here:
Function<String,String> toLowerFunction;
toLowerFunction = t -> t.toLowerCase();
Consumer<String> consumer;
consumer = s -> System.out.println(toLowerFunction.apply(s));
The declaration and initialization could have been done with one statement for each variable. To display all of the names, we simply use the consumer variable as the argument of the forEach method:
list.forEach(consumer);
This will display the names as before. However, this is much easier to read and follow. The ability to use lambda expressions as first-class entities makes this possible. We can also assign method references to variables. Here, the initialization of the toLowerFunction variable uses a method reference instead:
toLowerFunction = String::toLowerCase;
The output of the code will not change.

The pure function
The pure function is a function that has no side effects. By side effects, we mean that the function does not modify nonlocal variables and does not perform I/O. A method that squares a number is an example of a pure method with no side effects, as shown here:
public class SimpleMath {
    public static int square(int x) {
        return x * x;
    }
}
Its use is shown here and will display the result, 25:
System.out.println(SimpleMath.square(5));
An equivalent lambda expression is shown here:
Function<Integer,Integer> squareFunction = x -> x*x;
System.out.println(squareFunction.apply(5));
The advantages of pure functions include the following:
They can be invoked repeatedly, producing the same results
There are no dependencies between functions that impact the order in which they can be executed
They support lazy evaluation
They support referential transparency
We will examine each of these advantages in more depth.

Support repeated execution
Using the same arguments will produce the same results. The previous square operation is an example of this.
Since the operation does not depend on other external values, re-executing the code with the same arguments will return the same results. This supports the optimization technique called memoization. This is the process of caching the results of an expensive execution sequence and retrieving them when they are used again. An imperative technique for implementing this approach involves using a hash map to store values that have already been computed and retrieving them when they are used again. Let's demonstrate this using the square function. The technique should be used for those functions that are compute intensive. However, using the square function will allow us to focus on the technique. Declare a cache to hold the previously computed values as shown here:
private final Map<Integer, Integer> memoizationCache = new HashMap<>();
We need to declare two methods. The first method, called doComputeExpensiveSquare, does the actual computation as shown here. A display statement is included only to verify the correct operation of the technique. Otherwise, it is not needed. The method should only be called once for each unique value passed to it.
private Integer doComputeExpensiveSquare(Integer input) {
    System.out.println("Computing square");
    return input * input;
}
A second method is used to detect when a value is used a subsequent time and return the previously computed value instead of calling the square method. This is shown next. The containsKey method checks to see if the input value has already been used. If it hasn't, then the doComputeExpensiveSquare method is called. Otherwise, the cached value is returned.
public Integer computeExpensiveSquare(Integer input) {
    if (!memoizationCache.containsKey(input)) {
        memoizationCache.put(input, doComputeExpensiveSquare(input));
    }
    return memoizationCache.get(input);
}
The use of the technique is demonstrated with the next code sequence:
System.out.println(computeExpensiveSquare(4));
System.out.println(computeExpensiveSquare(4));
The output follows, which demonstrates that the square method was only called once:
Computing square
16
16
The problem with this approach is the declaration of a hash map. This object may be inadvertently used by other elements of the program and will require the explicit declaration of new hash maps for each memoization usage. In addition, it does not offer flexibility in handling multiple memoized functions. A better approach is available in Java 8. This new approach wraps the hash map in a class and allows easier creation and use of memoization. Let's examine a memoization class as adapted from http://java.dzone.com/articles/java-8-automatic-memoization. It is called Memoizer. It uses ConcurrentHashMap to cache values and supports concurrent access from multiple threads. Two methods are defined. The doMemoize method returns a lambda expression that does all of the work. The memoize method creates an instance of the Memoizer class and passes the lambda expression implementing the expensive operation to the doMemoize method. The doMemoize method uses the ConcurrentHashMap class's computeIfAbsent method to determine if the computation has already been performed. If the value has not been computed, it executes the Function interface's apply method against the function argument:
public class Memoizer<T, U> {
    private final Map<T, U> memoizationCache = new ConcurrentHashMap<>();

    private Function<T, U> doMemoize(final Function<T, U> function) {
        return input -> memoizationCache.computeIfAbsent(input, function::apply);
    }

    public static <T, U> Function<T, U> memoize(final Function<T, U> function) {
        return new Memoizer<T, U>().doMemoize(function);
    }
}
A lambda expression is created for the square operation:
Function<Integer, Integer> squareFunction = x -> {
    System.out.println("In function");
    return x * x;
};
The memoizationFunction variable will hold the lambda expression that is subsequently used to invoke the square operations:
Function<Integer, Integer> memoizationFunction = Memoizer.memoize(squareFunction);
System.out.println(memoizationFunction.apply(2));
System.out.println(memoizationFunction.apply(2));
System.out.println(memoizationFunction.apply(2));
The output of this sequence follows, where the square operation is performed only once:
In function
4
4
4
We can easily use the Memoizer class for a different function as shown here:
Function<Double, Double> memoizationFunction2 = Memoizer.memoize(x -> x * x);
System.out.println(memoizationFunction2.apply(4.0));
This will square the number as expected. Functions that are recursive present additional problems.
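The main problem is that a recursive function's internal calls bypass the cache unless they, too, go through the memoized reference. As a hedged illustration (our own sketch, not from the book), here is a Fibonacci function whose recursive calls are routed through the cached function variable:
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class RecursiveMemoDemo {
    private static final Map<Integer, Long> cache = new HashMap<>();
    private static Function<Integer, Long> fib;   // assigned below so the lambda can call itself

    public static void main(String[] args) {
        fib = n -> {
            if (n < 2) {
                return (long) n;
            }
            Long cached = cache.get(n);            // reuse a previously computed value
            if (cached != null) {
                return cached;
            }
            long value = fib.apply(n - 1) + fib.apply(n - 2);
            cache.put(n, value);
            return value;
        };
        System.out.println(fib.apply(50));          // 12586269025
    }
}
Without the cache, the same call would make an exponential number of recursive calls; with it, each value from 0 to 50 is computed exactly once.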
Eliminating dependencies between functions
When dependencies between functions are eliminated, more flexibility in the order of execution is possible. Consider these Function and BiFunction declarations, which define simple expressions for computing hourly, salaried, and sales-type pay, respectively:
BiFunction<Integer, Double, Double> computeHourly = (hours, rate) -> hours * rate;
Function<Double, Double> computeSalary = rate -> rate * 40.0;
BiFunction<Double, Double, Double> computeSales = (rate, commission) -> rate * 40.0 + commission;
These functions can be executed, and their results are assigned to variables as shown here:
double hourlyPay = computeHourly.apply(35, 12.75);
double salaryPay = computeSalary.apply(25.35);
double salesPay = computeSales.apply(8.75, 2500.0);
These are pure functions, as they do not use external values to perform their computations. In the following code sequence, all three pays are totaled and displayed:
System.out.println(computeHourly.apply(35, 12.75)
    + computeSalary.apply(25.35)
    + computeSales.apply(8.75, 2500.0));
We can easily reorder their execution sequence or even execute them concurrently, and the results will be the same. There are no dependencies between the functions that restrict them to a specific execution ordering.

Supporting lazy evaluation
Continuing with this example, let's add an additional sequence, which computes the total pay based on the type of employee. The variable, hourly, is set to true if we want to know the total of the hourly employee pay type. It will be set to false if we are interested in salary and sales-type employees:
double total = 0.0;
boolean hourly = ...;
if (hourly) {
    total = hourlyPay;
} else {
    total = salaryPay + salesPay;
}
System.out.println(total);
When this code sequence is executed with an hourly value of false, there is no need to execute the computeHourly function since it is not used. The runtime system could conceivably choose not to execute any of the lambda expressions until it knows which one is actually used.
While all three functions are actually executed in this example, it illustrates the potential for lazy evaluation. Functions are not executed until needed. Referential transparency Referential transparency is the idea that a given expression is made up of subexpressions. The value of the subexpression is important. We are not concerned about how it is written or other details. We can replace the subexpression with its value and be perfectly happy. With regards to pure functions, they are said to be referentially transparent since they have same effect. In the next declaration, we declare a pure function called pureFunction: Function<Double,Double> pureFunction = t -> 3*t; It supports referential transparency. Consider if we declare a variable as shown here: int num = 5; Later, in a method we can assign a different value to the variable: num = 6; If we define a lambda expression that uses this variable, the function is no longer pure: Function<Double,Double> impureFunction = t -> 3*t+num; The function no longer supports referential transparency. Closure in Java The use of external variables in a lambda expression raises several interesting questions. One of these involves the concept of closures. A closure is a function that uses the context within which it was defined. By context, we mean the variables within its scope. This sometimes is referred to as variable capture. We will use a class called ClosureExample to illustrate closures in Java. The class possesses a getStringOperation method that returns a Function lambda expression. This expression takes a string argument and returns an augmented version of it. The argument is converted to lowercase, and then its length is appended to it twice. In the process, both an instance variable and a local variable are used. In the implementation that follows, the instance variable and two local variables are used. One local variable is a member of the getStringOperation method and the second one is a member of the lambda expression. They are used to hold the length of the target string and for a separator string: public class ClosureExample { int instanceLength; public Function<String,String> getStringOperation() { final String seperator = ":"; return target -> { int localLength = target.length(); instanceLength = target.length(); return target.toLowerCase() + seperator + instanceLength + seperator + localLength; }; } } The lambda expression is created and used as shown here: ClosureExample ce = new ClosureExample(); final Function<String,String> function = ce.getStringOperation(); System.out.println(function.apply("Closure")); Its output follows: closure:7:7 Variables used by the lambda expression are restricted in their use. Local variables or parameters cannot be redefined or modified. These variables need to be effectively final. That is, they must be declared as final or not be modified. If the local variable and separator, had not been declared as final, the program would still be executed properly. 
However, if we tried to modify the variable later, then the following syntax error would be generated, indicating that such a variable is not permitted within a lambda expression:
local variables referenced from a lambda expression must be final or effectively final
If we add the following statements to the previous example and remove the final keyword, we will get the same syntax error message:
function = String::toLowerCase;
Consumer<String> consumer = s -> System.out.println(function.apply(s));
This is because the function variable is used in the Consumer lambda expression. It also needs to be effectively final, but we tried to assign a second value to it, the method reference for the toLowerCase method. Closure refers to functions that enclose variables external to the function. This permits the function to be passed around and used in different contexts.

Currying
Some functions can have multiple arguments. It is possible to evaluate these arguments one by one. This process is called currying and normally involves creating new functions, which have one fewer argument than the previous one. The advantage of this process is the ability to subdivide the execution sequence and work with intermediate results. This means that it can be used in a more flexible manner. Consider a simple function such as:
f(x,y) = x + y
The evaluation of f(2,3) will produce a 5. We could use the following, where the 2 is "hardcoded":
f(2,y) = 2 + y
If we define:
g(y) = 2 + y
Then the following are equivalent:
f(2,y) = g(y) = 2 + y
Substituting 3 for y we get:
f(2,3) = g(3) = 2 + 3 = 5
This is the process of currying. An intermediate function, g(y), was introduced which we can pass around. Let's see how something similar to this can be done in Java 8. Start with a BiFunction designed for the concatenation of strings. A BiFunction takes two parameters and returns a single value:
BiFunction<String, String, String> biFunctionConcat = (a, b) -> a + b;
The use of the function is demonstrated with the following statement:
System.out.println(biFunctionConcat.apply("Cat", "Dog"));
The output will be the CatDog string. Next, let's define a reference variable called curryConcat. This variable is a Function interface variable. This interface is based on two data types. The first one is String and represents the value passed to the Function interface's apply method. The second data type represents the apply method's return type. This return type is defined as a Function instance that is passed a string and returns a string. In other words, the curryConcat function is passed a string and returns an instance of a function that is passed and returns a string.
Function<String, Function<String, String>> curryConcat;
We then assign an appropriate lambda expression to the variable:
curryConcat = (a) -> (b) -> biFunctionConcat.apply(a, b);
This may seem to be a bit confusing initially, so let's take it one piece at a time. First of all, the lambda expression needs to return a function. The lambda expression assigned to curryConcat follows, where the ellipses represent the body of the function. The parameter, a, is passed to the body:
(a) -> ...;
The actual body follows:
(b) -> biFunctionConcat.apply(a, b);
This is the lambda expression or function that is returned. This function takes the single parameter, b, while a is captured from the enclosing lambda expression. When this function is created, the a parameter will be known and specified. This function can be evaluated later, when the value for b is specified. The function returned is an instance of the Function interface, which is passed a single string and returns a string. To illustrate this, define an intermediate variable to hold this returned function:
Function<String,String> intermediateFunction;
We can assign the result of executing the curryConcat lambda expression using its apply method as shown here, where a value of Cat is specified for the a parameter:
intermediateFunction = curryConcat.apply("Cat");
The next two statements will display the returned function:
System.out.println(intermediateFunction);
System.out.println(curryConcat.apply("Cat"));
The output will look something like the following:
packt.Chapter2$$Lambda$3/798154996@5305068a
packt.Chapter2$$Lambda$3/798154996@1f32e575
Note that these are the values representing these functions as returned by the implied toString method. They are both different, indicating that two different function instances were returned and can be passed around. Now that we have confirmed a function has been returned, we can supply a value for the b parameter as shown here:
System.out.println(intermediateFunction.apply("Dog"));
The output will be CatDog. This illustrates how we can split a two-parameter function into two distinct functions, which can be evaluated when desired. They can be used together as shown with these statements:
System.out.println(curryConcat.apply("Cat").apply("Dog"));
System.out.println(curryConcat.apply("Flying ").apply("Monkeys"));
The output of these statements is as follows:
CatDog
Flying Monkeys
We can define a similar operation for doubles as shown here:
Function<Double, Function<Double, Double>> curryMultiply = (a) -> (b) -> a * b;
System.out.println(curryMultiply.apply(3.0).apply(4.0));
This will display 12.0 as the returned value. Currying is a valuable approach, useful when the arguments of a function need to be evaluated at different times.
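If you find yourself currying more than one BiFunction, the pattern can be captured once in a generic helper. The following is a hedged sketch (our own utility, not from the book) of a method that curries any BiFunction:
import java.util.function.BiFunction;
import java.util.function.Function;

public class CurryUtil {
    // Turns any two-argument function into its curried, one-argument-at-a-time form
    static <A, B, R> Function<A, Function<B, R>> curry(BiFunction<A, B, R> f) {
        return a -> b -> f.apply(a, b);
    }

    public static void main(String[] args) {
        BiFunction<String, String, String> concat = (a, b) -> a + b;
        Function<String, Function<String, String>> curriedConcat = curry(concat);
        System.out.println(curriedConcat.apply("Cat").apply("Dog"));   // CatDog

        BiFunction<Double, Double, Double> multiply = (a, b) -> a * b;
        System.out.println(curry(multiply).apply(3.0).apply(4.0));     // 12.0
    }
}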
Summary
In this article, we investigated the use of lambda expressions and how they support the functional style of programming in Java 8. When possible, we used examples to contrast the use of classes and methods against the use of functions. This frequently led to simpler and more maintainable functional implementations. We illustrated how lambda expressions support the functional concepts of high-order, first-class, and pure functions. Examples were used to help clarify the concept of referential transparency. The concepts of closure and currying are found in most functional programming languages, and we provided examples of how they are supported in Java 8. Lambda expressions have a specific syntax, which we examined in more detail, and we illustrated several variations of the functional interfaces that can be used to support these expressions. Lambda expressions are based on functional interfaces using type inference. It is important to understand how to create functional interfaces and to know what standard functional interfaces are available in Java 8.

Resources for Article:
Further resources on this subject:
An Introduction to Mastering JavaScript Promises and Its Implementation in Angular.js [article]
Finding Peace in REST [article]
Introducing JAX-RS API [article]


26 new Java 9 enhancements you will love

Aarthi Kumaraswamy
09 Apr 2018
11 min read
Java 9 represents a major release and consists of a large number of internal changes to the Java platform. Collectively, these internal changes represent a tremendous set of new possibilities for Java developers, some stemming from developer requests, others from Oracle-inspired enhancements. In this post, we will review 26 of the most important changes. Each change is related to a JDK Enhancement Proposal (JEP). JEPs are indexed and housed at openjdk.java.net/jeps/0. You can visit this site for additional information on each JEP.
Note: The JEP program is part of Oracle's support for open source, open innovation, and open standards. While other open source Java projects can be found, OpenJDK is the only one supported by Oracle.
These changes have several impressive implications, including:
Heap space efficiencies
Memory allocation
Compilation process improvements
Type testing
Annotations
Automated runtime compiler tests
Improved garbage collection

26 Java 9 enhancements you should know

1. Improved Contended Locking [JEP 143]
The general goal of JEP 143 was to increase the overall performance of how the JVM manages contention over locked Java object monitors. The improvements to contended locking were all internal to the JVM and do not require any developer actions to benefit from them. The overall improvement goals were related to faster operations. These include faster monitor enter, faster monitor exit, and faster notifications.

2. Segmented code cache [JEP 197]
The segmented code cache JEP (197) upgrade was completed and results in faster, more efficient execution time. At the core of this change was the segmentation of the code cache into three distinct segments--non-method, profiled, and non-profiled code.

3. Smart Java compilation, phase two [JEP 199]
JDK Enhancement Proposal 199 is aimed at improving the code compilation process. All Java developers will be familiar with the javac tool for compiling source code to bytecode, which is used by the JVM to run Java programs. Smart Java Compilation, also referred to as Smart Javac and sjavac, adds a smart wrapper around the javac process. Perhaps the core improvement sjavac adds is that only the necessary code is recompiled.

4. Resolving Lint and Doclint warnings [JEP 212]
Both Lint and Doclint report errors and warnings during the compile process. Resolution of these warnings was the focus of JEP 212. When using core libraries, there should not be any warnings. This mindset led to JEP 212, which has been resolved and implemented in Java 9.

5. Tiered attribution for javac [JEP 215]
JEP 215 represents an impressive undertaking to streamline javac's type-checking schema. In Java 8, type checking of poly expressions is handled by a speculative attribution tool. The goal with JEP 215 was to change the type-checking schema to create faster results. The new approach, released with Java 9, uses a tiered attribution tool. This tool implements a tiered approach for type checking argument expressions for all method calls. Provisions are also made for method overriding.

6. Annotations pipeline 2.0 [JEP 217]
Java 8 related changes impacted Java annotations but did not usher in a change to how javac processed them. There were some hardcoded solutions that allowed javac to handle the new annotations, but they were not efficient.
Moreover, this type of coding (hardcoding workarounds) is difficult to maintain. So, JEP 217 focused on refactoring the javac annotation pipeline. This refactoring was all internal to javac, so it should not be evident to developers.

7. New version-string scheme [JEP 223]
Prior to Java 9, the release numbers did not follow industry-standard versioning--semantic versioning. Oracle has embraced semantic versioning for Java 9 and beyond. For Java, a major-minor-security schema will be used for the first three elements of Java version numbers:
Major: A major release consisting of a significant new set of features
Minor: Revisions and bug fixes that are backward compatible
Security: Fixes deemed critical to improving security

8. Generating run-time compiler tests automatically [JEP 233]
The purpose of JEP 233 was to create a tool that could automate the runtime compiler tests. The tool that was created starts by generating a random set of Java source code and/or byte code. The generated code will have three key characteristics:
Be syntactically correct
Be semantically correct
Use a random seed that permits reusing the same randomly-generated code

9. Testing class-file attributes generated by Javac [JEP 235]
Prior to Java 9, there was no method of testing a class-file's attributes. Running a class and testing the code for anticipated or expected results was the most commonly used method of testing javac-generated class-files. This technique falls short of testing to validate the file's attributes. The lack of, or insufficient, capability to create tests for class-file attributes was the impetus behind JEP 235. The goal is to ensure javac creates a class-file's attributes completely and correctly.

10. Storing interned strings in CDS archives [JEP 250]
CDS archives now allocate specific space on the heap for strings. The string space is mapped using a shared-string table, hash tables, and deduplication.

11. Preparing JavaFX UI controls and CSS APIs for modularization [JEP 253]
Prior to Java 9, JavaFX controls as well as CSS functionality were only available to developers by interfacing with internal APIs. Java 9's modularization has made the internal APIs inaccessible. Therefore, JEP 253 was created to define public, instead of internal, APIs. This was a larger undertaking than it might seem. Here are a few actions that were taken as part of this JEP:
Moving JavaFX control skins from the internal to public API (javafx.scene.skin)
Ensuring API consistencies
Generation of a thorough javadoc

12. Compact strings [JEP 254]
The string data type is an important part of nearly every Java app. While JEP 254's aim was to make strings more space-efficient, it was approached with caution so that existing performance and compatibilities would not be negatively impacted. Starting with Java 9, strings are now internally represented using a byte array along with a flag field for encoding references.

13. Merging selected Xerces 2.11.0 updates into JAXP [JEP 255]
Xerces is a library used for parsing XML in Java. It was updated to 2.11.0 in late 2010, so JEP 255's aim was to update JAXP to incorporate changes in Xerces 2.11.0.

14. Updating JavaFX/Media to a newer version of GStreamer [JEP 257]
The purpose of JEP 257 was to ensure JavaFX/Media was updated to include the latest release of GStreamer for stability, performance, and security assurances.
GStreamer is a multimedia processing framework that can be used to build systems that take in media in several different formats and, after processing, export them in selected formats.

15. HarfBuzz Font-Layout Engine [JEP 258]
Prior to Java 9, a layout engine was used to handle font complexities; specifically, fonts that have rendering behaviors beyond what the common Latin fonts have. Java used International Components for Unicode (ICU) as the de facto text rendering tool. The ICU layout engine has been deprecated and, in Java 9, has been replaced with the HarfBuzz font layout engine. HarfBuzz is an OpenType text rendering engine. This type of layout engine has the characteristic of providing script-aware code to help ensure text is laid out as desired.

16. HiDPI graphics on Windows and Linux [JEP 263]
JEP 263 was focused on ensuring the crispness of on-screen components, relative to the pixel density of the display. The following terms are relevant to this JEP:
DPI-aware application: An application that is able to detect and scale images for the display's specific pixel density
DPI-unaware application: An application that makes no attempt to detect and scale images for the display's specific pixel density
HiDPI graphics: High dots-per-inch graphics
Retina display: This term was created by Apple to refer to displays with a pixel density of at least 300 pixels per inch
Prior to Java 9, automatic scaling and sizing were already implemented in Java for the Mac OS X operating system. This capability was added in Java 9 for the Windows and Linux operating systems.

17. Marlin graphics renderer [JEP 265]
JEP 265 replaced the Pisces graphics rasterizer with the Marlin graphics renderer in the Java 2D API. This API is used to draw 2D graphics and animations. The goal was to replace Pisces with a rasterizer/renderer that was much more efficient, without any quality loss. This goal was realized in Java 9. An intended collateral benefit was to include a developer-accessible API. Previously, the means of interfacing with the AWT and Java 2D was internal.

18. Unicode 8.0.0 [JEP 267]
Unicode 8.0.0 was released on June 17, 2015. JEP 267 focused on updating the relevant APIs to support Unicode 8.0.0. In order to fully comply with the new Unicode standard, several Java classes were updated. The following classes were updated for Java 9 to comply with the new Unicode standard:
java.awt.font.NumericShaper
java.lang.Character
java.lang.String
java.text.Bidi
java.text.BreakIterator
java.text.Normalizer

19. Reserved stack areas for critical sections [JEP 270]
The goal of JEP 270 was to mitigate problems stemming from stack overflows during the execution of critical sections. This mitigation took the form of reserving additional thread stack space.

20. Dynamic linking of language-defined object models [JEP 276]
Java interoperability was enhanced with JEP 276. The necessary JDK changes were made to permit runtime linkers from multiple languages to coexist in a single JVM instance. This change applies to high-level operations, as you would expect. An example of a relevant high-level operation is the reading or writing of a property with elements such as accessors and mutators. The high-level operations apply to objects of unknown types.
They can be invoked with INVOKEDYNAMIC instructions. Here is an example of calling an object's property when the object's type is unknown at compile time:

INVOKEDYNAMIC "dyn:getProp:age"

21. Additional tests for humongous objects in G1 [JEP 278]

One of the long-favored features of the Java platform is the behind-the-scenes garbage collection. JEP 278's focus was to create additional WhiteBox tests for humongous objects as a feature of the G1 garbage collector.

22. Improving test-failure troubleshooting [JEP 279]

For developers who do a lot of testing, JEP 279 is worth reading about. Additional functionality has been added in Java 9 to automatically collect information to support troubleshooting test failures as well as timeouts. Collecting readily available diagnostic information during tests stands to provide developers and engineers with greater fidelity in their logs and other output.

23. Optimizing string concatenation [JEP 280]

JEP 280 is an interesting enhancement for the Java platform. Prior to Java 9, string concatenation was translated by javac into StringBuilder::append chains. This was a sub-optimal translation methodology, often requiring StringBuilder presizing. The enhancement changed the string concatenation bytecode sequence generated by javac so that it uses INVOKEDYNAMIC calls. The purpose of the enhancement was to increase optimization and to support future optimizations without the need to change the bytecode that javac emits.

24. HotSpot C++ unit-test framework [JEP 281]

HotSpot is the name of the JVM implementation in the JDK. This Java enhancement was intended to support the development of C++ unit tests for the JVM. Here is a partial, non-prioritized list of goals for this enhancement:

Command-line testing
Create appropriate documentation
Debug compile targets
Framework elasticity
IDE support
Individual and isolated unit testing
Individualized test results
Integrate with existing infrastructure
Internal test support
Positive and negative testing
Short execution time testing
Support all JDK 9 build platforms
Test compile targets
Test exclusion
Test grouping
Testing that requires the JVM to be initialized
Tests co-located with source code
Tests for platform-dependent code
Write and execute unit testing (for classes and methods)

This enhancement is evidence of the increasing extensibility of the Java platform.

25. Enabling GTK 3 on Linux [JEP 283]

GTK+, short for the GIMP Toolkit, is a cross-platform tool used for creating Graphical User Interfaces (GUIs). The tool consists of widgets accessible through its API. JEP 283's focus was to ensure GTK 2 and GTK 3 were supported on Linux when developing Java applications with graphical components. The implementation supports Java apps that employ JavaFX, AWT, and Swing.

26. New HotSpot build system [JEP 284]

Prior to Java 9, the Java platform's build system was riddled with duplicate code, redundancies, and other inefficiencies. The build system has been reworked for Java 9 based on the build-infra framework. In this context, infra is short for infrastructure. The overarching goal for JEP 284 was to simplify the build system. Specific goals included:

Leverage the existing build system
Maintainable code
Minimize duplicate code
Simplification
Support future enhancements

Summary

We explored some impressive new features of the Java platform, with a specific focus on javac, JDK libraries, and various test suites.
Memory management improvements, including heap space efficiencies, memory allocation, and improved garbage collection, represent a powerful new set of Java platform enhancements. Changes to the compilation process, resulting in greater efficiencies, were also part of this discussion. We also covered important improvements such as type testing, annotations, and automated runtime compiler tests. You just enjoyed an excerpt from the book Mastering Java 9, written by Dr. Edward Lavieri and Peter Verhas.
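As a quick, illustrative aside (not from the book), the new version-string scheme from JEP 223 is also exposed programmatically through the Runtime.Version API introduced in Java 9. The following minimal sketch reads the major, minor, and security elements of the running JVM's version:

import java.util.List;

public class VersionDemo {
    public static void main(String[] args) {
        // Runtime.version() was added in Java 9 and reflects the major-minor-security scheme
        Runtime.Version v = Runtime.version();
        System.out.println("Major: " + v.major());       // significant feature release
        System.out.println("Minor: " + v.minor());       // backward-compatible revisions
        System.out.println("Security: " + v.security()); // critical security fixes
        // The full list of numeric elements is also available
        List<Integer> elements = v.version();
        System.out.println("All elements: " + elements);
    }
}

On a 9.0.4 JVM, for example, this would report a major of 9, a minor of 0, and a security value of 4.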
Java Refactoring in NetBeans

Packt
08 Jun 2011
7 min read
NetBeans IDE 7 Cookbook
Over 70 highly focused practical recipes to maximize your output with NetBeans

Introduction

Be warned that many of the refactoring techniques presented in this article might break some code. NetBeans, and other IDEs for that matter, make it easy to revert changes, but of course be wary of things going wrong. With that in mind, let's dig in.

Renaming elements

This recipe focuses on how the IDE handles the renaming of all elements of a project: the project itself, classes, methods, variables, and packages.

How to do it...

Let's create the code to be renamed:

Create a new project; this can be achieved by either clicking File and then New Project or pressing Ctrl+Shift+N.
On the New Project window, choose Java on the Categories side, and on the Projects side select Java Application. Then click Next.
Under Name and Location, name the project RenameElements and click Finish.
With the project created, we will need to clear the RenameElements.java class of the main method and insert the following code:

package renameelements;

import java.io.File;

public class RenameElements {
    private void printFiles(String string) {
        File file = new File(string);
        if (file.isFile()) {
            System.out.println(file.getPath());
        } else if (file.isDirectory()) {
            for (String directory : file.list())
                printFiles(string + file.separator + directory);
        }
        if (!file.exists())
            System.out.println(string + " does not exist.");
    }
}

The next step is to rename the package, so place the cursor on top of the package name, renameelements, and press Ctrl+R. A Rename dialog pops up with the package name. Type util under New Name and click on Refactor.
Our class contains several variables we can rename: Place the cursor on top of the String parameter named string and press Ctrl+R. Type path and press Enter.
Let's rename the other variables: Rename file into filePath.
To rename methods, perform the steps below: Place the cursor on top of the method declaration, printFiles, right-click it, then select Refactor and Rename.... On the Rename Method dialog, under New Name enter recursiveFilePrinting and press Refactor.
Then let's rename classes: To rename a class, navigate to the Projects window and press Ctrl+R on the RenameElements.java file. On the Rename Class dialog enter FileManipulator and press Enter.
And finally, renaming an entire project: Navigate to the Projects window, right-click on the project name, RenameElements, and choose Rename.... Under Project Name enter FileSystem and tick Also Rename Project Folder; after that, click on Rename.

How it works...

Renaming a project works a bit differently from renaming a variable, since in this action NetBeans needs to rename the folder where the project is placed. The Ctrl+R shortcut is not enough in itself, so NetBeans shows the Rename Project dialog. This emphasizes to the developer that something deeper is happening. When renaming a project, NetBeans gives the developer the possibility of renaming the folder where the project is contained to the same name as the project. This is a good practice and, more often than not, is followed.

Moving elements

NetBeans enables the developer to easily move classes around different projects and packages. No more breaking compatibility when moving those classes around, since all of it is seamlessly handled by the IDE.

Getting ready

For this recipe we will need a Java project and a Java class so we can demonstrate how moving elements really works. The existing code, created in the previous recipe, is going to be enough.
Also, you can try doing this with your own code, since moving classes is not such a complicated step that it can't be undone. Let's create a project:

Create a new project, which can be achieved either by clicking File and then New Project or pressing Ctrl+Shift+N.
In the New Project window, choose Java on the Categories side and Java Application on the Projects side, then click Next.
Under Name and Location, name the project MovingElements and click Finish.
Now right-click on the movingelements package, select New... and Java Class....
On the New Java Class dialog enter the class name as Person. Leave all the other fields with their default values and click Finish.

How to do it...

Place the cursor inside Person.java and press Ctrl+M.
Select a working project from the Project field.
Select Source Packages in the Location field.
Under the To Package field, enter classextraction.

How it works...

When clicking the Refactor button, the class is removed from the current project and placed in the project that was selected from the dialog. The package in that class is then updated to match.

Extracting a superclass

Extracting superclasses enables NetBeans to add different levels of hierarchy even after the code is written. Usually, requirements change in the middle of development, and rewriting classes to support inheritance would be quite complicated and time-consuming. NetBeans enables the developer to create those superclasses in a few clicks and, by understanding how this mechanism works, even create superclasses that extend other superclasses.

Getting ready

We will need to create a project based on the Getting Ready section of the previous recipe, since it is very similar. The only change from the previous recipe is that this recipe's project name will be SuperClassExtraction. After project creation:

Right-click on the superclassextraction package, select New... and Java Class....
On the New Java Class dialog enter the class name as DataAnalyzer. Leave all the other fields with their default values and click Finish.
Replace the entire content of DataAnalyzer.java with the following code:

package superclassextraction;

import java.util.ArrayList;

public class DataAnalyzer {
    ArrayList<String> data;
    static final boolean CORRECT = true;
    static final boolean INCORRECT = false;

    private void fetchData() {
        //code
    }

    void saveData() {
    }

    public boolean parseData() {
        return CORRECT;
    }

    public String analyzeData(ArrayList<String> data, int offset) {
        //code
        return "";
    }
}

Now let's extract our superclass.

How to do it...

Right-click inside the DataAnalyzer.java class, select Refactor and Extract Superclass....
When the Extract Superclass dialog appears, enter the Superclass Name as Analyzer.
On Members to Extract, select all members, but leave saveData out.
Under the Make Abstract column select analyzeData() and leave parseData(), saveData(), fetchData() out. Then click Refactor.

How it works...

When the Refactor button is pressed, NetBeans copies the marked methods from DataAnalyzer.java and re-creates them in the superclass. NetBeans deals intelligently with methods marked as abstract. The abstract methods are moved up in the hierarchy and the implementation is left in the concrete class. In our example, analyzeData is moved to the abstract class but marked as abstract; the real implementation is then left in DataAnalyzer. NetBeans also supports the moving of fields, in our case the CORRECT and INCORRECT fields.
The following is the code in DataAnalyzer.java:

public class DataAnalyzer extends Analyzer {
    public void saveData() {
        //code
    }

    public String analyzeData(ArrayList<String> data, int offset) {
        //code
        return "";
    }
}

The following is the code in Analyzer.java:

public abstract class Analyzer {
    static final boolean CORRECT = true;
    static final boolean INCORRECT = false;
    ArrayList<String> data;

    public Analyzer() {
    }

    public abstract String analyzeData(ArrayList<String> data, int offset);

    public void fetchData() {
        //code
    }

    public boolean parseData() {
        //code
        return DataAnalyzer.CORRECT;
    }
}

There's more...

Let's learn how to implement parent class methods.

Implementing parent class methods

Let's add a method to the parent class:

Open Analyzer.java and enter the following code:

public void clearData() {
    data.clear();
}

Save the file.
Open DataAnalyzer.java, press Alt+Insert and select Override Method....
In the Generate Override Methods dialog select the clearData() option and click Generate.
NetBeans will then override the method and add the implementation to DataAnalyzer.java:

@Override
public void clearData() {
    super.clearData();
}
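To see the benefit of the extraction, here is a small, hypothetical usage sketch (not part of the recipe) that works with the concrete DataAnalyzer purely through its new Analyzer supertype, assuming the classes generated above:

package superclassextraction;

import java.util.ArrayList;

public class AnalyzerDemo {
    public static void main(String[] args) {
        // Calling code can now depend on the abstract Analyzer type only
        Analyzer analyzer = new DataAnalyzer();

        ArrayList<String> data = new ArrayList<>();
        data.add("sample");

        // analyzeData() is abstract in Analyzer and implemented in DataAnalyzer
        String result = analyzer.analyzeData(data, 0);

        // parseData() was moved up, so it is inherited from Analyzer
        System.out.println(analyzer.parseData() + " " + result);
    }
}

Any further analyzers extracted the same way can be swapped in without touching this calling code, which is the main payoff of introducing the superclass after the fact.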
Stack Overflow survey data further confirms Python's popularity as it moves above Java in the most used programming language list

Richard Gall
10 Apr 2019
5 min read
This year's Stack Overflow Developer Survey results provided a useful insight into how the programming language ecosystem is evolving. Perhaps the most remarkable - if unsurprising - insight was the continued and irresistible rise of Python. This year, for the first time, it finished higher in the rankings than Java. We probably don't need another sign that Python is taking over the world, but this is certainly another one to add to the collection.  What we already know about Python's popularity as a programming language Okay, so the Stack overflow survey results weren't that surprising because Python's growth is well-documented. The language has been shooting up the TIOBE rankings, coming third for the first time back in September 2018. The most recent ranking has seen it slip to fourth (C++ is making a resurgence - but that's a story for another time...), but it isn't in decline - it's still growing. In fact, despite moving back into fourth, it's still growing at the fastest rate of any programming language, with 2.36% growth in its rating. For comparison, C++'s rate of growth in the rankings is 1.62%. But it's not just about TIOBE rankings. Even back in September 2017 the Stack Overflow team were well aware of Python's particularly astonishing growth in high-income countries. Read next: 8 programming languages to learn in 2019 Python's growth in the Stack Overflow survey since 2013 It has been pretty easy to trace the growth in the use of Python through the results of every recent Stack Overflow survey. From 2016, it has consistently been on the up: 2013: 21.9% (6th position in the rankings) 2014: 23.4% (again, 6th position in the rankings) 2015: 23.8% (6th) 2016: 24.9% (6th) 2017: 32% (moving up to 5th...) 2018: 38.8% (down to 7th but with a big percentage increase) 2019: 41.7% (4th position) But more interestingly, it would seem that this growth in usage has been driving demand for it. Let's take a look at how things have changed in the 'most wanted' programming language since 2015 - this is the "percentage of developers who are not developing with the language or technology but have expressed interest in developing with it:"  2015: 14.8% (3rd) 2016: 13.3% (4th) 2017: 20.6% (1st) 2018: 25.1% (1st) 2019: 25.7% (1st) Alongside that, it's also worth considering just how well-loved Python is. A big part of this is probably the fact that Python is so effective for the people using it, and helps them solve the problems they want to solve. Those percentages are growing, even though it didn't take top position this year (this is described by Stack Overflow as the "percentage of developers who are developing with the language or technology and have expressed interest in continuing to develop with it"): 2015: 66.6% (10th position) 2016: 62.5% (9th) 2017: 62.7% (6th) 2018: 68% (3rd) 2019: 73.1% (2nd, this time pipped by Rust to the top spot) What's clear here is that Python has a really strong foothold both in the developer mind share (ie. developers believe it's something worth learning) and in terms of literal language use. Obviously, it's highly likely that both things are related - but whatever the reality, it's good to see that process happening in data from the last half a decade. Read next: 5 blog posts that could make you a better Python programmer What's driving the popularity of Python? The obvious question, then, is why Python is growing so quickly. There are plenty of theories out there, and there are certainly plenty of blog posts on the topic. 
But ultimately, Python's popularity boils down to a few key things.

Python is a flexible language

One of the key reasons for Python's growth is its flexibility. It isn't confined to a specific domain. This goes some way toward explaining its growth - because it's not limited to a specific job role or task, a huge range of developers are finding uses for it. This has a knock-on effect - because the community of users continues to grow, there is much more impetus for developing tools that can support and facilitate the use of Python in diverse domains. Indeed, with the exception of JavaScript, Python is a language that many developers experience through its huge range of related tools and libraries.

The growth of data science and machine learning

While Python isn't limited to a specific domain, the immense rise in interest in machine learning and data analytics has been integral to Python's popularity. With so much data available to organizations and their employees, Python is a language that allows them to actually leverage it.

Read next: Why is Python so good for AI and Machine Learning? 5 Python Experts Explain

Python's easy to learn

The final key driver of Python's growth is the fact that it is relatively easy to learn. It's actually a pretty good place to begin if you're new to programming. Going back to the first point, it's precisely because it's flexible that people who might not typically write code or see themselves as developers could see Python as a neat solution to a problem they're trying to solve. Because the learning curve isn't particularly steep, it introduces these people to the foundational elements of programming. Something which can only be a good thing, right?

The future of Python

It's easy to get excited about Python's growth, but what's particularly intriguing is what it can indicate about the wider software landscape. That's perhaps a whole new question, but from a burgeoning army of non-developer professionals powered by Python to every engineer wanting to unlock automation, it would appear that the growth of Python is both a response to and a symptom of significant changes.
OpenStreetMap: Gathering Data using GPS

Packt
23 Sep 2010
19 min read
  OpenStreetMap Be your own cartographer Collect data for the area you want to map with this OpenStreetMap book and eBook Create your own custom maps to print or use online following our proven tutorials Collaborate with other OpenStreetMap contributors to improve the map data Learn how OpenStreetMap works and why it's different to other sources of geographical information with this professional guide Read more about this book (For more resources on OpenStreetMap, see here.) OpenStreetMap is made possible by two technological advances: Relatively affordable, accurate GPS receivers, and broadband Internet access. Without either of these, the job of building an accurate map from scratch using crowdsourcing would be so difficult that it almost certainly wouldn't work. Much of OpenStreetMap's data is based on traces gathered by volunteer mappers, either while they're going about their daily lives, or on special mapping journeys. This is the best way to collect the source data for a freely redistributable map, as each contributor is able to give their permission for their data to be used in this way. The traces gathered by mappers are used to show where features are, but they're not usually turned directly into a map. Instead, they're used as a backdrop in an editing program, and the map data is drawn by hand on top of the traces. This means you don't have to worry about getting a perfect trace every time you go mapping, or about sticking exactly to paths or roads. Errors are canceled out over time by multiple traces of the same features. OpenStreetMap uses other sources of data than mappers' GPS traces, but they each have their own problems: Out-of-copyright maps are out-of-date, and may be less accurate than modern surveying methods. Aerial imagery needs processing before you can trace it, and it doesn't tell you details such as street names. Eventually, someone has to visit locations in person to verify what exists in a particular place, what it's called, and other details that you can't discern from an aerial photograph If you already own a GPS and are comfortable using it to record traces, you can skip the first section of this article and go straight to Techniques. If you want very detailed information about surveying using GPS, you can read the American Society of Civil Engineers book on the subject, part of which is available on Google Books at http://bit.ly/gpssurveying. Some of the details are out-of-date, but the general principles still hold. If you are already familiar with the general surveying techniques, and are comfortable producing information in GPX format, you can skip most of this article and head straight for the section Adding your traces to OpenStreetMap. What is GPS? GPS stands for Global Positioning System, and in most cases this refers to a system run by the US Department of Defense, properly called NAVSTAR. The generic term for such a system is a Global Navigation Satellite System (GNSS), of which NAVSTAR is currently the only fully operational system. Other equivalent systems are in development by the European Union (Galileo), Russian Federation (GLONASS), and the People's Republic of China (Compass). OpenStreetMap isn't tied to any one GNSS system, and will be able to make use of the others as they become available. The principles of operation of all these systems are essentially the same, so we'll describe how NAVSTAR works at present. NAVSTAR consists of three elements: the space segment, the control segment, and the user segment. 
The space segment is the constellation of satellites orbiting the Earth. The design of NAVSTAR is for 24 satellites, of which 21 are active and three are on standby. However, there are currently 31 satellites in use, as replacements have been launched without taking old satellites out of commission. Each satellite has a highly accurate atomic clock on board, and all clocks in all satellites are kept synchronized. Each satellite transmits a signal containing the time and its own position in the sky. The control segment is a number of ground stations, including a master control station in Colorado Springs. These stations monitor the signal from the satellites and transmit any necessary corrections back to them. The corrections are necessary because the satellites themselves can stray from their predicted paths. The user segment is your GPS receiver. This receives signals from multiple satellites, and uses the information they contain to calculate your position. Your receiver doesn't transmit any information, and the satellites don't know where you are. The receiver has its own clock, which needs to be synchronized with those in the space segment to perform its calculations. This isn't the case when you first turn it on, and is one of the reasons why it can take time to get a fix. Your GPS receiver calculates your position by receiving messages from a number of satellites, and comparing the time included in each message to its own clock. This allows it to calculate your approximate distance from each satellite, and from that, your position on the Earth. If it uses three satellites, it can calculate your position in two dimensions, giving you your latitude (lat) and longitude (long). With signals from four satellites, it can give you a 3D fix, adding altitude to lat and long. The more satellites your receiver can "see", the more accurate the calculated position will be. Some receivers are able to use signals from up to 12 satellites at once, assuming the view of the satellites isn't blocked by buildings, trees, or people. You're obviously very unlikely to get a GPS fix indoors. Many GPS receivers can calculate the amount of error in your position due to the configuration of satellites you're using. Called the Dilution of Precision (DOP), the number produced gives you an idea of how good a fix you have given the satellites you can get a signal from, and where they are in the sky. The higher the DOP, the less accurate your calculated position is. The precision of a GPS fix improves with the distance between the satellites you're using. If they're close together, such as mostly directly overhead, the DOP will be high. Use signals from satellites spread evenly across the sky, and your position will be more accurate. Which satellites your receiver uses isn't something you can control, but more modern GPS chipsets will automatically try to use the best configuration of satellites available, rather than just those with the strongest signals. DOP only takes into account errors caused by satellite geometry, not other sources of error, so a low DOP isn't a guarantee of absolute accuracy. The system includes the capability to introduce intentional errors into the signal, so that only limited-accuracy positioning is available to non-military users. This capability, called Selective Availability (SA), was in use until 2000, when President Clinton ordered it to be disabled. Future NAVSTAR satellites will not have SA capabilities, so the disablement is effectively permanent.
The error introduced by SA reduced the horizontal accuracy of a civilian receiver, typically to 10m, but the error could be as high as 100m. Had SA still been in place, it's unlikely that OpenStreetMap would have been as successful. NAVSTAR uses a coordinate system known as WGS84, which defines a spheroid representing the Earth, and a fixed line of longitude or datum from which other longitudes are measured. This datum is very close to, but not exactly the same as the Prime Meridian at Greenwich in South East London. The equator of the spheroid is used as the datum for latitude. Other coordinate systems exist, and you should note that no printed maps use WGS84, but instead use a slightly different system that makes maps of a given area easier to use. Examples of other coordinate systems include the OSGB36 system used by British national grid references. When you create a map from raw geographic data, the latitudes and longitudes are converted to the x and y coordinates of a flat plane using an algorithm called a projection. You've probably heard of the Mercator projection, but there are many others, each of which is suitable for different areas and purposes. What's a GPS trace? A GPS trace or tracklog is simply a record of position over time. It shows where you traveled while you were recording the trace. This information is gathered using a GPS receiver that calculates your position and stores it every so many seconds, depending on how you have configured your receiver. If you record a trace while you're walking along a path, what you get is a trace that shows you where that path is in the world. Plot these points on a graph, and you have the start of a map. Walk along any adjoining paths and plot these on the same graph, and you have something you can use to navigate. If many people generate overlapping traces, eventually you have a fully mapped area. This is the general principle of crowdsourcing geographic data. You can see the result of many combined traces in the following image. This is the junction of the M4 and M25 motorways, to the west of London. The motorways themselves and the slip roads joining them are clearly visible. Traces are used in OpenStreetMap to show where geographical features are, but usually only as a source for drawing over, not directly. They're also regarded as evidence that a mapper has actually visited the area in question, and not just copied the details from another copyrighted map. Most raw GPS traces aren't suitable to be made directly into maps, because they contain too many points for a given feature, will drift relative to a feature's true position, and you'll also take an occasional detour. Although consumer-grade GPS receivers are less accurate than those used by professional surveyors, if enough traces of the same road or path are gathered, the average of these traces will be very close to the feature's true position. OpenStreetMap allows mappers to make corrections to the data over time as more accurate information becomes available. In addition to your movements, most GPS receivers allow you to record specific named points, often called waypoints. These are useful for recording the location of point features, such as post boxes, bus stops, and other amenities. We'll cover ways of using waypoints later in the article. What equipment do I need? To collect traces suitable for use in OpenStreetMap, you'll need some kind of GPS receiver that's capable of recording a log of locations over time, known as a track log, trace, or breadcrumb trail. 
This could be a hand-held GPS receiver, a bicycle-mounted unit, a combination of a GPS receiver and a smartphone, or in some cases a vehicle satellite navigation system. There are also some dedicated GPS logger units, which don't provide any navigation function, but merely record a track log for later processing. You'll also need some way of getting the recorded traces off your receiver and onto your PC. This could be a USB or serial cable, a removable memory card, or possibly a Bluetooth connection. There are reviews of GPS units by mappers in the OpenStreetMap wiki. There are also GPS receivers designed specifically for surveying, which have very sensitive antennas and link directly into geographic information systems (GIS). These tend to be very expensive and less portable than consumer-grade receivers. However, they're capable of producing positioning information accurate to a few centimeters rather than meters. You also need a computer connected to the Internet. A broadband connection is best, as once you start submitting data to OpenStreetMap, you will probably end up downloading lots of map tiles. It is possible to gather traces and create mapping data while disconnected from the Internet, but you will need to upload your data and see the results at some point. OpenStreetMap data itself is usually represented in Extensible Markup Language (XML) format, and can be compressed into small files. The computer itself can be almost any kind, as long as it has a web browser, and can run one of the editors, which Windows, Mac OS X, and Linux all can. You'll probably need some other kit while mapping to record additional information about the features you're mapping. Along with recording the position of each feature you map, you'll need to note things such as street names, route numbers, types of shops, and any other information you think is relevant. While this information won't be included in the traces you upload on openstreetmap.org, you'll need it later on when you're editing the map. Remember that you can't look up any details you miss on another map without breaking copyright, so it's important to gather all the information you need to describe a feature yourself. A paper notebook and pencil is the most obvious way of recording the extra information. They are inexpensive and simple to use, and have no batteries to run out. However, it's difficult to use on a bike, and impossible if you're driving, so using this approach can slow down mapping. A voice recorder is more expensive, but easier to use while still moving. Record a waypoint on your GPS receiver, and then describe what that waypoint represents in a voice recording. If you have a digital voice recorder, you can download the notes onto your PC to make them easier to use, and JOSM—the Java desktop editing application—has a support for audio mapping built-in. A digital camera is useful for capturing street names and other details, such as the layout of junctions. Some recent cameras have their own built-in GPS, and others can support an external receiver, and will add the latitude, longitude, and possibly altitude, often known as geotags, to your pictures automatically. For those that don't, you can still use the timestamp on the photo to match it to a location in your GPS traces. We'll cover this later in the article. Some mappers have experimented with video recordings while mapping, but the results haven't been encouraging so far. 
Some of the problems with video mapping are: It's difficult to read street signs on zoomed-out video images, and zooming in on signs is impractical. If you're recording while driving or riding a bike, the camera can only point in one direction at once, while the details you want to record may be in a different direction. It's difficult to index recordings when using consumer video cameras, so you need to play the recording back in real time to extract the information, a slow process. Automatic processing of video recordings taken with multiple cameras would make the process easier, but this is currently beyond what volunteer mappers are able to afford. Smartphones can combine several of these functions, and some include their own GPS receiver. For those that don't, or where the internal GPS isn't very good, you can use an external Bluetooth GPS module. Several applications have been developed that make the process of gathering traces and other information on a smartphone easier. Look on the Smartphones page on the OpenStreetMap wiki at http://wiki.openstreetmap.org/wiki/Smartphones. Making your first trace Before you set off on a long surveying trip, you should familiarize yourself with the methods involved in gathering data for OpenStreetMap. This includes the basic operation of your GPS receiver, and the accompanying note-taking. Configuring your GPS receiver The first thing to make sure is that your GPS is using the W GS84 coordinate system. Many receivers also include a local coordinate system in their settings to make them easier to use with printed maps. So check in your settings which system you're getting your location in. OpenStreetMap only uses WGS84, so if you record your traces in the wrong system, you could end up placing features tens or even hundreds of meters away from their true location. Next, you should set the recording frequency as high as it will go. You need your GPS to record as much detail as possible, so setting it to record your location as often as possible will make your traces better. Some receivers can record a point once per second; if yours doesn't, it's not a problem, but use the highest setting (shortest interval) possible. Some receivers also have a "smart" mode that only records points where you've changed direction significantly, which is fine for navigation, but not for turning into a map. If your GPS has this, you'll need to disable it. One further setting on some GPSs is to only record a point every so many metres, irrespective of how much time has elapsed. Turning this on can be useful if you're on foot and taking it easy, but otherwise keep it turned off. Another setting to check, particularly if you're using a vehicle satellite navigation system, is "snap to streets" or a similar name. When your receiver has this setting on, your position will always be shown as being on a street or a path in its database, even if your true position is some way off. This causes two problems for OpenStreetMap: if you travel down a road that isn't in your receiver's database, its position won't be recorded, and the data you do collect is effectively derived from the database, which not only breaks copyright, but also reproduces any errors in that database. Next, you need to know how to start and stop recording. Some receivers can record constantly while they're turned on, but many will need you to start and stop the process. Smartphone-based recorder software will definitely require starting and stopping. 
If you're using a smartphone with an external Bluetooth GPS module, you may also need to pair the devices and configure the receiver in your software. Once you're happy with your settings, you can have a trial run. Make a journey you have to make anyway, or take a short trip to the shops and back (or some other reasonably close landmark if you don't live near shops). It's important that you're familiar with your test area, as you'll use your local knowledge to see how accurate your results are. Checking the quality of your traces When you return, get the trace you've recorded off your receiver, and take a look at it on your PC using an OpenStreetMap editor or by uploading the trace. Now, look at the quality of the trace. Some things to look out for are, as follows: Are lines you'd expect to be straight actually straight, or do they have curves or deviations in them? A good trace reflects the shape of the area you surveyed, even if the positioning isn't 100% accurate. I f you went a particular way twice during your trip, how well do the two parts of the trace correspond? Ideally, they should be parallel and within a few meters from each other. When you change direction, does the trace reflect that change straight away, or does your recorded path continue in the same direction and gradually turn to your new heading? If you've recorded any waypoints, how close are they to the trace? They should ideally be directly on top of the trace, but certainly no more than a few meters away. The previous image shows a low-quality GPS trace. If you look at the raw trace on the left, you can see a few straight lines and differences in traces of the same area. The right-hand side shows the trace with the actual map data for the area, showing how they differ. In this image, we see a high-quality GPS trace. This trace was taken by walking along each side of the road where possible. Note that the traces are straight and parallel, reflecting the road layout. The quality of the traces makes correctly turning them into data much easier. If you notice these problems in your test trace, you may need to alter where you keep your GPS while you're mapping. Sometimes, inaccuracy is a result of the make-up of the area you're trying to map, and nothing will change that, short of using a more sensitive GPS. For the situations where that's not the case, the following are some tips on improving accuracy. Making your traces more accurate You can dramatically improve the accuracy of your traces by putting your GPS where it can get a good signal. Remember that it needs to have a good signal all the time, so even if you seem to get a good signal while you're looking at your receiver, it could drop in strength when you put it away. If you're walking, the best position is in the top pocket of a rucksack, or attached to the shoulder strap. Having your GPS in a pocket on your lower body will seriously reduce the accuracy of your traces, as your body will block at least half of the sky. If you're cycling, a handlebar mount for your GPS will give it a good view of the sky, while still making it easy to add waypoints. A rucksack is another option. In a vehicle, it's more difficult to place your GPS where it will be able to see most of the sky. External roof-mounted GPS antennas are available, but they're not cheap and involve drilling a hole in the roof of your car. The best location is as far forward on your dashboard as possible, but be aware some modern car windscreens contain metal, and may block GPS signals. 
In this case, you may be able to use the rear parcel shelf, or a side window providing you can secure your GPS. Don't start moving until you have a good fix. Although most GPS receivers can get a fix while you're moving, it will take longer and may be less accurate. More recent receivers have a "warm start" feature where they can get a fix much faster by caching positioning data from satellites. You also need to avoid bias in your traces. This can occur when you tend to use one side of a road more than the other, either because of the route you normally take, or because there is only a pavement on one side of the road. The result of this is that the traces you collect will be off-center of the road's true position by a few meters. This won't matter at first, and will be less of a problem in less densely-featured areas, but in high-density residential areas, this could end up distorting the map slightly.
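If you want to put a number on how far apart two recorded points are - for example, to check whether two traces of the same road really do fall within a few metres of each other - the following small sketch (not from the book, and with purely hypothetical coordinates) applies the haversine formula to two WGS84 positions:

public class TraceCheck {
    // Mean Earth radius in metres
    private static final double EARTH_RADIUS_M = 6371000.0;

    // Haversine distance between two WGS84 points, in metres
    static double distanceMetres(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // Two hypothetical trackpoints taken from traces of the same stretch of road
        double d = distanceMetres(51.5074, -0.1278, 51.5075, -0.1279);
        System.out.printf("Points are %.1f metres apart%n", d);
    }
}

Points from parallel traces of the same road should typically come out within a few metres of each other; much larger values are a hint of the accuracy problems described above.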
Microservices and Service Oriented Architecture

Packt
09 Mar 2017
6 min read
Microservices are an architecture style and an approach to software development that satisfies modern business demands. They are not a new invention as such; they are instead an evolution of previous architecture styles. Many organizations today use them - they can improve organizational agility, speed of delivery, and ability to scale. Microservices give you a way to develop more physically separated modular applications. This tutorial has been taken from Spring 5.0 Microservices - Second Edition. Microservices are similar to conventional service-oriented architectures. In this article, we will see how microservices are related to SOA.

The emergence of microservices

Many organizations, such as Netflix, Amazon, and eBay, successfully used what is known as the 'divide and conquer' technique to functionally partition their monolithic applications into smaller atomic units. Each one performs a single function - a 'service'. These organizations solved a number of prevailing issues they were experiencing with their monolithic applications. Following their success, many other organizations started adopting this as a common pattern to refactor their monolithic applications. Later, evangelists termed this pattern microservices architecture. Microservices originated from the idea of Hexagonal Architecture, coined by Alistair Cockburn back in 2005. Hexagonal Architecture, or the Hexagonal pattern, is also known as the Ports and Adapters pattern. Microservices can be defined as:

"...an architectural style or an approach for building IT systems as a set of business capabilities that are autonomous, self contained, and loosely coupled."

The following diagram depicts a traditional N-tier application architecture with a presentation layer, business layer, and database layer: Modules A, B, and C represent three different business capabilities. The layers in the diagram represent separation of architecture concerns. Each layer holds all three business capabilities pertaining to that layer. The presentation layer has web components of all three modules, the business layer has business components of all three modules, and the database hosts tables of all three modules. In most cases, layers can be physically distributed, whereas modules within a layer are hardwired. Let's now examine a microservice-based architecture: As we can see in the preceding diagram, the boundaries are inverted in the microservices architecture. Each vertical slice represents a microservice. Each microservice has its own presentation layer, business layer, and database layer. Microservices are aligned toward business capabilities. By doing so, changes to one microservice do not impact the others. There is no standard for communication or transport mechanisms for microservices. In general, microservices communicate with each other using widely adopted lightweight protocols, such as HTTP and REST, or messaging protocols, such as JMS or AMQP. In specific cases, one might choose more optimized communication protocols, such as Thrift, ZeroMQ, Protocol Buffers, or Avro. As microservices are more aligned to business capabilities and have independently manageable lifecycles, they are the ideal choice for enterprises embarking on DevOps and cloud. DevOps and cloud are two facets of microservices.

How do microservices compare to Service Oriented Architectures?

One of the common questions that arises when dealing with microservices architecture is how it differs from SOA. SOA and microservices follow similar concepts.
Earlier in this article, we saw that microservices evolved from SOA and that many service characteristics are common to both approaches. However, are they the same or different? As microservices evolved from SOA, many characteristics of microservices are similar to SOA. Let's first examine the definition of SOA. The Open Group definition of SOA is as follows:

"SOA is an architectural style that supports service-orientation. Service-orientation is a way of thinking in terms of services and service-based development and the outcomes of services. A service:
Is self-contained
May be composed of other services
Is a "black box" to consumers of the service"

You have learned similar aspects in microservices as well. So, in what way are microservices different? The answer is - it depends. The answer to the previous question could be yes or no, depending upon the organization and its adoption of SOA. The difference between microservices and SOA lies in how an organization has approached SOA. In order to get clarity, a few cases will be examined here.

Service oriented integration

Service-oriented integration refers to a service-based integration approach used by many organizations: Many organizations would have used SOA primarily to solve their integration complexities, also known as integration spaghetti. Generally, this is termed Service Oriented Integration (SOI). In such cases, applications communicate with each other through a common integration layer using standard protocols and message formats, such as SOAP/XML-based web services over HTTP or Java Message Service (JMS). These types of organizations focus on Enterprise Integration Patterns (EIP) to model their integration requirements. This approach strongly relies on a heavyweight Enterprise Service Bus (ESB), such as TIBCO BusinessWorks, WebSphere ESB, Oracle ESB, and the like. Most of the ESB vendors also packaged a set of related products, such as rules engines, business process management engines, and so on, as a SOA suite. Such organizations' integrations are deeply rooted in these products. They either write heavy orchestration logic in the ESB layer or business logic itself in the service bus. In both cases, all enterprise services are deployed and accessed through the ESB. These services are managed through an enterprise governance model. For such organizations, microservices are altogether different from SOA.

Legacy modernization

SOA is also used to build service layers on top of legacy applications, as shown in the following diagram: Another category of organizations would have used SOA in transformation projects or legacy modernization projects. In such cases, the services are built and deployed in the ESB, connecting to backend systems using ESB adapters. For these organizations, microservices are different from SOA.

Service oriented application

Some organizations would have adopted SOA at an application level: In this approach, as shown in the preceding diagram, lightweight integration frameworks, such as Apache Camel or Spring Integration, are embedded within applications to handle service-related cross-cutting capabilities, such as protocol mediation, parallel execution, orchestration, and service integration.
As some of the lightweight integration frameworks had native Java object support, such applications would have even used native Plain Old Java Object (POJO) services for integration and data exchange between services. As a result, all services have to be packaged as one monolithic web archive. Such organizations could see microservices as the next logical step of their SOA.

Monolithic migration using SOA

The following diagram represents Logical System Boundaries: The last possibility is transforming a monolithic application into smaller units after hitting the breaking point with the monolithic system. They would have broken the application into smaller, physically deployable subsystems, similar to the Y-axis scaling approach explained earlier, and deployed them as web archives on web servers or as JARs deployed on some home-grown containers. These subsystems, exposed as services, would have used web services or other lightweight protocols to exchange data between them. They would have also used SOA and service design principles to achieve this. Such organizations may tend to think that microservices are the same old wine in a new bottle. To make this contrast concrete, a minimal sketch of a single microservice follows the resource links below.

Further resources on this subject: Building Scalable Microservices [article] Breaking into Microservices Architecture [article] A capability model for microservices [article]
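Returning to the architectural contrast above, here is a minimal, hypothetical sketch of a single microservice in the Spring stack this tutorial is drawn from: one narrowly scoped business capability exposed over plain HTTP/REST, owning its own lifecycle. The class name, endpoint, and payload are purely illustrative, and the snippet assumes a Spring Boot project with the spring-boot-starter-web dependency on the classpath:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class CustomerServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(CustomerServiceApplication.class, args);
    }

    // A single, narrowly scoped business capability exposed over HTTP/REST
    @GetMapping("/customers/{id}")
    public String customer(@PathVariable String id) {
        // In a real service this would come from the service's own data store
        return "{\"id\": \"" + id + "\", \"name\": \"Sample Customer\"}";
    }
}

Each such service can be built, deployed, and scaled independently of the others, which is the property the article contrasts with ESB-centric SOA.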
Github Sponsors: Could corporate strategy eat FOSS culture for dinner?

Sugandha Lahoti
24 May 2019
4 min read
Yesterday, at the GitHub Satellite 2019 event, GitHub launched probably its most game-changing yet debatable feature - Sponsors. GitHub Sponsors works much like Patreon, in the sense that developers can sponsor the efforts of a contributor "seamlessly through their GitHub profiles". Developers will be able to opt into having a "Sponsor me" button on their GitHub repositories and open-source projects, where they will be able to highlight their funding models. GitHub shared that it will cover payment processing fees for the first 12 months of the program to celebrate the launch. "100% of your sponsorship goes to the developer," GitHub wrote in an announcement. At launch, this feature is marked as "wait list" and is currently in beta. To start off this program, the code hosting site has also launched the GitHub Sponsors Matching Fund. This means that it will match all contributions up to $5,000 during a developer's first year in GitHub Sponsors. GitHub Sponsors could prove beneficial for developers working on open source software that isn't profitable. This way, they can raise money directly through GitHub, the leading host for open-source software. More importantly, GitHub Sponsors is not just limited to software developers, but open to all open-source contributors, including those who write documentation, provide leadership, or mentor new developers, for example. This, and the promise of zero fees to use the program, has got people excited.
https://twitter.com/rauchg/status/1131807348820008960
https://twitter.com/EricaJoy/status/1131640959886741504
On the flip side, GitHub Sponsors could also limit the essence of what open source is, by financially influencing what developers will work on. It may drive open-source developers to focus on projects that are more likely to attract financial contributions over projects which are more interesting and challenging but aren't likely to find financial backers on GitHub. This can hurt FOSS contributions, as people start to expect to be paid rather than contributing for intrinsic motivations. This, in turn, could lead to toxic politics among project contributors regarding who gets credit and who gets paid. Companies could also use GitHub sponsorships to judge the health of open source projects. People are also speculating that this could be Microsoft's (GitHub's parent company) strategy to centralize and enclose open source community dynamics, as well as benefit from its monetization. Some are also wondering about the possible effects of monetization on OSS, which could lead to mega corporations profiteering off free labor, thus changing the original vision of the open source community.
https://twitter.com/andrestaltz/status/1131521807876591616
Andre Staltz also made an interesting point about the potential of the zero-fee model driving other open source payment models out of existence. He believes that once Microsoft's dominance is achieved, GitHub's commissions could go up.
https://twitter.com/andrestaltz/status/1131526433027837952
A Hacker News user also conjectured that this may get Microsoft access to data on top-notch developers.
"Will this mean that Microsoft gets a bunch of PII on top-notch developers (have to enter name + address info to receive or send payments), and get much more value from that data than I can imagine?"

At present, GitHub is offering this feature as an invite-only beta with a waitlist; it will be interesting to see if and how this will change the dynamics of open source collaboration once it rolls out fully. A tweet observes: "I think it bears repeating that the path to FOSS sustainability is not individuals funding projects. We will only reach sustainability when the companies making profit off our work are returning value to the Commons." Read our full coverage of GitHub Satellite here. To know more about GitHub Sponsors, visit the official blog.

GitHub Satellite 2019 focuses on community, security, and enterprise
GitHub announces beta version of GitHub Package Registry, its new package management service
GitHub deprecates and then restores Network Graph after GitHub users share their disapproval
Cloud Native Applications

Packt
09 Feb 2017
5 min read
In this article by Ranga Rao Karanam, the author of the book Mastering Spring, we will see what Cloud Native applications and the Twelve Factor App are. (For more resources related to this topic, see here.)

Cloud Native applications

Cloud is disrupting the world. A number of possibilities emerge that were never possible before. Organizations are able to provision computing, network, and storage devices on demand. This has high potential to reduce costs in a number of industries. Consider the retail industry, where there is high demand in pockets (Black Friday, the holiday season, and so on). Why should they pay for hardware round the year when they could provision it on demand? While we would like to benefit from the possibilities of the cloud, these possibilities are limited by architecture and the nature of applications. How do we build applications that can be easily deployed on the cloud? That's where Cloud Native applications come into the picture. Cloud Native applications are those that can easily be deployed on the cloud. These applications share a few common characteristics. We will begin by looking at the Twelve Factor App - a combination of common patterns among Cloud Native applications.

Twelve Factor App

The Twelve Factor App evolved from the experiences of engineers at Heroku. It is a list of patterns that are typically used in Cloud Native application architectures. It is important to note that an App here refers to a single deployable unit. Essentially, every microservice is an App (because each microservice is independently deployable).

One codebase

Each App has one codebase in revision control. There can be multiple environments where the App can be deployed. However, all these environments use code from a single codebase. An example of an anti-pattern is building a deployable from multiple codebases.

Dependencies

Explicitly declare and isolate dependencies. Typical Java applications use build management tools like Maven and Gradle to isolate and track dependencies. The following screenshot shows a typical Java application managing dependencies using Maven: The following screenshot shows the content of the file:

Config

All applications have configuration that varies from one environment to another. Configuration is typically littered across multiple locations - application code, property files, databases, environment variables, Java Naming and Directory Interface (JNDI), and system variables are a few examples. A Twelve Factor App should store config in the environment. While environment variables are recommended for managing configuration in a Twelve Factor App, other alternatives, like having a centralized repository for application configuration, should be considered for more complex systems. Irrespective of the mechanism used, we recommend managing configuration outside the application code (independent of the application deployable unit), using one standardized way of configuration.

Backing services

Typically, applications depend on other services being available - data stores and external services, among others. A Twelve Factor App treats backing services as attached resources. A backing service is typically declared via external configuration. Loose coupling to a backing service has many advantages, including the ability to gracefully handle an outage of a backing service.

Build, release, run

Strictly separate the build and run stages.

Build: Creates an executable bundle (EAR, WAR, or JAR) from code and dependencies that can be deployed to multiple environments.
Backing services

Applications typically depend on other services being available: data stores and external services, among others. A Twelve Factor App treats backing services as attached resources. A backing service is typically declared via external configuration. Loose coupling to a backing service has many advantages, including the ability to gracefully handle an outage of that service.

Build, release, run

Strictly separate the build and run stages:

- Build: Creates an executable bundle (ear, war, or jar) from code and dependencies that can be deployed to multiple environments.
- Release: Combines the executable bundle with specific environment configuration to deploy in an environment.
- Run: Runs the application in an execution environment using a specific release.

Building separate executable bundles specific to each environment is an anti-pattern.

Stateless

A Twelve Factor App does not have state. All data that it needs is stored in a persistent store. A sticky session is an anti-pattern.

Port binding

A Twelve Factor App exposes all services using port binding. While it is possible to use other mechanisms to expose services, those mechanisms are implementation dependent. Port binding gives full control over receiving and handling messages, irrespective of where an application is deployed.

Concurrency

A Twelve Factor App achieves more concurrency by scaling out horizontally. Scaling vertically has its limits; scaling out horizontally provides the opportunity to expand without limits.

Disposability

A Twelve Factor App should support elastic scaling, and hence its processes should be disposable: they can be started and stopped when needed. A Twelve Factor App should:

- Have minimal start-up time. A long start-up time means a long delay before an application can take requests.
- Shut down gracefully.
- Handle hardware failures gracefully.

Environment parity

All the environments (development, test, staging, and production) should be similar. They should use the same processes and tools, and with continuous deployment they should have very similar code at any given time. This makes finding and fixing problems easier.

Logs as event streams

Visibility is critical to a Twelve Factor App. Since applications are deployed on the cloud and scaled automatically, it is important to have centralized visibility into what's happening across the different instances of an application. Treating all logs as a stream enables routing the log stream to different destinations for viewing and archival. This stream can be used to debug issues, perform analytics, and create alerting systems based on error patterns.

No distinction of admin processes

A Twelve Factor App treats administrative tasks (migrations, scripts) the same as normal application processes.

Summary

This article explained what Cloud Native applications are and what the Twelve Factor App is.

Further resources on this subject:

- Cloud and Async Communication
- Setting up of Software Infrastructure on the Cloud
- Integrating Accumulo into Various Cloud Platforms
The new WebSocket Inspector will be released in Firefox 71

Fatema Patrawala
17 Oct 2019
4 min read
On Tuesday, the Firefox DevTools team announced that the new WebSocket (WS) inspector will be available in Firefox 71. It is currently ready for developers to use in Firefox Developer Edition.

The WebSocket API is used to create a persistent connection between a client and a server. Because the API sends and receives data at any time, it is used mainly in applications requiring real-time communication. Although it is possible to work directly with the WS API, some existing libraries come in handy and help save time. These libraries can help with connection failures, proxies, authentication and authorization, scalability, and much more. The WS inspector in Firefox DevTools currently supports Socket.IO and SockJS, with support for more protocols still a work in progress.

Key features included in the Firefox WebSocket Inspector

1. The WebSocket Inspector is part of the existing Network panel UI in DevTools. It was already possible to filter the content for opened WS connections in the panel, but now you can see the actual data transferred through WS frames.
2. The WS UI now offers a fresh new Messages panel that can be used to inspect WS frames sent and received through the selected WS connection.
3. The Data and Time columns are visible by default, and you can customize the interface to see more columns by right-clicking on the header.
4. The WS inspector currently supports the following WS protocols: plain JSON, Socket.IO, and SockJS. SignalR and WAMP will be supported soon.
5. You can use the pause/resume button in the Network panel toolbar to stop intercepting WS traffic.

The Firefox team is still working on a few things for this release, for example a binary payload viewer, indicating closed connections, more protocols such as SignalR and WAMP, and exporting WS frames.

For developers, this is a major improvement, and the community is really happy with the news. One of them comments on Reddit, "Finally! Have been stuck rolling with Chrome whenever I'm debugging websocket issues until now, because it's just so damn useful to see the exact messages sent and received."

Another user commented, "This came at the most perfect time... trying to interface with a Socket.IO server from a Flutter app is difficult without tools to really look at the internals and see what's going on."

Some also feel that with such improvements, Firefox will soon challenge Chromium's current dominance. One comment reads, "I hope that in improving its dev tooling with things like WS inspection, Firefox starts to turn the tide from Chromium's current dominance. Pleasing webdevs seems to be the key to winning browser wars. The general pattern is, the devs switch to their preferred browser. When building sites, they do all their build testing against their favourite browser, and only make sure it functions on other browsers (however poorly) as an afterthought. Then everyone else switches to suit, because it's a better experience. It happened when IE was dominant (partly because of dodgy business practices, but also partly because ActiveX was more powerful than early JS). But then Firefox was faster and had [better] devtools and add-ons, so the devs switched to Firefox and everyone followed suit. Then Chrome came onto the scene as a faster browser with even better devtools, and now Chromium+forks is over three quarters of the browser market share. A browser monopoly is bad for the web ecosystem, no matter what browser happens to be dominant."

To know more about this news, check out the official announcement on the Firefox blog.
- Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
- Cloudflare and Google Chrome add HTTP/3 and QUIC support; Mozilla Firefox soon to follow suit
- Mozilla brings back Firefox's Test Pilot Program with the introduction of Firefox Private Network Beta
7 things Java programmers need to watch for in 2019

Prasad Ramesh
24 Jan 2019
7 min read
Java is one of the most popular and widely used programming languages in the world. Its dominance of the TIOBE index ranking is unmatched for the most part, holding the number 1 position for almost 20 years. Although Java's dominance is unlikely to waver over the next 12 months, there are many important issues and announcements that will demand the attention of Java developers. So, get ready for 2019 with this list of key things in the Java world to watch out for.

#1 Commercial Java SE users will now need a license

Perhaps the most important change for Java in 2019 is that commercial users will have to pay a license fee to use Java SE from February. This move comes as Oracle changes the support model for the Java language. The change currently affects Java SE 8, an LTS release with premier support up to March 2022 and extended support up to 2025. For individual users, however, support and updates will continue until December 2020. The recently released Java SE 11 will also have long-term support, with premier support for five years and extended support for eight years from the release date.

#2 The Java 12 release in March 2019

Since Oracle changed their support model, non-LTS versions will be released every six months and probably won't contain many major changes. JDK 12 is non-LTS; that is not to say that its changes are trivial, as it comes with its own set of new features. It will be generally available in March this year and supported until September, which is when Java 13 will be released. Java 12 will have a couple of new features: some are approved to ship in the March release, and some are still under discussion.

#3 Java 13 release slated for September 2019, with early access out now

So far, there is very little information about Java 13. All we really know at the moment is that it's due to be released in September 2019. Like Java 12, Java 13 will be a non-LTS release. However, if you want an early insight, there is an early access build available to test right now. Some of the JEPs (JDK Enhancement Proposals) in the next section may be set to feature in Java 13, but that's just speculation.

https://twitter.com/OpenJDK/status/1082200155854639104

#4 A bunch of new features in Java in 2019

Even though the major long-term support version of Java, Java 11, was released last year, this year's releases also have some noteworthy features in store. Let's take a look at what the two releases this year might bring.

Confirmed candidates for Java 12:

- Shenandoah, a new low-pause-time garbage collector, added to cause minimal interruption while a program is running and to match modern computing resources. Pause times are expected to stay consistent irrespective of heap size.
- The Microbenchmark Suite, which will make it easier for developers to run existing testing benchmarks or create new ones.
- Revamped switch statements, which should help simplify the process of writing code. Essentially, the switch statement can also be used as an expression (see the sketch after this list).
- The JVM Constants API, which will, as the OpenJDK website explains, "introduce a new API to model nominal descriptions of key class-file and run-time artifacts".
- A single AArch64 port integrated with Java 12, instead of two.
- Default CDS archives.
- G1 mixed collections.
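As a sketch of the revamped switch (a preview feature in JDK 12 at the time of writing, so it needs the --enable-preview flag to compile and run):

```java
import java.time.DayOfWeek;

public class SwitchDemo {
    // Switch used as an expression (JEP 325, preview in JDK 12):
    // each arrow label yields a value and there is no fall-through.
    static String kind(DayOfWeek day) {
        return switch (day) {
            case SATURDAY, SUNDAY -> "weekend";
            default -> "weekday";
        };
    }

    public static void main(String[] args) {
        System.out.println(kind(DayOfWeek.SATURDAY)); // prints: weekend
    }
}
```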
Other features that may not be out with Java 12:

- Raw string literals.
- A packaging tool, designed to make it easier to install and run a self-contained Java application on a native platform.
- Limit Speculative Execution, to help both developers and operations engineers more effectively secure applications against speculative-execution vulnerabilities.

#5 More contributions and features with OpenJDK

OpenJDK is an open source implementation of Java Standard Edition (Java SE), which has contributions from both Oracle and the open source community. As of now, the binaries of OpenJDK are available for the newest LTS release, Java 11. The life cycles of OpenJDK 7 and 8 have even been extended, to June 2020 and 2023 respectively. This suggests that Oracle is interested in the idea of open source and community participation. And why would it not be? Many valuable contributions come from the open source community; Microsoft, for example, seems to have benefited from open sourcing and the submissions that followed. Although Oracle will not support these versions beyond six months from their initial release, Red Hat will be extending support. As the chief architect of the Java platform, Mark Reinhold, said, stewards are the true leaders who can shape what Java should be as a language. These stewards can propose new JEPs, surface new OpenJDK problems that lead to more JEPs, and contribute to the language overall.

#6 Mobile and machine learning job opportunities

In the mobile ecosystem, especially Android, Java is still the most widely used language. Yes, there's Kotlin, but it is still relatively new, and many developers are yet to adopt it. According to an estimate by Indeed, the average salary of a Java developer is about $100K in the US. With the Android ecosystem growing rapidly over the last decade, it's not hard to see what's driving Java's value.

But Java, and the broader Java ecosystem, are about much more than mobile. Although Java's importance in enterprise application development is well known, it is also used in machine learning and artificial intelligence. Even if Python is arguably the most used language in this area, Java has its own set of libraries and is used a lot in enterprise environments. Deeplearning4j, Neuroph, Weka, OpenNLP, RapidMiner, and RL4J are some of the popular Java libraries in artificial intelligence.

#7 Java conferences in 2019

Now that we've talked about the language, possible releases, and new features, let's take a look at the conferences taking place in 2019. Conferences are a good place to hear top professionals present and speak, and for programmers to socialize. Even if you can't attend, they are important fixtures in the calendar for anyone interested in following releases and debates in Java. Here are some of the major Java conferences in 2019 worth checking out:

- JAX is a Java architecture and software innovation conference, to be held in Mainz, Germany on May 6-10 this year, with the Expo running from May 7 to 9. Other than Java, topics like agile, cloud, Kubernetes, DevOps, microservices, and machine learning are also part of this event. Discounts on passes are offered until February 14.
- JBCNConf is happening in Barcelona, Spain from May 27. It will be a three-day conference with talks from notable Java champions, focusing on Java, the JVM, and open source technologies.
- Jfokus is a developer-centric conference taking place in Stockholm, Sweden. It will be a three-day event from February 4-6. Speakers include the Java language architect, Brian Goetz from Oracle, and many other notable experts. The conference covers Java, of course, plus frontend and web, cloud and DevOps, IoT and AI, and future trends.
- JavaZone, one of the biggest conferences, attracting thousands of visitors and hundreds of speakers, turns 18 years old this year. It is usually held in Oslo, Norway in September. Their website for 2019 was not active at the time of writing, but you can check out last year's website.
- Javaland will feature lectures, training, and community activities. Held in Bruehl, Germany from March 19 to 21, attendees can also exhibit at this conference.

If you're working in or around Java this year, there's clearly a lot to look forward to, as well as a few unanswered questions about the evolution of the language. While these changes might not impact the way you work in the immediate term, keeping on top of what's happening and what key figures are saying will set you up nicely for the future.

- 4 key findings from The State of JavaScript 2018 developer survey
- Netflix adopts Spring Boot as its core Java framework
- Java 11 is here with TLS 1.3, Unicode 11, and more updates
Implementing C++ libraries in Delphi for HPC [Tutorial]

Pavan Ramchandani
24 Jul 2018
16 min read
Using C object files in Delphi is hard but possible. Linking to C++ object files is, however, nearly impossible. The problem does not lie within the object files themselves but in C++. While C is hardly more than an assembler with improved syntax, C++ represents a sophisticated high-level language with runtime support for strings, objects, exceptions, and more. All these features are part of almost any C++ program and are as such compiled into (almost) any object file produced by C++.

In this tutorial, we will leverage C++ libraries that enable high performance with Delphi, starting with memory management, an important concern for any high-performance application. This article is an excerpt from a book written by Primož Gabrijelčič, titled Delphi High Performance.

The problem here is that Delphi has no idea how to deal with any of that. A C++ object is not equal to a Delphi object. Delphi has no idea how to call functions of a C++ object, how to deal with its inheritance chain, how to create and destroy such objects, and so on. The same holds for strings, exceptions, streams, and other C++ concepts.

If you can compile the C++ source with C++Builder, then you can create a package (.bpl) that can be used from a Delphi program. Most of the time, however, you will not be dealing with a source project. Instead, you'll want to use a commercial library that only gives you a bunch of C++ header files (.h) and one or more static libraries (.lib). Most of the time, the only Windows version of such a library will be compiled with Microsoft's Visual Studio.

A more general approach to this problem is to introduce a proxy DLL created in C++. You will have to create it in the same development environment as was used to create the library you are trying to link into the project. On Windows, that will in most cases be Visual Studio. That will enable us to include the library without any problems. To allow Delphi to use this DLL (and as such use the library), the DLL should expose a simple interface in the Windows API style. Instead of exposing C++ objects, the API must expose the methods implemented by the objects as normal (non-object) functions and procedures. As the objects cannot cross the API boundary, we must find some other way to represent them on the Delphi side.

Instead of showing how to write a DLL wrapper for an existing (and probably quite complicated) C++ library, I have decided to write a very simple C++ library that exposes a single class, implementing only two methods. As compiling this library requires Microsoft's Visual Studio, which not all of you have installed, I have also included the compiled version (DllLib1.dll) in the code archive.

The Visual Studio solution is stored in the StaticLib1 folder and contains two projects. StaticLib1 is the project used to create the library, while the Dll1 project implements the proxy DLL.

The static library implements the CppClass class, which is defined in the header file, CppClass.h. Whenever you are dealing with a C++ library, the distribution will also contain one or more header files. They are needed if you want to use the library in a C++ project, such as in the proxy DLL Dll1. The header file for the demo library StaticLib1 is shown in the following code. We can see that it declares a single CppClass class, which implements a constructor (CppClass()), a destructor (~CppClass()), a method accepting an integer parameter (void setData(int)), and a function returning an integer (int getSquare()).
The class also contains one integer private field, data:

```cpp
#pragma once

class CppClass {
  int data;
public:
  CppClass();
  ~CppClass();
  void setData(int);
  int getSquare();
};
```

The implementation of the CppClass class is stored in the CppClass.cpp file. You don't need this file when implementing the proxy DLL. When we are using a C++ library, we are strictly coding to the interface, and the interface is stored in the header file. In our case, we have the full source, so we can look inside the implementation too.

The constructor and destructor don't do anything, so I'm not showing them here. The other two methods are as follows. The setData method stores its parameter in the internal field, and the getSquare function returns the squared value of the internal field:

```cpp
void CppClass::setData(int value)
{
  data = value;
}

int CppClass::getSquare()
{
  return data * data;
}
```

This code doesn't contain anything that we couldn't write in 60 seconds in Delphi. It does, however, serve as a perfect simple example for writing a proxy DLL.

Creating such a DLL in Visual Studio is easy. You just have to select File | New | Project, and select the Dynamic-Link Library (DLL) project type from the Visual C++ | Windows Desktop branch. The Dll1 project from the code archive has only two source files. The file dllmain.cpp was created automatically by Visual Studio and contains the standard DllMain method. You can change this file if you have to run project-specific code when a program and/or a thread attaches to, or detaches from, the DLL. In my example, this file was left just as Visual Studio created it.

The second file, StaticLibWrapper.cpp, fully implements the proxy DLL. It starts with two include lines (shown in the following), which bring in the required RTL header stdafx.h and the header definition for our C++ class, CppClass.h:

```cpp
#include "stdafx.h"
#include "CppClass.h"
```

The proxy has to be able to find our header file. There are two ways to do that. We could simply copy it to the folder containing the source files for the DLL project, or we can add it to the project's search path. The second approach can be configured in Project | Properties | Configuration Properties | C/C++ | General | Additional Include Directories. This is also the approach used by the demonstration program.

The DLL project must also be able to find the static library that implements the CppClass object. The path to the library file should be set in the project options, in the Configuration Properties | Linker | General | Additional Library Directories settings. You should put the name of the library (StaticLib1.lib) in the Linker | Input | Additional Dependencies settings.

The next line in the source file defines a macro called EXPORT, which will be used later in the program to mark a function as exported. We have to do that for every DLL function that we want to use from the Delphi code. Later, we'll see how this macro is used:

```cpp
#define EXPORT comment(linker, "/EXPORT:" __FUNCTION__ "=" __FUNCDNAME__)
```

The next part of the StaticLibWrapper.cpp file implements an IndexAllocator class, which is used internally to cache C++ objects. It associates C++ objects with simple integer identifiers, which are then used outside the DLL to represent the object. I will not show this class here, as the implementation is not that important. You only have to know how to use it. This class is implemented as a simple static array of pointers and contains at most MAXOBJECTS objects.
The constant MAXOBJECTS is set to 100 in the current code, which limits the number of C++ objects created by the Delphi code to 100. Feel free to modify the code if you need to create more objects.

The following code fragment shows the three public functions implemented by the IndexAllocator class. The Allocate function takes a pointer obj, stores it in the cache, and returns its index in the deviceIndex parameter. The result of the function is FALSE if the cache is full and TRUE otherwise. The Release function accepts an index (which was previously returned from Allocate) and marks the cache slot at that index as empty. This function returns FALSE if the index is invalid (does not represent a value returned from Allocate) or if the cache slot for that index is already empty. The last function, Get, also accepts an index and returns the pointer associated with that index. It returns NULL if the index is invalid or if the cache slot for that index is empty:

```cpp
bool Allocate(int& deviceIndex, void* obj)
bool Release(int deviceIndex)
void* Get(int deviceIndex)
```

Let's move now to the functions that are exported from the DLL. The first two, Initialize and Finalize, are used to initialize internal structures (namely GAllocator, of type IndexAllocator) and to clean up before the DLL is unloaded. Instead of looking into them, I'd rather show you the more interesting stuff, namely the functions that deal with CppClass.

The CreateCppClass function creates an instance of CppClass, stores it in the cache, and returns its index. The three important parts of the declaration are extern "C", WINAPI, and #pragma EXPORT:

- extern "C" guarantees that the CreateCppClass name will not be changed when it is stored in the library. The C++ compiler tends to mangle (change) function names to support method overloading (the same thing happens in Delphi), and this declaration prevents that.
- WINAPI changes the calling convention from cdecl, which is standard for C programs, to stdcall, which is commonly used in DLLs. Later, we'll see that we also have to specify the correct calling convention on the Delphi side.
- #pragma EXPORT uses the previously defined EXPORT macro to mark this function as exported.

CreateCppClass returns 0 if the operation was successful and -1 if it failed. The same approach is used in all functions exported from the demo DLL:

```cpp
extern "C" int WINAPI CreateCppClass(int& index)
{
#pragma EXPORT
  CppClass* instance = new CppClass;
  if (!GAllocator->Allocate(index, (void*)instance)) {
    delete instance;
    return -1;
  }
  else
    return 0;
}
```

Similarly, the DestroyCppClass function (not shown here) accepts an index parameter, fetches the object from the cache, and destroys it.

The DLL also exports two functions that allow the DLL user to operate on an object. The first one, CppClass_setValue, accepts an index of the object and a value. It fetches the CppClass instance from the cache (given the index) and calls its setData method, passing it the value:

```cpp
extern "C" int WINAPI CppClass_setValue(int index, int value)
{
#pragma EXPORT
  CppClass* instance = (CppClass*)GAllocator->Get(index);
  if (instance == NULL)
    return -1;
  else {
    instance->setData(value);
    return 0;
  }
}
```

The second function, CppClass_getSquare, also accepts an object index and uses it to access the CppClass object.
After that, it calls the object's getSquare function and stores the result in the output parameter, value:

```cpp
extern "C" int WINAPI CppClass_getSquare(int index, int& value)
{
#pragma EXPORT
  CppClass* instance = (CppClass*)GAllocator->Get(index);
  if (instance == NULL)
    return -1;
  else {
    value = instance->getSquare();
    return 0;
  }
}
```

A proxy DLL that uses a mapping table is a bit complicated and requires some work. We could also approach the problem in a much simpler manner, by treating the address of an object as its external identifier. In other words, the CreateCppClass function would create an object and then return its address as an untyped pointer. CppClass_getSquare, for example, would accept this pointer, cast it to a CppClass instance, and execute an operation on it. An alternative version of these two methods is shown in the following:

```cpp
extern "C" int WINAPI CreateCppClass2(void*& ptr)
{
#pragma EXPORT
  ptr = new CppClass;
  return 0;
}

extern "C" int WINAPI CppClass_getSquare2(void* index, int& value)
{
#pragma EXPORT
  value = ((CppClass*)index)->getSquare();
  return 0;
}
```

This approach is simpler but offers far less security in the form of error checking. The table-based approach can check whether an index represents a valid value, while the latter version cannot know if the pointer parameter is valid or not. If we make a mistake on the Delphi side and pass in an invalid pointer, the code would treat it as an instance of a class, do some operations on it, possibly corrupt some memory, and maybe crash. Finding the source of such errors is very hard. That's why I prefer to write more verbose code that implements some safety checks on the code that returns pointers.

Using a proxy DLL in Delphi

To use any DLL from a Delphi program, we must first import functions from the DLL. There are different ways to do this: we could use static linking, dynamic linking, or static linking with delayed loading. There's plenty of information on the internet about the art of DLL writing in Delphi, so I won't dig into this topic. I'll just stick with the most modern approach: delayed loading.

The code archive for this book includes two demo programs which demonstrate how to use the DllLib1.dll library. The simpler one, CppClassImportDemo, uses the DLL functions directly, while CppClassWrapperDemo wraps them in an easy-to-use class. Both projects use the CppClassImport unit to import the DLL functions into the Delphi program.

The following code fragment shows the interface part of that unit, which tells the Delphi compiler which functions from the DLL should be imported and what parameters they have. As with the C++ part, there are three important parts to each declaration. Firstly, stdcall specifies that the function call should use the stdcall (known in C as WINAPI) calling convention. Secondly, the name after the name specifier should match the exported function name from the C++ source. And thirdly, the delayed keyword specifies that the program should not try to find this function in the DLL when it is started, but only when the code calls the function.
This allows us to check whether the DLL is present at all before we call any of the functions:

```pascal
const
  CPP_CLASS_LIB = 'DllLib1.dll';

function Initialize: integer; stdcall;
  external CPP_CLASS_LIB name 'Initialize' delayed;

function Finalize: integer; stdcall;
  external CPP_CLASS_LIB name 'Finalize' delayed;

function CreateCppClass(var index: integer): integer; stdcall;
  external CPP_CLASS_LIB name 'CreateCppClass' delayed;

function DestroyCppClass(index: integer): integer; stdcall;
  external CPP_CLASS_LIB name 'DestroyCppClass' delayed;

function CppClass_setValue(index: integer; value: integer): integer; stdcall;
  external CPP_CLASS_LIB name 'CppClass_setValue' delayed;

function CppClass_getSquare(index: integer; var value: integer): integer; stdcall;
  external CPP_CLASS_LIB name 'CppClass_getSquare' delayed;
```

The implementation part of this unit (not shown here) shows how to catch errors that occur during delayed loading, that is, when the code that calls any of the imported functions tries to find that function in the DLL.

If you get an External exception C06D007F exception when you try to call a delay-loaded function, you have probably mistyped a name, either in C++ or in Delphi. You can use the tdump utility that comes with Delphi to check which names are exported from the DLL. The syntax is tdump -d <dll_name.dll>. If the code crashes when you call a DLL function, check whether both sides correctly define the calling convention. Also check if all the parameters have correct types on both sides and if the var parameters are marked as such on both sides.

To use the DLL, the code in the CppClassMain unit first calls the exported Initialize function from the form's OnCreate handler to initialize the DLL. The cleanup function, Finalize, is called from the OnDestroy handler to clean up the DLL. All parts of the code check whether the DLL functions return the OK status (value 0):

```pascal
procedure TfrmCppClassDemo.FormCreate(Sender: TObject);
begin
  if Initialize <> 0 then
    ListBox1.Items.Add('Initialize failed')
end;

procedure TfrmCppClassDemo.FormDestroy(Sender: TObject);
begin
  if Finalize <> 0 then
    ListBox1.Items.Add('Finalize failed');
end;
```

When you click on the Use import library button, the following code executes. It uses the DLL to create a CppClass object by calling the CreateCppClass function. This function puts an integer value into the idxClass variable. This value is used as an identifier that identifies a CppClass object when calling other functions. The code then calls CppClass_setValue to set the internal field of the CppClass object and CppClass_getSquare to call the getSquare method and return the calculated value. At the end, DestroyCppClass destroys the CppClass object:

```pascal
procedure TfrmCppClassDemo.btnImportLibClick(Sender: TObject);
var
  idxClass: Integer;
  value: Integer;
begin
  if CreateCppClass(idxClass) <> 0 then
    ListBox1.Items.Add('CreateCppClass failed')
  else if CppClass_setValue(idxClass, SpinEdit1.Value) <> 0 then
    ListBox1.Items.Add('CppClass_setValue failed')
  else if CppClass_getSquare(idxClass, value) <> 0 then
    ListBox1.Items.Add('CppClass_getSquare failed')
  else begin
    ListBox1.Items.Add(Format('square(%d) = %d', [SpinEdit1.Value, value]));
    if DestroyCppClass(idxClass) <> 0 then
      ListBox1.Items.Add('DestroyCppClass failed')
  end;
end;
```

This approach is relatively simple but long-winded and error-prone. A better way is to write a wrapper Delphi class that implements the same public interface as the corresponding C++ class.
The second demo, CppClassWrapperDemo, contains a unit CppClassWrapper which does just that. This unit implements a TCppClass class, which maps to its C++ counterpart. It only has one internal field, which stores the index of the C++ object as returned from the CreateCppClass function:

```pascal
type
  TCppClass = class
  strict private
    FIndex: integer;
  public
    class procedure InitializeWrapper;
    class procedure FinalizeWrapper;
    constructor Create;
    destructor Destroy; override;
    procedure SetValue(value: integer);
    function GetSquare: integer;
  end;
```

I won't show all of the functions here, as they are all equally simple. One, or maybe two, will suffice. The constructor just calls the CreateCppClass function, checks the result, and stores the resulting index in the internal field:

```pascal
constructor TCppClass.Create;
begin
  inherited Create;
  if CreateCppClass(FIndex) <> 0 then
    raise Exception.Create('CreateCppClass failed');
end;
```

Similarly, GetSquare just forwards its job to the CppClass_getSquare function:

```pascal
function TCppClass.GetSquare: integer;
begin
  if CppClass_getSquare(FIndex, Result) <> 0 then
    raise Exception.Create('CppClass_getSquare failed');
end;
```

When we have this wrapper, the code in the main unit becomes very simple, and very Delphi-like. Once the initialization in the OnCreate event handler is done, we can just create an instance of TCppClass and work with it:

```pascal
procedure TfrmCppClassDemo.FormCreate(Sender: TObject);
begin
  TCppClass.InitializeWrapper;
end;

procedure TfrmCppClassDemo.FormDestroy(Sender: TObject);
begin
  TCppClass.FinalizeWrapper;
end;

procedure TfrmCppClassDemo.btnWrapClick(Sender: TObject);
var
  cpp: TCppClass;
begin
  cpp := TCppClass.Create;
  try
    cpp.SetValue(SpinEdit1.Value);
    ListBox1.Items.Add(Format('square(%d) = %d',
      [SpinEdit1.Value, cpp.GetSquare]));
  finally
    FreeAndNil(cpp);
  end;
end;
```

To summarize, we learned how a C++ library can be used from Delphi for high-performance computing. If you found this post useful, do check out the book Delphi High Performance to learn more about the intricacies of high-performance programming with Delphi.

- Exploring the Usages of Delphi
- Delphi: memory management techniques for parallel programming
- Delphi Cookbook
The Design Patterns Out There and Setting Up Your Environment

Packt
14 Jan 2016
27 min read
In this article by Ivan Nikolov, author of the book Scala Design Patterns, we look at what design patterns are and why we need them. In the world of computer programming, there are multiple ways to create a solution that does something. However, one might wonder whether there is a correct way of achieving a specific task. In software development there are usually multiple ways to achieve a task, and certain factors guide the programmer to the right solution; depending on them, people tend to get the expected result. These factors could define many things: the actual language being used, the algorithm, the type of executable produced, the output format, and the code structure. The language is already chosen for us: Scala. There are, however, a number of ways to use Scala, and we will be focusing on them: the design patterns.

In this article, we will explain what design patterns are and why they exist. We will go through the different types of design patterns that are out there. This article aims to provide useful examples to aid you in the learning process, and being able to run them easily is key. Hence, some points on how to set up a development environment properly will be given here. The top-level topics we will go through are as follows:

- What is a design pattern and why do they exist?
- The main types of design patterns and their features
- Choosing the right design pattern
- Setting up a development environment in real life

The last point doesn't have much to do with design patterns. However, it is always a good idea to build projects properly, as this makes it much easier to work on them in future.

Design patterns

Before delving into the Scala design patterns, we have to explain what they actually are, why they exist, and why it is worth being familiar with them.

Software is a broad subject, and there are innumerable examples of things people can do with it. At first glance, most of these things are completely different: games, websites, mobile phone applications, and specialized systems for different industries. There are, however, many similarities in how software is built. Many times, people have to deal with similar issues no matter the type of software they create. For example, computer games as well as websites might need to access a database. And over time, through experience, developers learn how structuring their code differs for the various tasks that they perform.

A formal definition for design patterns will help you understand where we are actually trying to get by using good practices in building software.

The formal definition for design patterns

A design pattern is a reusable solution to a recurring problem in a software design. It is not a finished piece of code, but a template which helps solve the particular problem or family of problems.

Design patterns are best practices at which the software community has arrived over a period of time. They are supposed to help write efficient, readable, testable, and easily extendable code. In some cases, they can be a result of a programming language not being expressive enough to elegantly achieve a goal. This means that more feature-rich languages might not even need a design pattern while others still do. Scala is one of those rich languages, and in some cases it makes a design pattern obsolete or simpler to implement. The presence or absence of certain functionality within a programming language also makes it possible to implement additional design patterns that other languages cannot.
The opposite is also true: a language might not be able to implement things that others can.

Scala and design patterns

Scala is a hybrid language which combines features from object-oriented and functional languages. This not only keeps some of the well-known object-oriented design patterns relevant, but also provides various other ways of exploiting its features to write code which is clean, efficient, testable, and extendable all at the same time. The hybrid nature of the language also makes some of the traditional object-oriented design patterns obsolete, or achievable using other, cleaner techniques.

The need for design patterns and their benefits

Everybody needs design patterns and should look into some before writing code. As we mentioned earlier, they help with writing efficient, readable, extendable, and testable code. All these features are really important to companies in the industry. Even though in some cases it is preferable to quickly write a prototype and get it out, usually a piece of software is supposed to evolve. You may have had the experience of extending badly written code; it is a challenging task that takes a really long time, and sometimes it feels that rewriting would be easier. Moreover, badly structured code makes introducing bugs into the system much more likely.

Code readability is also something that should be appreciated. Of course, one could use a design pattern and still have code that is hard to read, but generally design patterns help. Big systems are usually worked on by many people, and everyone should be able to understand what exactly is going on. Also, people who join a team are able to integrate much more easily and quickly if they work on a well-written piece of software.

Testability is something that prevents developers from introducing bugs when writing or extending code. In some cases, code could be written so badly that it is not even testable. Design patterns are supposed to eliminate these problems as well.

While efficiency is often connected to algorithms, design patterns can also affect it. A simple example is an object which takes a long time to instantiate, and whose instances are used in many places in an application, but which could be made a singleton instead.

Design pattern categories

The fact that software development is an extremely broad topic leads to a number of things that can be done with programming. Everything is different, and this leads to various requirements for the qualities of programs. All these facts have caused many different design patterns to be invented. This is further contributed to by the existence of various programming languages with different features and levels of expressiveness.

This article focuses on design patterns from the point of view of Scala. As we already mentioned, Scala is a hybrid language. This leads us to a few famous design patterns that are no longer needed; one example is the null object design pattern, which can simply be replaced by Scala's Option. Other design patterns become possible using different approaches: the decorator design pattern can be implemented using stackable traits. Finally, some new design patterns become available which apply specifically to the Scala programming language: the cake design pattern, pimp my library, and so on. We will focus on all of these and make it clear where the richness of Scala helps us to make our code even cleaner and simpler.
Even though there are many different design patterns, they can all be grouped into a few main groups:

- Creational
- Structural
- Behavioral
- Functional
- Scala-specific design patterns

Some of the design patterns that are specific to Scala can be assigned to the previous groups, either as additions or as replacements of the already existing ones. They are typical of Scala and take advantage of some advanced language features, or simply of features not available in other languages.

The first three groups contain the famous Gang of Four design patterns. Every design pattern article covers them, and so will we. The rest, even if they can be assigned to one of the first three groups, are specific to Scala and functional programming languages. In the next few subsections, we will explain the main characteristics of these groups and briefly present the actual design patterns that fall under them.

Creational design patterns

The creational design patterns deal with object creation mechanisms. Their purpose is to create objects in a way that is suitable to the current situation; without them, object creation could lead to unnecessary complexity and the need for extra knowledge. The main ideas behind the creational design patterns are as follows:

- Encapsulating knowledge about the concrete classes
- Hiding details about the actual creation and how objects are combined

We will be focusing on the following creational design patterns in this article:

- The abstract factory pattern
- The factory method pattern
- The lazy initialization pattern
- The singleton pattern
- The object pool pattern
- The builder pattern
- The prototype pattern

The following few sections give a brief definition of these patterns.

The abstract factory design pattern

This is used to encapsulate a group of individual factories that have a common theme. When used, the developer creates a specific implementation of the abstract factory and uses its methods in the same way as in the factory design pattern to create objects. It can be thought of as another layer of abstraction that helps instantiate classes.

The factory method design pattern

This design pattern deals with the creation of objects without explicitly specifying the actual class that the instance will have; this could be something that is decided at runtime based on many factors. Some of these factors can include operating systems, different data types, or input parameters. It gives developers the peace of mind of just calling a method rather than invoking a concrete constructor.

The lazy initialization design pattern

This pattern is an approach to delay the creation of an object or the evaluation of a value until the first time it is needed. It is much more simplified in Scala than in an object-oriented language such as Java.

The singleton design pattern

This design pattern restricts the creation of a specific class to just one object. If more than one class in the application tries to use such an instance, then this same instance is returned for everyone. This is another design pattern that can be achieved easily with basic Scala features, as the sketch below shows.
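As a minimal sketch of both of these patterns (the Registry name and the printed message are purely illustrative), an object declaration gives us a thread-safe singleton, and lazy val defers evaluation until first access:

```scala
// `object` gives us a singleton: there is exactly one Registry instance,
// created in a thread-safe way on first use.
object Registry {
  // `lazy val` gives us lazy initialization: the block runs only when
  // `connection` is first accessed, and the result is then cached.
  lazy val connection: String = {
    println("Opening an expensive connection...")
    "connection-handle"
  }
}

object LazySingletonDemo extends App {
  Registry.connection // prints the message and initializes the value
  Registry.connection // returns the cached value, prints nothing
}
```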
The object pool design pattern

This pattern uses a pool of objects that are already instantiated and ready for use. Whenever someone requires an object from the pool, it is returned, and after the user is finished with it, it is put back into the pool manually or automatically. A common use for pools is database connections, which are generally expensive to create; hence, they are created once and then served to the application on request.

The builder design pattern

The builder design pattern is extremely useful for objects with many possible constructor parameters, which would otherwise require developers to create many overrides for the different scenarios in which an object could be created. This is different from the factory design pattern, which aims to enable polymorphism. Many of the modern libraries today employ this design pattern. As we will see later, Scala can achieve this pattern really easily.

The prototype design pattern

This design pattern allows object creation using a clone() method on an already created instance. It can be used in cases where a specific resource is expensive to create or when the abstract factory pattern is not desired.

Structural design patterns

Structural design patterns exist in order to help establish the relationships between different entities in order to form larger structures. They define how each component should be structured so that it has very flexible interconnecting modules that can work together in a larger system. The main features of structural design patterns include the following:

- The use of composition to combine the implementations of multiple objects
- Helping to build a large system made of various components by maintaining a high level of flexibility

In this article, we will focus on the following structural design patterns:

- Adapter
- Decorator
- Bridge
- Composite
- Facade
- Flyweight
- Proxy

The next subsections shed some light on what these patterns are about.

The adapter design pattern

The adapter design pattern allows the interface of an existing class to be used from another interface. Imagine that there is a client who expects your class to expose a doWork() method. You might have the implementation ready in another class, but the method is called differently and is incompatible. It might require extra parameters too. This could also be a library that the developer doesn't have access to for modifications. This is where the adapter can help by wrapping the functionality and exposing the required methods. The adapter is useful for integrating existing components. In Scala, the adapter design pattern can be easily achieved using implicit classes.

The decorator design pattern

Decorators are a flexible alternative to subclassing. They allow developers to extend the functionality of an object without affecting other instances of the same class. This is achieved by wrapping an object of the extended class into one that extends the same class and overrides the methods whose functionality is supposed to be changed. Decorators in Scala can be built much more easily using another design pattern called stackable traits.

The bridge design pattern

The purpose of the bridge design pattern is to decouple an abstraction from its implementation so that the two can vary independently. It is useful when a class and its functionality vary a lot. The bridge is reminiscent of the adapter pattern, but the difference is that the adapter pattern is used when something already exists and cannot be changed, while the bridge design pattern is used when things are being built. It helps us to avoid ending up with multiple concrete classes that would be exposed to the client. You will get a clearer understanding when we delve deeper into the topic, but for now, let's imagine that we want to have a FileReader class that supports multiple different platforms. The bridge will help us end up with a FileReader which uses a different implementation depending on the platform.
In Scala, we can use self-types in order to implement a bridge design pattern.

The composite design pattern

The composite is a partitioning design pattern that represents a group of objects to be treated as a single object. It allows developers to treat individual objects and compositions uniformly and to build complex hierarchies without complicating the source code. An example of a composite is a tree structure, where a node can contain other nodes, and so on.

The facade design pattern

The purpose of the facade design pattern is to hide the complexity of a system and its implementation details by providing the client with a simpler interface to use. This also helps to make the code more readable and to reduce the dependencies of the outside code. It works as a wrapper around the system that is being simplified, and of course it can be used in conjunction with some of the other design patterns we mentioned previously.

The flyweight design pattern

The flyweight design pattern provides an object that is used to minimize memory usage by sharing it throughout the application. This object should contain as much data as possible. A common example is a word processor, where each character's graphical representation is shared with the other identical characters, and only the position of each character is stored locally.

The proxy design pattern

The proxy design pattern allows developers to provide an interface to other objects by wrapping them. A proxy can also provide additional functionality, for example security or thread safety. Proxies can be used together with the flyweight pattern, where the references to shared objects are wrapped inside proxy objects.

Behavioral design patterns

Behavioral design patterns increase communication flexibility between objects based on the specific ways they interact with each other. Where creational patterns mostly describe a moment in time during creation and structural patterns describe a more or less static structure, behavioral patterns describe a process or flow. They simplify this flow and make it more understandable. The main features of behavioral design patterns are as follows:

- What is being described is a process or flow
- The flows are simplified and made understandable
- They accomplish tasks that would be difficult or impossible to achieve with objects alone

In this article, we will focus our attention on the following behavioral design patterns:

- Value object
- Null object
- Strategy
- Command
- Chain of responsibility
- Interpreter
- Iterator
- Mediator
- Memento
- Observer
- State
- Template method
- Visitor

The following subsections give brief definitions of these behavioral design patterns.

The value object design pattern

Value objects are immutable, and their equality is based not on their identity but on their fields being equal. They can be used as data transfer objects, and they can represent dates, colors, money amounts, numbers, and so on. Their immutability makes them really useful in multithreaded programming. The Scala programming language promotes immutability, and value objects occur there naturally.

The null object design pattern

Null objects represent the absence of a value and define a neutral behavior. This approach removes the need to check for null references and makes the code much more concise. Scala adds the concept of optional values, which can replace this pattern completely.
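As a quick sketch of that replacement (findUser is a made-up function for illustration), Option makes the absence of a value an ordinary value with neutral behavior:

```scala
object NullObjectDemo extends App {
  // A hypothetical lookup: Some(name) when found, None when absent.
  def findUser(id: Int): Option[String] =
    if (id == 1) Some("Ivan") else None

  // map does nothing on None, and getOrElse supplies the neutral default,
  // which is exactly the behavior a hand-written null object would encode.
  val greeting = findUser(42).map(name => s"Hello, $name").getOrElse("Hello, guest")
  println(greeting) // prints: Hello, guest
}
```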
The strategy design pattern

The strategy design pattern allows algorithms to be selected at runtime. It defines a family of interchangeable, encapsulated algorithms and exposes a common interface to the client. Which algorithm is chosen could depend on various factors determined while the application runs. In Scala, we can simply pass a function as a parameter to a method, and depending on the function, a different action will be performed, as the sketch below shows.
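Here is a minimal sketch of that idea (the pricing functions are invented for illustration): the strategy is just a function value chosen by the caller:

```scala
object StrategyDemo extends App {
  // The "strategy" is simply a Double => Double function; callers select
  // the algorithm at runtime by passing a different function value.
  def totalPrice(prices: List[Double], discount: Double => Double): Double =
    prices.map(discount).sum

  val noDiscount: Double => Double = identity
  val tenPercentOff: Double => Double = price => price * 0.9

  println(totalPrice(List(10.0, 20.0), noDiscount))    // prints: 30.0
  println(totalPrice(List(10.0, 20.0), tenPercentOff)) // prints: 27.0
}
```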
The command design pattern

This design pattern represents an object that is used to store information about an action that needs to be triggered at a later time. The information includes the following:

- The method name
- The owner of the method
- Parameter values

The client then decides which commands are to be executed, and when, by the invoker. This design pattern can easily be implemented in Scala using the by-name parameters feature of the language.

The chain of responsibility design pattern

The chain of responsibility is a design pattern where the sender of a request is decoupled from its receiver. This makes it possible for multiple objects to handle the request and keeps the logic nicely separated. The receivers form a chain along which they pass the request: if possible, they process it, and if not, they pass it to the next receiver. There are variations in which a handler might dispatch the request to multiple other handlers at the same time. This is somewhat reminiscent of function composition, which in Scala can be achieved using the stackable traits design pattern.

The interpreter design pattern

The interpreter design pattern is based on the possibility of characterizing a well-known domain with a language with strict grammar. It defines classes for each grammar rule in order to interpret sentences in the given language. These classes are likely to represent hierarchies, as grammar is usually hierarchical as well. Interpreters can be used in different parsers, for example for SQL or other languages.

The iterator design pattern

The iterator design pattern is when an iterator is used to traverse a container and access its elements. It helps to decouple containers from the algorithms performed on them. What an iterator should provide is sequential access to the elements of an aggregate object without exposing the internal representation of the iterated collection.

The mediator design pattern

This pattern encapsulates the communication between different classes in an application. Instead of interacting directly with each other, objects communicate through the mediator, which reduces the dependencies between them, lowers the coupling, and makes the overall application easier to read and maintain.

The memento design pattern

This pattern provides the ability to roll back an object to its previous state. It is implemented with three objects: the originator, the caretaker, and the memento. The originator is the object with the internal state; the caretaker modifies the originator; and the memento is an object that contains the state that the originator returns. The originator knows how to handle a memento in order to restore its previous state.

The observer design pattern

This pattern allows the creation of publish/subscribe systems. There is a special object called the subject that automatically notifies all the observers when there are any changes in its state. This design pattern is popular in various GUI toolkits and generally wherever event handling is needed. It is also related to reactive programming, which is enabled by libraries such as Akka.

The state design pattern

This pattern is similar to the strategy design pattern, and it uses a state object to encapsulate different behavior for the same object. It improves the code's readability and maintainability by avoiding the use of large conditional statements.

The template method design pattern

This pattern defines the skeleton of an algorithm in a method and then passes some of the actual steps to subclasses. It allows developers to alter some of the steps of an algorithm without having to modify its structure. An example could be a method in an abstract class that calls other abstract methods which will be defined in the children.

The visitor design pattern

The visitor design pattern represents an operation to be performed on the elements of an object structure. It allows developers to define a new operation without changing the original classes. Compared to the pure object-oriented way of implementing it, Scala can minimize the verbosity of this pattern by passing functions to methods.

Functional design patterns

We will be looking at all of the preceding design patterns from the point of view of Scala. This means that they will look different than in other languages, but they still weren't designed specifically for functional programming. Functional programming is much more expressive than object-oriented programming, and it has its own design patterns that help make the life of a programmer easier. We will focus on:

- Monoids
- Monads
- Functors

After we've looked at some Scala functional programming concepts and been through these, we will mention some interesting design patterns from the Scala world. A brief explanation of the listed patterns follows in the next few subsections.

Monoids

Monoid is a concept that comes from mathematics. For now, it is enough to remember that a monoid is an algebraic structure with a single associative binary operation and an identity element. Here are the properties you should remember:

- The associative binary operation: (a+b)+c = a+(b+c)
- The identity element i: a+i = i+a = a

What is important about monoids is that they give us the possibility to work with many different types of values in a common way. They allow us to convert pairwise operations into operations that work on sequences; the associativity gives us the possibility of parallelization, and the identity element tells us what to do with empty lists. Monoids are great for easily describing and implementing aggregations, as the sketch at the end of this section illustrates.

Monads

In functional programming, monads are structures that represent computations as sequences of steps. Monads are useful for building pipelines, cleanly adding operations with side effects to a language where everything is immutable, and implementing compositions. This definition might sound vague and unclear, but explaining monads in a few sentences is hard to achieve. We will try to show why monads are useful and what they can help with, as long as developers understand them well.

Functors

Functors come from category theory, and as with monads, it takes time to explain them properly. For now, you can remember that functors are things that allow us to lift a function of the type A => B to a function of the type F[A] => F[B].
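Coming back to monoids, here is a minimal sketch of the idea with no external libraries (this Monoid trait is hand-rolled for illustration; libraries such as Cats provide a production-ready version):

```scala
// An associative combine operation plus an identity element.
trait Monoid[A] {
  def empty: A
  def combine(x: A, y: A): A
}

object MonoidDemo extends App {
  val intAddition: Monoid[Int] = new Monoid[Int] {
    val empty: Int = 0
    def combine(x: Int, y: Int): Int = x + y
  }

  // The identity element tells us what an empty list aggregates to, and
  // associativity means the fold could be split up and run in parallel.
  def aggregate[A](xs: List[A], m: Monoid[A]): A =
    xs.foldLeft(m.empty)(m.combine)

  println(aggregate(List(1, 2, 3), intAddition))   // prints: 6
  println(aggregate(List.empty[Int], intAddition)) // prints: 0
}
```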
However, they are specific to Scala and exploit some of the language's features, so we've decided to place them in their own group. We will focus our attention on the following ones:
The lens design pattern
The cake design pattern
Pimp my library
Stackable traits
The type class design pattern
Lazy evaluation
Partial functions
Implicit injection
Duck typing
Memoization
The next subsections will give you some brief information about these patterns.

The lens design pattern
The Scala programming language promotes immutability. Having objects immutable makes it harder to make mistakes. However, sometimes mutability is required, and the lens design pattern helps us achieve this nicely.

The cake design pattern
The cake design pattern is the Scala way to implement dependency injection. Dependency injection is used quite a lot in real-life applications, and there are numerous libraries that help developers achieve it. Scala has a way of doing this using language features, and this is what the cake design pattern is all about.

Pimp my library
Many times, engineers need to work with libraries that are made to be as generic as possible. Sometimes, though, we need to do something more specific to our use case. The pimp my library design pattern provides a way to write extension methods for libraries that we cannot modify. We can use it for our own libraries as well. This design pattern also helps to achieve better code readability.

Stackable traits
Stackable traits are the Scala way to implement the decorator design pattern. They can also be used to compose functions, and they are based on a few advanced Scala features.

The type class design pattern
This pattern allows us to write generic code by defining a behavior that must be supported by all members of a specific type class. For example, all numbers must support the addition and subtraction operations.

Lazy evaluation
Many times, engineers have to deal with operations that are slow and/or expensive. Sometimes, the result of these operations might not even be needed. Lazy evaluation is a technique that postpones the execution of an operation until it is actually needed. It can be used for application optimization.

Partial functions
Mathematics and functional programming are really close together. As a consequence, there are functions that are only defined for a subset of all the possible input values they can receive. A popular example is the square root function, which only works for non-negative numbers. In Scala, such functions can be used to efficiently perform multiple operations at the same time or to compose functions.

Implicit injection
Implicit injection is based on the implicits functionality of the Scala programming language. It automatically injects objects whenever they are needed, as long as they exist in a specific scope. It can be used for many things, including dependency injection.

Duck typing
This is a feature that is available in Scala and is similar to what some dynamic languages provide. It allows developers to write code that requires its callers to have certain methods (but not to implement an interface). When someone uses a method with a duck type, the validity of the parameters is actually checked at compile time.

Memoization
This design pattern helps with optimization by remembering function results, based on the inputs.
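As a rough, hedged sketch of what a memoizing wrapper could look like in Scala (the object name and the simple mutable-map cache are assumptions for illustration, not code from the book):

import scala.collection.mutable

object Memoize {
  // Wraps a single-argument function and caches its results keyed by the input.
  def memoize[A, B](f: A => B): A => B = {
    val cache = mutable.Map.empty[A, B]
    (a: A) => cache.getOrElseUpdate(a, f(a))
  }
}

object MemoizeExample extends App {
  val slowSquare: Int => Int = { n =>
    Thread.sleep(500) // stands in for an expensive computation
    n * n
  }
  val fastSquare = Memoize.memoize(slowSquare)

  println(fastSquare(4)) // computed and cached
  println(fastSquare(4)) // served from the cache
}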
The key requirement is that the function is stable and returns the same result when the same parameters are passed; then we can remember its results and simply return them for every consecutive identical call.

How to choose a design pattern
As we have already seen, there is a huge number of design patterns. In many cases, they are suitable to be used in combination as well. Unfortunately, there is no definite answer to how to choose the approach for designing our code. There are many factors that could affect the final decision, and you should ask yourselves the following questions:
Is this piece of code going to be fairly static or will it change in the future?
Do we have to dynamically decide which algorithms to use?
Is our code going to be used by others?
Do we have an agreed interface?
What libraries are we planning to use, if any?
Are there any special performance requirements or limitations?
This is by no means an exhaustive list of questions, and there are many more factors that could dictate our decision on how we build our systems. It is, however, really important to have a clear specification, and if something seems to be missing, it should always be checked first. These questions should help you ask the right things and make the right decision before going on and writing code.

Setting up the development environment
This article aims to give real code examples for you to run and experiment with. This means that it is important to be able to easily run any of the examples we have provided here and not to fight with the code. We will do our best to have the code tested and properly packaged, but you should also make sure that you have everything needed for the examples.

Installing Scala
Of course, you will need the Scala programming language. It evolves quickly, and the newest version can be found at http://www.scala-lang.org/download/. There are a few tips about how to install the language on your operating system at http://www.scala-lang.org/download/install.html.

Tips about installing Scala
You can always download multiple versions of Scala and experiment with them. I use Linux and my tips will be applicable to Mac OS users, too. Windows users can also do a similar setup. Here are the steps:
Install Scala under /opt/scala-{version}/.
Then, create a symlink using the following command: sudo ln -s /opt/scala-{version} scala-current.
Finally, add the path to the Scala bin folder to your .bashrc (or equivalent) file using the following lines: export SCALA_HOME=/opt/scala-current and export PATH=$PATH:$SCALA_HOME/bin.
This allows us to quickly change versions of Scala by just redefining the symlink.
Another way to experiment with any Scala version is to install SBT (you can find more information on this). Then, simply run sbt in your console, type ++ 2.11.7 (or any version you want), and then issue the console command. Now you can test Scala features easily.
Using SBT or Maven or any other build tool will automatically download Scala for you. If you don't need to experiment with the console, you can skip the preceding steps.
Using the preceding tips, we can use the Scala interpreter by just typing scala in the terminal, or follow the sbt installation process and experiment with different language features in the REPL.

Scala IDEs
There are multiple IDEs out there that support development in Scala. There is absolutely no preference about which one to use to work with the code.
Some of the most popular ones are as follows:
IntelliJ
Eclipse
NetBeans
They contain plugins to work with Scala, and downloading and using them should be straightforward.

Summary
By now, we have a fair idea of what a design pattern means and how it can affect the way we write our code. We've gone through the most famous design patterns out there, and we have outlined the main differences between them. We saw that, in many cases, we can use Scala's features to make a pattern obsolete, simpler, or different to implement compared to the classical case of pure object-oriented languages. Knowing what to look for when picking a design pattern is important, and you should already know what specific details to watch out for and how important specifications are.

Resources for Article:
Further resources on this subject:
Content-based recommendation [article]
Getting Started with Apache Spark DataFrames [article]
RESTServices with Finagle and Finch [article]
JavaScript Execution with Selenium

Packt
04 Sep 2015
23 min read
In this article, by Mark Collin, the author of the book, Mastering Selenium WebDriver, we will look at how we can directly execute JavaScript snippets in Selenium. We will explore the sort of things that you can do and how they can help you work around some of the limitations that you will come across while writing your scripts. We will also have a look at some examples of things that you should avoid doing. (For more resources related to this topic, see here.) Introducing the JavaScript executor Selenium has a mature API that caters to the majority of automation tasks that you may want to throw at it. That being said, you will occasionally come across problems that the API doesn't really seem to support. This was very much on the development team's mind when Selenium was written. So, they provided a way for you to easily inject and execute arbitrary blocks of JavaScript. Let's have a look at a basic example of using a JavaScript executor in Selenium: JavascriptExecutor js = (JavascriptExecutor) driver; js.executeScript("console.log('I logged something to the Javascript console');"); Note that the first thing we do is cast a WebDriver object into a JavascriptExecutor object. The JavascriptExecutor interface is implemented through the RemoteWebDriver class. So, it's not a part of the core set of API functions. Since we normally pass around a WebDriver object, the executeScript functions will not be available unless we perform this cast. If you are directly using an instance of RemoteWebDriver or something that extends it (most driver implementations now do this), you will have direct access to the .executeScript() function. Here's an example: FirefoxDriver driver = new FirefoxDriver(new FirefoxProfile()); driver.executeScript("console.log('I logged something to the Javascript console');"); The second line (in both the preceding examples) is just telling Selenium to execute an arbitrary piece of JavaScript. In this case, we are just going to print something to the JavaScript console in the browser. We can also get the .executeScript() function to return things to us. For example, if we tweak the script of JavaScript in the first example, we can get Selenium to tell us whether it managed to write to the JavaScript console or not, as follows: JavascriptExecutor js = (JavascriptExecutor) driver; Object response = js.executeScript("return console.log('I logged something to the Javascript console');"); In the preceding example, we will get a result of true coming back from the JavaScript executor. Why does our JavaScript start with return? Well, the JavaScript executed by Selenium is executed as a body of an anonymous function. This means that if we did not add a return statement to the start of our JavaScript snippet, we would actually be running this JavaScript function using Selenium: var anonymous = function () { console.log('I logged something to the Javascript console'); }; This function does log to the console, but it does not return anything. So, we can't access the result of the JavaScript snippet. If we prefix it with a return, it will execute this anonymous function: var anonymous = function () { return console.log('I logged something to the Javascript console'); }; This does return something for us to work with. In this case, it will be the result of our attempt to write some text to the console. If we succeeded in writing some text to the console, we will get back a true value. If we failed, we will get back a false value. 
Note that in our example, we saved the response as an object, not a string or a Boolean. This is because the JavaScript executor can return lots of different types of objects. What we get as a response can be one of the following:
If the result is null or there is no return value, a null will be returned
If the result is an HTML element, a WebElement will be returned
If the result is a decimal, a double will be returned
If the result is a nondecimal number, a long will be returned
If the result is a Boolean, a Boolean will be returned
If the result is an array, a List object will be returned, with each object that it contains following these same rules (nested lists are supported)
For all other cases, a string will be returned
It is an impressive list, and it makes you realize just how powerful this method is. There is more as well. You can also pass arguments into the .executeScript() function. The arguments that you pass in can be any one of the following:
Number
Boolean
String
WebElement
List
They are then put into a magic variable called arguments, which can be accessed by the JavaScript. Let's extend our example a little bit to pass in some arguments, as follows:

String animal = "Lion";
int seen = 5;
JavascriptExecutor js = (JavascriptExecutor) driver;
js.executeScript("console.log('I have seen a ' + arguments[0] + ' ' + arguments[1] + ' times(s)');", animal, seen);

This time, you will see that we managed to print the following text into the console:

I have seen a Lion 5 times(s)

As you can see, there is a huge amount of flexibility with the JavaScript executor. You can write some complex bits of JavaScript code and pass in lots of different types of arguments from your Java code. Think of all the things that you could do!

Let's not get carried away
We now know the basics of how one can execute JavaScript snippets in Selenium. This is where some people can start to get a bit carried away. If you go through the Selenium users mailing list, you will see many instances of people asking why they can't click on an element. Most of the time, this is due to the element that they are trying to interact with not being visible, which is blocking a click action. The real solution to this problem is to perform an action (the same one that they would perform if they were manually using the website) to make the element visible so that they can interact with it. However, there is a shortcut offered by many, which is a very bad practice: you can use a JavaScript executor to trigger a click event on this element. Doing this will probably make your test pass. So why is it a bad solution? The Selenium development team has spent quite a lot of time writing code that works out whether a user can interact with an element. It's pretty reliable. So, if Selenium says that you cannot currently interact with an element, it's highly unlikely that it's wrong. When figuring out whether you can interact with an element, lots of things are taken into account, including the z-index of the element. For example, you may have a transparent element that is covering the element that you want to click on, blocking the click action so that you can't reach it. Visually, it will be visible to you, but Selenium will correctly see it as not visible. If you now invoke a JavaScript executor to trigger a click event on this element, your test will pass, but users will not be able to interact with it when they try to manually use your website.
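For illustration only, the kind of shortcut being warned about here usually looks something like the following sketch (the element ID is made up, and this is the pattern to avoid rather than a recommendation):

// Anti-pattern sketch: forcing a click with JavaScript bypasses Selenium's
// visibility and interactability checks, so the test can pass while real users are blocked.
WebElement possiblyBlockedElement = driver.findElement(By.id("submit"));
JavascriptExecutor js = (JavascriptExecutor) driver;
js.executeScript("arguments[0].click();", possiblyBlockedElement);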
However, what if Selenium got it wrong and I can interact with the element that I want to click manually? Well, that's great, but there are two things that you need to think about. First of all, does it work in all browsers? If Selenium thinks that it is something that you cannot interact with, it's probably for a good reason. Is the markup, or the CSS, overly complicated? Can it be simplified? Secondly, if you invoke a JavaScript executor, you will never know whether the element that you want to interact with really does get blocked at some point in the future. Your test may as well keep passing when your application is broken. Tests that can't fail when something goes wrong are worse than no test at all! If you think of Selenium as a toolbox, a JavaScript executor is a very powerful tool that is present in it. However, it really should be seen as a last resort when all other avenues have failed you. Too many people use it as a solution to any slightly sticky problem that they come across. If you are writing JavaScript code that attempts to mirror existing Selenium functions but are removing the restrictions, you are probably doing it wrong! Your code is unlikely to be better. The Selenium development team have been doing this for a long time with a lot of input from a lot of people, many of them being experts in their field. If you are thinking of writing methods to find elements on a page, don't! Use the .findElement() method provided by Selenium. Occasionally, you may find a bug in Selenium that prevents you from interacting with an element in the way you would expect to. Many people first respond by reaching for the JavascriptExecutor to code around the problem in Selenium. Hang on for just one moment though. Have you upgraded to the latest version of Selenium to see if that fixes your problem? Alternatively, did you just upgrade to the latest version of Selenium when you didn't need to? Using a slightly older version of Selenium that works correctly is perfectly acceptable. Don't feel forced to upgrade for no reason, especially if it means that you have to write your own hacks around problems that didn't exist before. The correct thing to do is to use a stable version of Selenium that works for you. You can always raise bugs for functionality that doesn't work, or even code a fix and submit a pull request. Don't give yourself the additional work of writing a workaround that's probably not the ideal solution, unless you need to. So what should we do with it? Let's have a look at some examples of the things that we can do with the JavaScript executor that aren't really possible using the base Selenium API. First of all, we will start off by getting the element text. Wait a minute, element text? But, that’s easy! You can use the existing Selenium API with the following code: WebElement myElement = driver.findElement(By.id("foo")); String elementText = myElement.getText(); So why would we want to use a JavaScript executor to find the text of an element? Getting text is easy using the Selenium API, but only under certain conditions. The element that you are collecting the text from needs to be displayed. If Selenium thinks that the element from which you are collecting the text is not displayed, it will return an empty string. If you want to collect some text from a hidden element, you are out of luck. You will need to implement a way to do it with a JavaScript executor. Why would you want to do this? 
Well, maybe you have a responsive website that shows different elements based on different resolutions. You may want to check whether these two different elements are displaying the same text to the user. To do this, you will need to get the text of the visible and invisible elements so that you can compare them. Let's create a method to collect some hidden text for us: private String getHiddenText(WebElement element) { JavascriptExecutor js = (JavascriptExecutor) ((RemoteWebElement) element).getWrappedDriver(); return (String) js.executeScript("return arguments[0].text", element); } There is some cleverness in this method. First of all, we took the element that we wanted to interact with and then extracted the driver object associated with it. We did this by casting the WebElement into a RemoteWebElement, which allowed us to use the getWrappedDriver() method. This removes the need to pass a driver object around the place all the time (this is something that happens a lot in some code bases). We then took the driver object and cast it into a JavascriptExecutor so that we would have the ability to invoke the executeScript() method. Next, we executed the JavaScript snippet and passed in the original element as an argument. Finally, we took the response of the executeScript() call and cast it into a string that we can return as a result of the method. Generally, getting text is a code smell. Your tests should not rely on specific text being displayed on a website because content always changes. Maintaining tests that check the content of a site is a lot of work, and it makes your functional tests brittle. The best thing to do is test the mechanism that injects the content into the website. If you use a CMS that injects text into a specific template key, you can test whether each element has the correct template key associated with it. I want to see a more complex example! So you want to see something more complicated. The Advanced User Interactions API cannot deal with HTML5 drag and drop. So, what happens if we come across an HTML5 drag-and-drop implementation that we want to automate? Well, we can use the JavascriptExecutor. 
Let's have a look at the markup for the HTML5 drag-and-drop page: <!DOCTYPE html> <html lang="en"> <head> <meta charset=utf-8> <title>Drag and drop</title> <style type="text/css"> li { list-style: none; } li a { text-decoration: none; color: #000; margin: 10px; width: 150px; border-width: 2px; border-color: black; border-style: groove; background: #eee; padding: 10px; display: block; } *[draggable=true] { cursor: move; } ul { margin-left: 200px; min-height: 300px; } #obliterate { background-color: green; height: 250px; width: 166px; float: left; border: 5px solid #000; position: relative; margin-top: 0; } #obliterate.over { background-color: red; } </style> </head> <body> <header> <h1>Drag and drop</h1> </header> <article> <p>Drag items over to the green square to remove them</p> <div id="obliterate"></div> <ul> <li><a id="one" href="#" draggable="true">one</a></li> <li><a id="two" href="#" draggable="true">two</a></li> <li><a id="three" href="#" draggable="true">three</a></li> <li><a id="four" href="#" draggable="true">four</a></li> <li><a id="five" href="#" draggable="true">five</a></li> </ul> </article> </body> <script> var draggableElements = document.querySelectorAll('li > a'), obliterator = document.getElementById('obliterate'); for (var i = 0; i < draggableElements.length; i++) { element = draggableElements[i]; element.addEventListener('dragstart', function (event) { event.dataTransfer.effectAllowed = 'copy'; event.dataTransfer.setData('being-dragged', this.id); }); } obliterator.addEventListener('dragover', function (event) { if (event.preventDefault) event.preventDefault(); obliterator.className = 'over'; event.dataTransfer.dropEffect = 'copy'; return false; }); obliterator.addEventListener('dragleave', function () { obliterator.className = ''; return false; }); obliterator.addEventListener('drop', function (event) { var elementToDelete = document.getElementById( event.dataTransfer.getData('being-dragged')); elementToDelete.parentNode.removeChild(elementToDelete); obliterator.className = ''; return false; }); </script> </html> Note that you need a browser that supports HTML5/CSS3 for this page to work. The latest versions of Google Chrome, Opera Blink, Safari, and Firefox will work. You may have issues with Internet Explorer (depending on the version that you are using). For an up-to-date list of HTML5/CSS3 support, have a look at http://caniuse.com. If you try to use the Advanced User Interactions API to automate this page, you will find that it just doesn't work. It looks like it's time to reach for JavascriptExecutor. First of all, we need to write some JavaScript that can simulate the events that we need to trigger to perform the drag-and-drop action. To do this, we are going to create three JavaScript functions. The first function is going to create a JavaScript event: function createEvent(typeOfEvent) { var event = document.createEvent("CustomEvent"); event.initCustomEvent(typeOfEvent, true, true, null); event.dataTransfer = { data: {}, setData: function (key, value) { this.data[key] = value; }, getData: function (key) { return this.data[key]; } }; return event; } We then need to write a function that will fire events that we have created. This also allows you to pass in the dataTransfer value set on an element. 
We need this to keep track of the element that we are dragging:

function dispatchEvent(element, event, transferData) {
  if (transferData !== undefined) {
    event.dataTransfer = transferData;
  }
  if (element.dispatchEvent) {
    element.dispatchEvent(event);
  } else if (element.fireEvent) {
    element.fireEvent("on" + event.type, event);
  }
}

Finally, we need something that will use these two functions to simulate the drag-and-drop action:

function simulateHTML5DragAndDrop(element, target) {
  var dragStartEvent = createEvent('dragstart');
  dispatchEvent(element, dragStartEvent);
  var dropEvent = createEvent('drop');
  dispatchEvent(target, dropEvent, dragStartEvent.dataTransfer);
  var dragEndEvent = createEvent('dragend');
  dispatchEvent(element, dragEndEvent, dropEvent.dataTransfer);
}

Note that the simulateHTML5DragAndDrop function needs us to pass in two elements: the element that we want to drag, and the element that we want to drag it to. It's always a good idea to try out your JavaScript in a browser first. You can copy the preceding functions into the JavaScript console in a modern browser and then try using them to make sure that they work as expected. If things go wrong in your Selenium test, you then know that it is most likely an error invoking it via the JavascriptExecutor rather than a bad piece of JavaScript. We now need to take these scripts and put them into a JavascriptExecutor along with something that will call the simulateHTML5DragAndDrop function:

private void simulateDragAndDrop(WebElement elementToDrag, WebElement target) throws Exception {
    WebDriver driver = getDriver();
    JavascriptExecutor js = (JavascriptExecutor) driver;
    js.executeScript("function createEvent(typeOfEvent) {\n" +
            "  var event = document.createEvent(\"CustomEvent\");\n" +
            "  event.initCustomEvent(typeOfEvent, true, true, null);\n" +
            "  event.dataTransfer = {\n" +
            "    data: {},\n" +
            "    setData: function (key, value) {\n" +
            "      this.data[key] = value;\n" +
            "    },\n" +
            "    getData: function (key) {\n" +
            "      return this.data[key];\n" +
            "    }\n" +
            "  };\n" +
            "  return event;\n" +
            "}\n" +
            "\n" +
            "function dispatchEvent(element, event, transferData) {\n" +
            "  if (transferData !== undefined) {\n" +
            "    event.dataTransfer = transferData;\n" +
            "  }\n" +
            "  if (element.dispatchEvent) {\n" +
            "    element.dispatchEvent(event);\n" +
            "  } else if (element.fireEvent) {\n" +
            "    element.fireEvent(\"on\" + event.type, event);\n" +
            "  }\n" +
            "}\n" +
            "\n" +
            "function simulateHTML5DragAndDrop(element, target) {\n" +
            "  var dragStartEvent = createEvent('dragstart');\n" +
            "  dispatchEvent(element, dragStartEvent);\n" +
            "  var dropEvent = createEvent('drop');\n" +
            "  dispatchEvent(target, dropEvent, dragStartEvent.dataTransfer);\n" +
            "  var dragEndEvent = createEvent('dragend');\n" +
            "  dispatchEvent(element, dragEndEvent, dropEvent.dataTransfer);\n" +
            "}\n" +
            "\n" +
            "var elementToDrag = arguments[0];\n" +
            "var target = arguments[1];\n" +
            "simulateHTML5DragAndDrop(elementToDrag, target);",
            elementToDrag, target);
}

This method is really just a wrapper around the JavaScript code. We take a driver object and cast it into a JavascriptExecutor. We then pass the JavaScript code into the executor as a string. We have made a couple of additions to the JavaScript functions that we previously wrote. Firstly, we set a couple of variables (mainly for code clarity; they can quite easily be inlined) that take the WebElements that we have passed in as arguments. Finally, we invoke the simulateHTML5DragAndDrop function using these elements.
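As a side note on the design, embedding a large script as a concatenated Java string works, but it quickly becomes hard to read and maintain. One possible alternative, sketched here as an assumption rather than taken from the book, is to keep the JavaScript in a resource file on the test classpath (for example, a hypothetical /dragAndDrop.js) and load it before execution:

// Hypothetical variant: load the drag-and-drop JavaScript from a classpath resource.
// Assumes Java 9+ for InputStream.readAllBytes(); needs java.io.* and java.nio.charset.StandardCharsets imports.
private String loadScript(String resourceName) throws IOException {
    try (InputStream in = getClass().getResourceAsStream(resourceName)) {
        if (in == null) {
            throw new FileNotFoundException("Script resource not found: " + resourceName);
        }
        return new String(in.readAllBytes(), StandardCharsets.UTF_8);
    }
}

private void simulateDragAndDropFromFile(WebElement elementToDrag, WebElement target) throws Exception {
    JavascriptExecutor js = (JavascriptExecutor) getDriver();
    // Append the call so the loaded simulateHTML5DragAndDrop function is actually invoked.
    String script = loadScript("/dragAndDrop.js")
            + "\nsimulateHTML5DragAndDrop(arguments[0], arguments[1]);";
    js.executeScript(script, elementToDrag, target);
}

This keeps the JavaScript editable and testable as ordinary JavaScript, at the cost of shipping one extra file with the tests.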
The final piece of the puzzle is to write a test that utilizes the simulateDragAndDrop method, as follows:

@Test
public void dragAndDropHTML5() throws Exception {
    WebDriver driver = getDriver();
    driver.get("http://ch6.masteringselenium.com/dragAndDrop.html");
    final By destroyableBoxes = By.cssSelector("ul > li > a");
    WebElement obliterator = driver.findElement(By.id("obliterate"));
    WebElement firstBox = driver.findElement(By.id("one"));
    WebElement secondBox = driver.findElement(By.id("two"));
    assertThat(driver.findElements(destroyableBoxes).size(), is(equalTo(5)));
    simulateDragAndDrop(firstBox, obliterator);
    assertThat(driver.findElements(destroyableBoxes).size(), is(equalTo(4)));
    simulateDragAndDrop(secondBox, obliterator);
    assertThat(driver.findElements(destroyableBoxes).size(), is(equalTo(3)));
}

This test finds a couple of boxes and destroys them one by one using the simulated drag and drop. As you can see, the JavascriptExecutor is extremely powerful.

Can I use JavaScript libraries?
The logical progression is, of course, to write your own JavaScript libraries that you can import instead of sending everything over as a string. Alternatively, maybe you would just like to import an existing library. Let's write some code that allows you to import a JavaScript library of your choice. It's not a particularly complex piece of JavaScript. All that we are going to do is create a new <script> element in a page and then load our library into it, as follows:

public void injectScript(String scriptURL) throws Exception {
    WebDriver driver = getDriver();
    JavascriptExecutor js = (JavascriptExecutor) driver;
    js.executeScript("function injectScript(url) {\n" +
            "  var script = document.createElement('script');\n" +
            "  script.src = url;\n" +
            "  var head = document.getElementsByTagName('head')[0];\n" +
            "  head.appendChild(script);\n" +
            "}\n" +
            "\n" +
            "var scriptURL = arguments[0];\n" +
            "injectScript(scriptURL);", scriptURL);
}

We have again set arguments[0] to a variable before injecting it for clarity, but you can inline this part if you want to. All that remains now is to inject this into a page and check whether it works. Let's write a test! We are going to use this function to inject jQuery into the Google website. The first thing that we need to do is write a method that can tell us whether jQuery has been loaded or not, as follows:

public Boolean isjQueryLoaded() throws Exception {
    WebDriver driver = getDriver();
    JavascriptExecutor js = (JavascriptExecutor) driver;
    return (Boolean) js.executeScript("return typeof jQuery != 'undefined';");
}

Now, we need to put all of this together in a test, as follows:

@Test
public void injectjQueryIntoGoogle() throws Exception {
    WebDriver driver = DriverFactory.getDriver();
    driver.get("http://www.google.com");
    assertThat(isjQueryLoaded(), is(equalTo(false)));
    injectScript("https://code.jquery.com/jquery-latest.min.js");
    assertThat(isjQueryLoaded(), is(equalTo(true)));
}

It's a very simple test. We loaded the Google website. Then, we checked whether jQuery existed. Once we were sure that it didn't exist, we injected jQuery into the page. Finally, we again checked whether jQuery existed. We have used jQuery in our example, but you don't have to use jQuery. You can inject any script that you desire.

Should I inject JavaScript libraries?
It's very easy to inject JavaScript into a page, but stop and think before you do it. Adding lots of different JavaScript libraries may affect the existing functionality of the site.
You may have functions in your JavaScript that overwrite existing functions that are already on the page and break the core functionality. If you are testing a site, it may make all of your tests invalid. Failures may arise because there is a clash between the scripts that you inject and the existing scripts used on the site. The flip side is also true: injecting a script may make broken functionality work. If you are going to inject scripts into an existing site, be sure that you know what the consequences are. If you are going to regularly inject a script, it may be a good idea to add some assertions to ensure that the functions that you are injecting do not already exist before you inject the script. This way, your tests will fail if the developers add a JavaScript function with the same name at some point in the future without your knowledge.

What about asynchronous scripts?
Everything that we have looked at so far has been a synchronous piece of JavaScript. However, what if we wanted to perform some asynchronous JavaScript calls as a part of our test? Well, we can do this. The JavascriptExecutor also has a method called executeAsyncScript(). This will allow you to run some JavaScript that does not respond instantly. Let's have a look at some examples. First of all, we are going to write a very simple bit of JavaScript that will wait for 25 seconds before triggering a callback, as follows:

@Test
private void javascriptExample() throws Exception {
    WebDriver driver = DriverFactory.getDriver();
    driver.manage().timeouts().setScriptTimeout(60, TimeUnit.SECONDS);
    JavascriptExecutor js = (JavascriptExecutor) driver;
    js.executeAsyncScript("var callback = arguments[arguments.length - 1]; window.setTimeout(callback, 25000);");
    driver.get("http://www.google.com");
}

Note that we defined a JavaScript variable named callback, which uses a script argument that we have not set. For asynchronous scripts, Selenium needs to have a callback defined, which is used to detect when the JavaScript that you are executing has finished. This callback object is automatically added to the end of your arguments array. This is what we have defined as the callback variable. If we now run the script, it will load our browser and then sit there for 25 seconds as it waits for the JavaScript snippet to complete and call the callback. It will then load the Google website and finish. We have also set a script timeout on the driver object that will wait for up to 60 seconds for our piece of JavaScript to execute. Let's see what happens if our script takes longer to execute than the script timeout:

@Test
private void javascriptExample() throws Exception {
    WebDriver driver = DriverFactory.getDriver();
    driver.manage().timeouts().setScriptTimeout(5, TimeUnit.SECONDS);
    JavascriptExecutor js = (JavascriptExecutor) driver;
    js.executeAsyncScript("var callback = arguments[arguments.length - 1]; window.setTimeout(callback, 25000);");
    driver.get("http://www.google.com");
}

This time, when we run our test, it waits for 5 seconds and then throws a TimeoutException. It is important to set a script timeout on the driver object when running asynchronous scripts, to give them enough time to execute. What do you think will happen if we execute this as a normal script?
@Test
private void javascriptExample() throws Exception {
    WebDriver driver = DriverFactory.getDriver();
    driver.manage().timeouts().setScriptTimeout(5, TimeUnit.SECONDS);
    JavascriptExecutor js = (JavascriptExecutor) driver;
    js.executeScript("var callback = arguments[arguments.length - 1]; window.setTimeout(callback, 25000);");
    driver.get("http://www.google.com");
}

You may have been expecting an error, but that's not what you got. The script was executed as normal because Selenium was not waiting for a callback, so it didn't wait for the script to complete. Since Selenium did not wait for the script to complete, it didn't hit the script timeout. Hence, no error was thrown. Wait a minute. What about the callback definition? There was no argument that was used to set the callback variable. Why didn't it blow up? Well, JavaScript isn't as strict as Java. It tried to work out what arguments[arguments.length - 1] resolves to and realized that it is not defined. Since it is not defined, it set the callback variable to null. Our test then completed before setTimeout() had a chance to complete its call. So, you won't see any console errors. As you can see, it's very easy to make a small error that stops things from working when you are dealing with asynchronous JavaScript. It's also very hard to find these errors because there can be very little user feedback. Always take extra care when using the JavascriptExecutor to execute asynchronous bits of JavaScript.

Summary
In this article, we:
Learned how to use a JavaScript executor to execute JavaScript snippets in the browser through Selenium
Learned about passing arguments into a JavaScript executor and the sort of arguments that are supported
Learned what the possible return types are for a JavaScript executor
Gained a good understanding of when we shouldn't use a JavaScript executor
Worked through a series of examples that showed ways in which we can use a JavaScript executor to enhance our tests

Resources for Article:
Further resources on this subject:
JavaScript tech page
Cross-browser Tests using Selenium WebDriver
Selenium Testing Tools
Learning Selenium Testing Tools with Python