
How-To Tutorials - Programming

1083 Articles

Enabling Spring Faces support

Packt
28 Oct 2009
9 min read
The main focus of the Spring Web Flow framework is to provide the infrastructure for describing the page flow of a web application. The flow itself is an important element of a web application, because it describes the application's structure, particularly the structure of the implemented business use cases. But besides the flow, which works only in the background, the users of your application are interested in the Graphical User Interface (GUI). We therefore need a way to provide a rich user interface to them. One framework which offers such components is JavaServer Faces (JSF). With the release of Spring Web Flow 2, an integration module called Spring Faces was introduced to connect these two technologies.

This article is not an introduction to the JavaServer Faces technology; it only describes the integration of Spring Web Flow 2 with JSF. If you have never worked with JSF before, please refer to the JSF reference to gain knowledge of the essential concepts of JavaServer Faces.

JavaServer Faces (JSF): a brief introduction

The JavaServer Faces (JSF) technology is a web application framework whose goal is to make the development of user interfaces for Java EE web applications easier. Instead of the request-driven approach used by traditional MVC web frameworks, JSF uses a component-based approach with its own lifecycle model. Version 1.0 of JSF is specified in JSR (Java Specification Request) 127 (http://jcp.org/en/jsr/detail?id=127).

To use the Spring Faces module, you have to add some configuration to your application; the individual configuration blocks are described below. The first step is to configure the JSF framework itself. That is done in the deployment descriptor of the web application, web.xml. The servlet has to be loaded at the startup of the application, which is done with the <load-on-startup>1</load-on-startup> element:

```xml
<!-- Initialization of the JSF implementation.
     The servlet is not used at runtime. -->
<servlet>
  <servlet-name>Faces Servlet</servlet-name>
  <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
  <servlet-name>Faces Servlet</servlet-name>
  <url-pattern>*.faces</url-pattern>
</servlet-mapping>
```

Two classes are especially important when working with JavaServer Faces: javax.faces.webapp.FacesServlet and javax.faces.context.FacesContext. You can think of FacesServlet as the core of each JSF application; it is sometimes called an infrastructure servlet. It is important to mention that each JSF application in a web container has its own instance of the FacesServlet class, which means an infrastructure servlet cannot be shared between web applications on the same JEE web container. FacesContext is the data container which encapsulates all the information that is necessary around the current request. For the usage of Spring Faces, it is important to know that FacesServlet is only used to instantiate the framework; Spring Faces makes no further use of it.
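To make the role of FacesContext more concrete, the following minimal sketch shows how application code can read request information through it. The helper class and the message text are illustrative only; they are not part of the article's sample application.

```java
import javax.faces.application.FacesMessage;
import javax.faces.context.ExternalContext;
import javax.faces.context.FacesContext;

// Hypothetical helper, not from the article: reads data about the
// request that is currently being processed by the JSF lifecycle.
public class CurrentRequestInfo {

    public String describeRequest() {
        // FacesContext is created per request and encapsulates all
        // the information around that request.
        FacesContext context = FacesContext.getCurrentInstance();

        // ExternalContext wraps the underlying servlet request/response.
        ExternalContext external = context.getExternalContext();
        String contextPath = external.getRequestContextPath();

        // Messages queued here are rendered by <h:messages> components.
        context.addMessage(null,
                new FacesMessage("Request served from " + contextPath));
        return contextPath;
    }
}
```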
To be able to use the components from the Spring Faces library, you have to use Facelets instead of JSP, so we have to configure that mechanism as well. If you are interested in reading more about the Facelets technology, visit the Facelets homepage on java.net at https://facelets.dev.java.net. The article at http://www.ibm.com/developerworks/java/library/j-facelets/ is also a good introduction to Facelets.

The configuration is again done in the deployment descriptor of your web application, web.xml. As the following sample shows, the configuration consists of a context parameter named javax.faces.DEFAULT_SUFFIX whose value is set to .xhtml:

```xml
<context-param>
  <param-name>javax.faces.DEFAULT_SUFFIX</param-name>
  <param-value>.xhtml</param-value>
</context-param>
```

Inside the Facelets technology

To present the separate views in a JSF context, you need a specific view handler technology. One such technology is the well-known JavaServer Pages (JSP) technology. Facelets are an alternative to JSP in the JSF context: instead of defining the views in JSP syntax, you use XML, and the pages are created as XHTML. The Facelets technology offers the following features:

- A template mechanism, similar to the one known from the Tiles framework
- The composition of components based on other components
- Custom logic tags
- Expression functions
- The possibility to use plain HTML for your pages, which makes it easy to create pages and view them directly in a browser, because you don't need an application server between the steps of designing a page
- The possibility to create libraries of your components

The following sample shows an XHTML page which uses the component aliasing mechanism of the Facelets technology:

```xml
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
  <body>
    <form jsfc="h:form">
      <span jsfc="h:outputText" value="Welcome to our page: #{user.name}"
            disabled="#{empty user}" />
      <input type="text" jsfc="h:inputText" value="#{bean.theProperty}" />
      <input type="submit" jsfc="h:commandButton" value="OK"
             action="#{bean.doIt}" />
    </form>
  </body>
</html>
```

The code snippet above uses the expression language of the JSF technology to access data; for example, the #{user.name} expression accesses the name property of the user instance.

What is component aliasing?

One of the features of the Facelets technology is that a page can be viewed directly in a browser, without the page running inside a JEE container environment. This is possible through the component aliasing feature. With this feature, you use normal HTML elements, for example an input element, and additionally refer to the component that is used behind the scenes with the jsfc attribute. An example is <input type="text" jsfc="h:inputText" value="#{bean.theProperty}" />. If you open the page directly in a browser, the normal input element is used; if you run it inside your application, the h:inputText element of the component library is used.

The ResourceServlet

One main part of the JSF framework is its GUI components, and these components often consist of many files besides the class files. If you use many of these components, the problem of handling all these files arises. To solve this problem, files such as JavaScript and CSS (Cascading Style Sheets) can be delivered inside the JAR archive of the component. Delivering the files inside the JAR lets you organize a component in one file, which makes the deployment and maintenance of your component library easier.
Regardless of the framework you use, the result is HTML, and the resources referenced by the HTML pages are requested as URLs. We therefore need a way to access resources inside an archive via the HTTP protocol. To solve this problem, there is a servlet named ResourceServlet (in the org.springframework.js.resource package). The servlet can deliver the following resources:

- Resources which are available inside the web application (for example, CSS files)
- Resources inside a JAR archive

The configuration of the servlet in web.xml is shown below:

```xml
<servlet>
  <servlet-name>Resource Servlet</servlet-name>
  <servlet-class>org.springframework.js.resource.ResourceServlet</servlet-class>
  <load-on-startup>0</load-on-startup>
</servlet>
<servlet-mapping>
  <servlet-name>Resource Servlet</servlet-name>
  <url-pattern>/resources/*</url-pattern>
</servlet-mapping>
```

It is important that you use the correct url-pattern inside servlet-mapping; as shown in the sample above, it has to be /resources/*. If one of the Spring Faces components does not work, first check that you have the correct mapping for the servlet. All resources in the context of Spring Faces should be retrieved through this servlet, so the base URL is /resources.

Internals of the ResourceServlet

The ResourceServlet implements only the GET method, so it can only be accessed via GET requests and cannot serve POST requests. For the explanation of the mechanism, let us assume that we have registered the ResourceServlet as described above and we request a resource through the following sample URL: http://localhost:8080/flowtrac-web-jsf/resources/css/test1.css.

How to request more than one resource with one request

First, you can specify the appended parameter. The value of the parameter is the path of the additional resource you want to retrieve, for example: http://localhost:8080/flowtrac-web-jsf/resources/css/test1.css?appended=/css/test2.css. If you want to specify more than one additional resource, you can separate the paths in the value of the appended parameter with commas, for example: http://localhost:8080/flowtrac-web-jsf/resources/css/test1.css?appended=/css/test2.css,/css/test3.css. Additionally, it is possible to use the comma delimiter inside the path info itself, for example: http://localhost:8080/flowtrac-web-jsf/resources/css/test1.css,/css/test2.css. It is important to mention that if one of the requested resources is not available, none of them is delivered.

This mechanism can be used to deliver more than one CSS file in one request. From the development point of view, it can make sense to modularize your CSS files to keep them maintainable; with this concept, the client receives one combined CSS file instead of many. From the performance point of view, it is better to have as few requests as possible for rendering a page, so it makes sense to combine the CSS files of a page. Internally, the files are written to the response in the same sequence as they are requested.

To understand how a resource is addressed, we can separate the sample URL into its parts: http://localhost:8080 points to a local servlet container with an HTTP connector at port 8080, flowtrac-web-jsf is the context root of the web application, resources matches the url-pattern of the ResourceServlet, and /css/test1.css is the path info that identifies the resource.
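As a rough illustration of how such an infrastructure servlet works internally, the following sketch serves classpath resources over GET only. It is a simplified approximation for orientation, not Spring's actual ResourceServlet implementation (the real servlet also handles combining multiple resources and caching), and the META-INF lookup prefix is an assumption.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Simplified sketch of a GET-only resource servlet. Spring's real
// ResourceServlet additionally combines resources and handles caching.
public class SimpleResourceServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request,
                         HttpServletResponse response)
            throws ServletException, IOException {
        // For /resources/css/test1.css the path info is /css/test1.css.
        String path = request.getPathInfo();
        if (path == null) {
            response.sendError(HttpServletResponse.SC_NOT_FOUND);
            return;
        }
        // Look the resource up on the classpath, which also covers
        // files packaged inside JAR archives.
        try (InputStream in =
                getClass().getResourceAsStream("/META-INF" + path)) {
            if (in == null) {
                response.sendError(HttpServletResponse.SC_NOT_FOUND);
                return;
            }
            OutputStream out = response.getOutputStream();
            byte[] buffer = new byte[4096];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
    }
    // No doPost override: like ResourceServlet, only GET is supported.
}
```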


Mozilla shares plans to bring desktop applications, games to WebAssembly and make deeper inroads for the future web

Prasad Ramesh
23 Oct 2018
10 min read
WebAssembly defines an Abstract Syntax Tree (AST) in a binary format and a corresponding assembly-like text format for executable code in web pages. It can be considered a new language or a web standard, and you can create and debug code in its plain text format. It appeared in browsers last year, but that was just a barebones version; many new features are on the way that could transform what you can do with WebAssembly.

The minimum viable product (MVP)

WebAssembly started with Emscripten, a toolchain that made C++ code run on the web by transpiling it to JavaScript. But the automatically generated JS was still significantly slower than native code. Mozilla engineers found a type system hidden in the generated JS and figured out how to make this JS run really fast; that subset is now called asm.js. Going further was not possible in JavaScript itself, and a new language was needed, designed specifically to be compiled to. Thus WebAssembly was born. Getting the MVP of WebAssembly running required the following:

- Compile target: a language-agnostic compile target, to support more languages than just C and C++.
- Fast execution: the compile target had to be designed for fast execution, in order to keep up with user expectations of smooth interactions.
- Compact: a compact compile target that loads quickly, even for pages with the large code bases of web apps or desktop apps ported to the web.
- Linear memory: a linear memory model that gives access to specific parts of memory and nothing else. This is exposed using TypedArrays, similar to a JavaScript array except that it only contains bytes of memory.

This was the MVP vision of WebAssembly: it allowed many different kinds of desktop applications to work in the browser without compromising on speed.

Heavy desktop applications

The next goal is to run heavyweight desktop applications in the browser, something like Photoshop or Visual Studio. There are already some implementations of this: Autodesk AutoCAD and Adobe Lightroom. The features needed are:

- Threading: support for using the multiple cores of modern CPUs. A proposal for threading is almost done. SharedArrayBuffers, an important part of threading, had to be turned off this year due to the Spectre vulnerability; they will be turned on again.
- SIMD: single instruction, multiple data (SIMD) makes it possible to take a chunk of memory and split it up across different execution units/cores. It is under active development.
- 64-bit addressing: 32-bit memory addresses only allow 4 GB of linear memory. 64-bit addressing gives 16 exabytes of address space. The approach to incorporating this will be similar to how x86 or ARM added support for 64-bit addressing.
- Streaming compilation: compiling a WebAssembly file while it is still being downloaded, which allows very fast compilation and results in faster web applications.
- Implicit HTTP caching: storing the compiled code of a web page in the HTTP cache and reusing it, so compilation is skipped for any page you have already visited.
- Other improvements: ongoing discussions on how to improve load time even further.

Once these features are implemented, even heavier apps can run in the browser.

Small modules interoperating with JavaScript

In addition to heavy applications and games, WebAssembly is also meant for regular web development. Sometimes, small modules in an app do a lot of the work, and the intent is to make it easier to port these modules. This is already happening with heavy applications, but for widespread use a few more things need to be in place.
- Fast calls between JS and WebAssembly: integrating a small module requires a lot of calls between JS and WebAssembly, so the goal is to make these calls faster. In the MVP the calls weren't fast. They are already fast in Firefox, and other browsers are working on it.
- Fast and easy data exchange: with JS and WebAssembly calling each other frequently, data also needs to be passed between them. The challenge is that WebAssembly only understands numbers, so passing complex values is currently difficult: an object has to be converted into numbers, put into linear memory, and WebAssembly is passed its location in linear memory. There are many proposals underway, most notably from the Rust ecosystem, which has created tools to automate this.
- ESM integration: a WebAssembly module isn't actually part of the JS module graph. Currently, developers instantiate a WebAssembly module using an imperative API; ECMAScript module integration is necessary to use import and export with JS. The proposals have made progress, and work with other browser vendors has been initiated.
- Toolchain integration: there needs to be a place to distribute and download the modules, and tools to bundle them. While there is no need for a separate ecosystem, the tools do need to be integrated. There are tools like wasm-pack to automate this.
- Backwards compatibility: to support older browser versions, even those that existed before WebAssembly came into the picture, so developers can avoid writing a second implementation for old browsers. There's a wasm2js tool that takes a wasm file and outputs JS; it is not going to be as fast, but it will work with older versions.

The proposal for small modules in WebAssembly is close to being complete, and on completion it will open up the path for work on the following areas.

JS frameworks and compile-to-JS languages

There are two use cases here: rewriting large parts of JavaScript frameworks in WebAssembly, and compiling statically typed compile-to-JS languages to WebAssembly instead of JS. For this to happen, WebAssembly needs to support high-level language features:

- Garbage collector: integration with the browser's garbage collector, to speed things up by working with components managed by the JS VM. Two proposals are underway and should be incorporated sometime next year.
- Exception handling: better support for exception handling, to handle the exceptions and actions of different languages. C#, C++, and JS use exceptions extensively. It is in the R&D phase.
- Debugging: the same level of debugging support as JS and compile-to-JS languages have. There is support in browser devtools, but it is not ideal; a subgroup of the WebAssembly CG is working on it.
- Tail calls: functional languages rely on these. A tail call allows calling a new function without adding a new stack frame to the stack. There is a proposal underway.

Once these are in place, JS frameworks and many compile-to-JS languages will be unlocked.

Outside the browser

This refers to everything that happens in systems and places other than your local machine. A really important part is the link, a very special kind of link: people can link to pages without having to put them in any central registry and without needing to ask who the other person is. It is this ease of linking that formed global communities. However, there are two unaddressed problems.

Problem #1: How does a website know what code to deliver to your machine, given the OS and device you are using?
It is not practical to have different versions of the code for every possible device. The website has only one code base, the source code, which is translated for the user's machine. With portability, you can load code from unknown people while not knowing what kind of device they are using. This brings us to the second problem.

Problem #2: If you do not know the people whose web pages you load, the question of trust arises: the code from a web page can contain malicious code. This is where security comes into the picture. Security is implemented at the browser level, which filters out malicious content if it is detected. In this sense, WebAssembly is just another tool in the browser toolbox, which it is.

Node.js

WebAssembly can bring full portability to Node.js. Node gives most of the portability of JavaScript on the web, but there are cases where performance needs to be improved, which can be done via Node's native modules. These modules are written in languages such as C. If these native modules were written in WebAssembly instead, they wouldn't need to be compiled specifically for the target architecture. Full portability in Node would mean the exact same Node app running across different kinds of devices without recompiling. But this is not possible currently, as WebAssembly does not have direct access to the system's resources.

Portable interface

The Node core team would have to figure out the set of functions to expose and the API to use. It would be nice if this were something standard, not just specific to Node; if done right, the same API could be implemented for the web. There is also a proposal called package name maps, providing a mechanism to map a module name to a path to load the module from. This looks likely to happen and will unlock other use cases.

Other use cases outside the browser

Now let's look at the other use cases outside the browser.

CDNs, serverless, and edge computing

The code of your website resides on a server maintained by a service provider, who maintains the server and makes sure the code is close to all the users of your website. Why use WebAssembly in these cases? Code in a process doesn't have boundaries: functions have access to all memory in that process and can call any other functions. When running different services from different people, this is a problem. To make it work, a runtime needs to be created, which takes time and effort, so a common runtime that could be used across different use cases would speed up development. There is no standard runtime for this yet, but some runtime projects are underway.

Portable CLI tools

There are efforts to get WebAssembly used in more traditional operating systems. When this happens, you can have portable CLI tools that run across different kinds of operating systems.

Internet of Things

Smaller IoT devices, like wearables, have resource constraints: small processors and little memory. What would help in this situation is a compiler like Cranelift and a runtime like wasmtime. Many of these devices are also different from one another, and portability would address that issue.

Clearly, the initial implementation of WebAssembly was indeed just an MVP, and many more improvements are underway to make it faster and better. Will WebAssembly succeed in dominating all forms of software development? For in-depth information with diagrams, visit the Mozilla website.


GitHub Universe 2019: GitHub for mobile, GitHub Archive Program and more announced amid protests against GitHub’s ICE contract

Vincy Davis
14 Nov 2019
4 min read
Yesterday, GitHub commenced its popular product conference, GitHub Universe 2019, in San Francisco. The two-day annual conference celebrates GitHub's 40+ million developers and their contributions to the open source community. Day 1 of the conference had many interesting announcements, like GitHub for mobile, the GitHub Archive Program, and more. Let's look at some of the major announcements at the GitHub Universe 2019 conference.

GitHub for mobile, iOS (beta)

GitHub for mobile is a beta app that aims to give users the flexibility to work and interact with their team wherever they want, enabling them to share feedback on a design discussion or review code outside a complex development environment. The native app will adapt to any screen size and will also work in dark mode, based on the device preference. Currently the app is available only on iOS; the GitHub team has said that an Android version will follow soon.

GitHub Archive Program

"Our world is powered by open source software. It's a hidden cornerstone of our civilization and the shared heritage of all humanity. The mission of the GitHub Archive Program is to preserve it for generations to come," states the official GitHub blog. GitHub has partnered with the Stanford Libraries, the Long Now Foundation, the Internet Archive, the Software Heritage Foundation, Piql, Microsoft Research, and the Bodleian Library to preserve all the available open source code in the world. It will safeguard the data by storing multiple copies across various data formats and locations, including a "very-long-term archive" called the GitHub Arctic Code Vault, which is designed to last at least 1,000 years.

Read More: GitHub Satellite 2019 focuses on community, security, and enterprise

Automating workflows from code to cloud

General availability of GitHub Actions

Last year, at the GitHub Universe conference, GitHub Actions was announced in beta. This year, GitHub has made it generally available to all users. In the past year, GitHub Actions has received contributions from the developers of AWS, Google, and others. Actions has now developed into a new standard for building and sharing automation for software development, including a CI/CD solution and native package management. GitHub has also announced the free use of self-hosted runners and artifact caching.

General availability of GitHub Packages

In May this year, GitHub announced the beta version of the GitHub Package Registry, its new package management service. Later, in September, after gathering community feedback, GitHub announced that the service had proxy support for the primary npm registry. Since its launch, GitHub Packages has received over 30,000 unique packages, serving the needs of over 10,000 organizations. Now, at GitHub Universe 2019, the GitHub team has announced the general availability of GitHub Packages and has added support for using the GitHub Actions token.

These were some of the major announcements from day 1 of the GitHub Universe 2019 conference; head over to GitHub's blog for more details of the event.
Tech workers protest against GitHub's ICE contract

Major product announcements aside, one thing that garnered a lot of attention at the GitHub Universe conference was the protest conducted by GitHub workers along with the Tech Workers Coalition to oppose GitHub's $200,000 contract with Immigration and Customs Enforcement (ICE). Many high-profile speakers have dropped out of the GitHub Universe 2019 conference, and at least five GitHub employees have resigned from GitHub over its support for ICE.

Read More: Largest 'women in tech' conference, Grace Hopper Celebration, renounces Palantir as a sponsor due to concerns over its work with the ICE

Yesterday at the event, the protesting tech workers brought a giant cage to symbolize how ICE uses cages to detain migrant children. Tech workers around the world have extended their support to the protest against GitHub.


Rust 1.39 releases with stable version of async-await syntax, better ergonomics for match guards, attributes on function parameters, and more

Vincy Davis
08 Nov 2019
4 min read
Less than two months after announcing Rust 1.38, the Rust team announced the release of Rust 1.39 yesterday. The new release brings the stable version of the async-await syntax, which allows users to define async functions and blocks and to .await them. The other improvements in Rust 1.39 include shared references to by-move bindings in match guards and attributes on function parameters.

The stable version of the async-await syntax

A stable async function (written async fn instead of fn) returns a Future when called. A Future is a suspended computation which is driven to conclusion by .awaiting it. Along with async fn, the async { ... } and async move { ... } blocks can be used to define async literals. According to Nicholas D. Matsakis, a member of the release team, the first stable support of async-await is a "minimum viable product (MVP)", and the Rust team will now try to improve the syntax by polishing and extending it for future use. "With this stabilization, we hope to give important crates, libraries, and the ecosystem time to prepare for async/.await, which we'll tell you more about in the future," states the official Rust blog.

Some of the major developments in the async ecosystem:

- The tokio runtime will be releasing a number of scheduler improvements with support for the async-await syntax this month.
- The async-std runtime library will put out its first stable release in a few days.
- async-await support has already started to become available in higher-level web frameworks and in other applications, such as the futures_intrusive crate.

Other improvements in Rust 1.39

Better ergonomics for match guards

Earlier versions of Rust disallowed taking shared references to by-move bindings in the if guards of match expressions. Starting from Rust 1.39, the compiler allows binding in the following two ways:

- by-reference: either immutably or mutably, achieved through ref my_var or ref mut my_var respectively.
- by-value: either by-copy, if the bound variable's type implements Copy, or otherwise by-move.

The Rust team hopes that this feature will give developers a smoother and more consistent experience with expressions.

Attributes on function parameters

Unlike previous versions, Rust 1.39 enables three types of attributes on the parameters of functions, closures, and function pointers:

- Conditional compilation: cfg and cfg_attr
- Controlling lints: allow, warn, deny, and forbid
- Helper attributes used by procedural macro attributes

Many users are happy with the Rust 1.39 features and are especially excited about the stable version of the async-await syntax. A user on Hacker News comments, "Async/await lets you write non-blocking, single-threaded but highly interweaved firmware/apps in allocation-free, single-threaded environments (bare-metal programming without an OS). The abstractions around stack snapshots allow seamless coroutines and I believe will make rust pretty much the easiest low-level platform to develop for." Another comment reads, "This is big! Turns out that syntactic support for asynchronous programming in Rust isn't just syntactic: it enables the compiler to reason about the lifetimes in asynchronous code in a way that wasn't possible to implement in libraries. The end result of having async/await syntax is that async code reads just like normal Rust, which definitely wasn't the case before. This is a huge improvement in usability."
A few users have already upgraded to Rust 1.39 and shared their feedback on Twitter. Check out the official announcement for more details; you can also read the blog post on async-await for more information.


All coding and no sleep makes Jack/Jill a dull developer, research confirms

Vincy Davis
03 May 2019
3 min read
In recent years, the software engineering community has taken an interest in human habits that can play a role in developers' productivity. The researchers D. Fucci (HITeC and the University of Hamburg), G. Scanniello and S. Romano (DiMIE, University of Basilicata), and N. Juristo (Technical University of Madrid) have published a paper, "Need for Sleep: the Impact of a Night of Sleep Deprivation on Novice Developers' Performance", that investigates how sleep deprivation impacts developers' productivity.

What was the experiment?

The researchers performed a quasi-experiment with 45 undergraduate Computer Science students at the University of Basilicata in Italy. The participants were asked to work on a programming task which required them to use the popular agile practice of test-first development (TFD). The students were divided into two groups: a treatment group of 23 students who were asked to skip sleep the night before the experiment, and a control group of the remaining students, who slept the night before the experiment.

Outcome of the experiment

The results indicated that sleep deprivation has a negative effect on the capacity of software developers to produce a solution that meets the given requirements. In particular, novice developers who forewent one night of sleep wrote code that was approximately 50% more likely not to fulfill the functional requirements than code produced by developers under normal sleep conditions. Another observation was that sleep deprivation decreased developers' productivity on the development task and hindered their ability to apply the test-first development practice. The researchers also found that sleep-deprived novice developers had to make more fixes to syntactic mistakes in the source code.

As an aftereffect of this paper, experienced developers are recalling, and some regretting, their earlier sleep-deprived programming days.

Recently, the Chinese '996' work routine has come into the picture, wherein tech companies expect their employees to work from 9 am to 9 pm, 6 days a week, leading to 60+ hours of work per week. This kind of work culture deprives developers of any work-life balance and encourages the habit of skipping sleep, thereby decreasing productivity. A user on Reddit declares sleep the key to being a productive coder and not burning out. Another user added, "There's a culture in university computer science around programming for 30+ hours straight (hackathons). I've participated and pulled off some pretty cool things in 48 hours of feverish keyboard whacking and near-constant swearing, but I'd rather stab myself repeatedly with a thumbtack than repeat that experience."

It's high time that companies focus more on the quality of work rather than insisting that developers work long hours, which in turn reduces their productivity. It is clear from the result of this research paper that a single night without sleep can certainly affect the quality of one's work. To know more about the experiment, head over to the research paper.


Npm Inc. co-founder and Chief data officer quits, leaving the community to question the stability of the JavaScript Registry

Fatema Patrawala
22 Jul 2019
6 min read
On Thursday, The Register reported that Laurie Voss, co-founder and chief data officer of NPM Inc, the company behind the JavaScript package registry, had left the company. Voss's last day in office was July 1st, while he officially announced the news on Thursday. Voss joined NPM in January 2014 and decided to leave the company in early May this year.

NPM has faced its share of unrest in the past few months. In March, 5 NPM employees were fired from the company in an unprofessional and unethical way. Later, 3 of those employees were revealed to have been involved in unionization and filed complaints against NPM Inc with the National Labor Relations Board (NLRB). Earlier this month, NPM Inc, at the third attempt, settled the labor claims brought by these three former staffers through the NLRB. Voss's resignation is the third in line, after Rebecca Turner, a former core contributor, resigned in March, and Kat Marchan, former CLI and community architect, resigned from NPM earlier this month.

Voss writes on his blog, "I joined npm in January of 2014 as co-founder, when it was just some ideals and a handful of servers that were down as often as they were up. In the following five and a half years Registry traffic has grown over 26,000%, and worldwide users from about 1 million back then to more than 11 million today. One of our goals when founding npm Inc. was to make it possible for the Registry to run forever, and I believe we have achieved that goal. While I am parting ways with npm, I look forward to seeing my friends and colleagues continue to grow and change the JavaScript ecosystem for the better."

Voss also told The Register that he supported unions: "As far as the labor dispute goes, I will say that I have always supported unions, I think they're great, and at no point in my time at NPM did anybody come to me proposing a union," he said. "If they had, I would have been in favor of it. The whole thing was a total surprise to me." The Register team spoke to one of the former staffers of NPM, who said that employees tend not to talk to management for fear of retaliation, and that Voss seemed uncomfortable defending the company's recent actions and felt powerless to effect change.

In his post, Voss is optimistic about NPM's business areas: "Our paid products, npm Orgs and npm Enterprise, have tens of thousands of happy users and the revenue from those sustains our core operations." However, Business Insider reports that a recent NPM Inc funding round raised only enough to continue operating until early 2020.

A big question on everyone's mind currently is the stability of the public Node.js registry, as most users in the JavaScript community do not have a fallback in place. While the community sees Voss's resignation with appreciation for his accomplishments, some are disappointed that he could not raise his voice against these odds and had to quit. "Nobody outside of the company, and not everyone within it, fully understands how much Laurie was the brains and the conscience of NPM," Jonathan Cowperthwait, former VP of marketing at NPM Inc, told The Register.

CJ Silverio, a principal engineer at Eaze who served as NPM Inc's CTO, said that it's good that Voss is out, but she wasn't sure whether his absence would matter much to the day-to-day operations of NPM Inc. Silverio was fired from NPM Inc late last year, shortly after CEO Bryan Bogensberger's arrival.
"Bogensberger marginalized him almost immediately to get him out of the way, so the company itself probably won't notice the departure," she said. "What should affect fundraising is the massive brain drain the company has experienced, with the entire CLI team now gone, and the registry team steadily departing. At some point they'll have lost enough institutional knowledge quickly enough that even good new hires will struggle to figure out how to cope."

Silverio also mentions that she had heard rumors of NPM eliminating the public registry and continuing only with its paid enterprise service, which would be like killing its own competitive advantage. She says that if the public registry disappears, there are alternative projects, like Entropic, spearheaded by Silverio and fellow developer Chris Dickinson. Entropic is available under an open source Apache 2.0 license. Silverio says, "You can depend on packages from any other Entropic instance, and your home instance will mirror all your dependencies for you so you remain self-sufficient." She added that the software will mirror any packages installed by a legacy package manager, which is to say npm. As a result, the more developers use Entropic, the less they'll need NPM Inc's platform to provide a list of available packages.

Voss feels the scale of npm is 3x bigger than any other registry, with an extremely fast growth rate of approximately 8% month on month. "Creating a company to manage an open source commons creates some tensions and challenges and is not a perfect solution, but it is better than any other solution I can think of, and none of the alternatives proposed have struck me as better or even close to equally good," he said.

With NPM Inc's sustainability at stake, the JavaScript community on Hacker News discussed alternatives in case the public registry comes to an end. One of the comments reads, "If it's true that they want to kill the public registry, that means I may need to seriously investigate Entropic as an alternative. I almost feel like migrating away from the normal registry is an ethical issue now. What percentage of popular packages are available in Entropic? If someone else's repo is not in there, can I add it for them?" Another user responds, "The github registry may be another reasonable alternative... not to mention linking git hashes directly, but that has other issues."

Besides Entropic, another alternative discussed is nixfromnpm, a tool that translates NPM packages to Nix expressions. nixfromnpm is developed by Allen Nelson and two other contributors from Chicago.

Learning Dependency Injection (DI)

Packt
08 Mar 2018
15 min read
In this article by Sherwin John Calleja Tragura, author of the book Spring 5.0 Cookbook, we will learn about the implementation of the Spring container using XML and JavaConfig, and about managing beans in an XML-based container. In this article, you will learn how to:

- Implement the Spring container using XML
- Implement the Spring container using JavaConfig
- Manage the beans in an XML-based container

Implementing the Spring container using XML

Let us begin with the creation of the Spring Web Project using the Maven plugin of our STS Eclipse 8.3. This web project will implement our first Spring 5.0 container using the XML-based technique. This is the most conventional but robust way of creating the Spring container. The container is where the objects are created, managed, wired together with their dependencies, and monitored from their initialization up to their destruction. This recipe will mainly highlight how to create an XML-based Spring container.

Getting ready

Create a Maven project ready for development using STS Eclipse 8.3. Be sure to have installed the correct JRE. Let us name the project ch02-xml.

How to do it…

After creating the project, certain Maven errors will be encountered. Fix the Maven issues of our ch02-xml project in order to use the XML-based Spring 5.0 container by performing the following steps:

1. Open pom.xml of the project and add the following properties, which contain the Spring build version and the Servlet container to utilize:

```xml
<properties>
  <spring.version>5.0.0.BUILD-SNAPSHOT</spring.version>
  <servlet.api.version>3.1.0</servlet.api.version>
</properties>
```

2. Add the following Spring 5 dependencies inside pom.xml. These dependencies are essential in providing us with the interfaces and classes to build our Spring container:

```xml
<dependencies>
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>${spring.version}</version>
  </dependency>
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-core</artifactId>
    <version>${spring.version}</version>
  </dependency>
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-beans</artifactId>
    <version>${spring.version}</version>
  </dependency>
</dependencies>
```

3. It is required to add the following repositories, from which the Spring 5.0 dependencies in step 2 will be downloaded:

```xml
<repositories>
  <repository>
    <id>spring-snapshots</id>
    <name>Spring Snapshots</name>
    <url>https://repo.spring.io/libs-snapshot</url>
    <snapshots>
      <enabled>true</enabled>
    </snapshots>
  </repository>
</repositories>
```

4. Then add the Maven plugin for deployment, but be sure to recognize web.xml as the deployment descriptor. This can be done by enabling <failOnMissingWebXml> or just deleting the <configuration> tag, as follows:

```xml
<plugin>
  <artifactId>maven-war-plugin</artifactId>
  <version>2.3</version>
</plugin>
```

5. Follow the Tomcat Maven plugin for deployment, as explained in Chapter 1.
6. After the Maven configuration details, check if there is a WEB-INF folder inside src/main/webapp. If there is none, create one. This is mandatory for this project, since we will be using a deployment descriptor (web.xml).
7. Inside the WEB-INF folder, create a deployment descriptor or drop a web.xml template into the src/main/webapp/WEB-INF directory.
8. Then, create an XML-based Spring container named ch02-beans.xml inside the ch02-xml/src/main/java/ directory.
9. The configuration file must contain the following namespaces and tags (the xmlns declarations follow the standard Spring beans schema):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context.xsd">
</beans>
```

You can generate this file using the STS Eclipse wizard (Ctrl-N), under the Spring | Spring Bean Configuration File option.

10. Save all the files. Clean and build the Maven project. Do not deploy yet, because this is just a standalone project at the moment.

How it works…

This project just imported three major Spring 5.0 libraries, namely Spring-Core, Spring-Beans, and Spring-Context, because the major classes and interfaces for creating the container are found in these libraries. This shows that Spring, unlike other frameworks, does not need an entire load of libraries just to set up the initial platform. Spring may be perceived as a huge enterprise framework nowadays, but internally it is still lightweight.

The basic container that manages objects in Spring is provided by the org.springframework.beans.factory.BeanFactory interface, found in the Spring-Beans module. Once additional features are needed, such as message resource handling, AOP capabilities, application-specific contexts, and listener implementation, the sub-interface of BeanFactory, org.springframework.context.ApplicationContext, is used instead. The ApplicationContext interface, found in the Spring-Context module, provides an enterprise-specific container for all applications, because it encompasses a larger scope of Spring components than BeanFactory.

The container created, ch02-beans.xml, an ApplicationContext, is an XML-based configuration that contains XSD schemas from the three main libraries imported. These schemas have tag libraries and bean properties which are essential in managing the whole framework. But beware of runtime errors once libraries are removed from the dependencies, because using these tags is equivalent to using the libraries themselves.

Implementing the Spring container using JavaConfig

Another option for implementing the Spring 5.0 container is Spring JavaConfig. This technique uses pure Java classes to configure the framework's container. It eliminates the use of bulky and tedious XML metadata and provides a type-safe and refactoring-safe approach to configuring entities or collections of objects in the container. This recipe will showcase how to create the container using JavaConfig in a web.xml-less approach.

Getting ready

Create another Maven project, and name it ch02-jc. This STS Eclipse project will use a Java class approach, including for its deployment descriptor.

How to do it…

1. To get rid of the usual Maven bugs, immediately open the pom.xml of ch02-jc and add <properties>, <dependencies>, and <repositories> equivalent to those added in the Implementing the Spring container using XML recipe.
2. Next, get rid of web.xml. Since the Servlet 3.0 specification was implemented, servlet containers can support projects without web.xml. This is done by implementing the org.springframework.web.WebApplicationInitializer interface to programmatically configure the ServletContext.
3. Create a SpringWebinitializer class and override its onStartup() method, without any implementation yet:

```java
public class SpringWebinitializer implements WebApplicationInitializer {

    @Override
    public void onStartup(ServletContext container) throws ServletException {
    }
}
```

4. The lines in step 3 will generate some errors until you add the following Maven dependency:

```xml
<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-web</artifactId>
  <version>${spring.version}</version>
</dependency>
```

5. In pom.xml, disable <failOnMissingWebXml>.
6. After the Maven details, create a class named BeanConfig, the ApplicationContext definition, bearing the @Configuration annotation at the top. The class must be inside the org.packt.starter.ioc.context package and must be an empty class at the moment:

```java
@Configuration
public class BeanConfig {

}
```

7. Save all the files, and clean and build the Maven project.

How it works…

The Maven project ch02-jc makes use of both JavaConfig and the ServletContainerInitializer mechanism, meaning there is no XML configuration, from the servlet layer up to the Spring 5.0 container. The BeanConfig class is the ApplicationContext of the project; its @Configuration annotation indicates that the class is used by JavaConfig as a source of bean definitions. This is handy compared to creating an XML-based configuration with lots of metadata.

On the other hand, ch02-jc implements org.springframework.web.WebApplicationInitializer, which is handled by org.springframework.web.SpringServletContainerInitializer, the framework's implementation of the servlet's ServletContainerInitializer. During startup, SpringServletContainerInitializer invokes the onStartup(ServletContext) method of each WebApplicationInitializer, which performs the programmatic registration of filters, servlets, and listeners on the ServletContext. Eventually, the servlet container acknowledges the status reported by SpringServletContainerInitializer, thus eliminating the use of web.xml.

On Maven's side, the plugin for deployment must be notified that the project will not use web.xml. This is done by setting <failOnMissingWebXml> to false inside its <configuration> tag.
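For orientation, here is a sketch of what onStartup() typically ends up containing in a web.xml-less setup once BeanConfig exists. The recipe deliberately leaves the method empty at this stage, so treat this as an illustration of the mechanism rather than the book's final code; it assumes the plain spring-web bootstrap via ContextLoaderListener:

```java
import javax.servlet.ServletContext;
import javax.servlet.ServletException;

import org.springframework.web.WebApplicationInitializer;
import org.springframework.web.context.ContextLoaderListener;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;

public class SpringWebinitializer implements WebApplicationInitializer {

    @Override
    public void onStartup(ServletContext container) throws ServletException {
        // Build an ApplicationContext from the @Configuration class
        // instead of an XML definition file.
        AnnotationConfigWebApplicationContext context =
                new AnnotationConfigWebApplicationContext();
        context.register(BeanConfig.class);

        // Bootstrap the Spring container when the servlet context starts,
        // replacing the <listener> entry that web.xml would normally hold.
        container.addListener(new ContextLoaderListener(context));
    }
}
```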
Managing the beans in an XML-based container

Frameworks become popular because of the principles behind their architecture. Each framework is built from different design patterns that manage the creation and behavior of the objects it manages. This recipe will detail how Spring 5.0 manages the objects of an application and how it shares a set of methods and functions across the platform.

Getting ready

The two Maven projects previously created will be utilized to illustrate how Spring 5.0 loads objects into heap memory. We will also be utilizing the ApplicationContext rather than the BeanFactory container, in preparation for the next recipes, which involve more Spring components.

How to do it…

With our ch02-xml project, let us demonstrate how Spring loads objects using the XML-based ApplicationContext container:

1. Create a package layer, org.packt.starter.ioc.model, for our model classes. Our model classes will be typical Plain Old Java Objects (POJOs), which the Spring 5.0 architecture is known for.
2. Inside the newly created package, create the classes Employee and Department, which contain the following blueprints:

```java
public class Employee {

    private String firstName;
    private String lastName;
    private Date birthdate;
    private Integer age;
    private Double salary;
    private String position;
    private Department dept;

    public Employee() {
        System.out.println("an employee is created.");
    }

    public Employee(String firstName, String lastName, Date birthdate,
            Integer age, Double salary, String position, Department dept) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.birthdate = birthdate;
        this.age = age;
        this.salary = salary;
        this.position = position;
        this.dept = dept;
        System.out.println("an employee is created.");
    }

    // getters and setters
}
```

```java
public class Department {

    private Integer deptNo;
    private String deptName;

    public Department() {
        System.out.println("a department is created.");
    }

    // getters and setters
}
```

3. Afterwards, open the ApplicationContext definition, ch02-beans.xml. Register our first set of Employee and Department objects using the <bean> tag, as follows:

```xml
<bean id="empRec1" class="org.packt.starter.ioc.model.Employee" />
<bean id="dept1" class="org.packt.starter.ioc.model.Department" />
```

4. The beans in step 3 contain private instance variables with zero and null default values. To update them, our classes have mutators, or setter methods, which can be used to avoid the NullPointerException that always happens when we immediately use empty objects. In Spring, calling these setters is tantamount to injecting data into the <bean>, similar to how the following objects are created:

```xml
<bean id="empRec2" class="org.packt.starter.ioc.model.Employee">
  <property name="firstName"><value>Juan</value></property>
  <property name="lastName"><value>Luna</value></property>
  <property name="age"><value>70</value></property>
  <property name="birthdate"><value>October 28, 1945</value></property>
  <property name="position"><value>historian</value></property>
  <property name="salary"><value>150000</value></property>
  <property name="dept"><ref bean="dept2"/></property>
</bean>
<bean id="dept2" class="org.packt.starter.ioc.model.Department">
  <property name="deptNo"><value>13456</value></property>
  <property name="deptName"><value>History Department</value></property>
</bean>
```

A <property> tag is equivalent to a setter definition accepting an actual value or an object reference. The name attribute defines the name of the setter, minus the set prefix, in camel-case notation. The value attribute or the <value> tag both pertain to supported Spring-type values (for example, int, double, float, Boolean, String). The ref attribute or the <ref> tag provides a reference to another loaded <bean> in the container.

5. Another way of writing the bean object empRec2 is through the use of the ref and value attributes, such as the following:

```xml
<bean id="empRec3" class="org.packt.starter.ioc.model.Employee">
  <property name="firstName" value="Jose"/>
  <property name="lastName" value="Rizal"/>
  <property name="age" value="101"/>
  <property name="birthdate" value="June 19, 1950"/>
  <property name="position" value="scriber"/>
  <property name="salary" value="90000"/>
  <property name="dept" ref="dept3"/>
</bean>
<bean id="dept3" class="org.packt.starter.ioc.model.Department">
  <property name="deptNo" value="56748"/>
  <property name="deptName" value="Communication Department"/>
</bean>
```

6. Another way of updating the private instance variables of the model objects is to make use of the constructors.
Actual Spring data and object references can also be inserted into the constructor through the metadata:

<bean id="empRec5" class="org.packt.starter.ioc.model.Employee">
  <constructor-arg><value>Poly</value></constructor-arg>
  <constructor-arg><value>Mabini</value></constructor-arg>
  <constructor-arg><value>August 10, 1948</value></constructor-arg>
  <constructor-arg><value>67</value></constructor-arg>
  <constructor-arg><value>45000</value></constructor-arg>
  <constructor-arg><value>Linguist</value></constructor-arg>
  <constructor-arg><ref bean="dept3"/></constructor-arg>
</bean>

After all the modifications, save ch02-beans.xml. Create a TestBeans class inside the src/test/java directory. This class will load the XML configuration resource into the ApplicationContext container through org.springframework.context.support.ClassPathXmlApplicationContext, and fetch all the objects created, through its getBean() method.

public class TestBeans {

  public static void main(String args[]){
    ApplicationContext context = new ClassPathXmlApplicationContext("ch02-beans.xml");
    System.out.println("application context loaded.");

    System.out.println("****The empRec1 bean****");
    Employee empRec1 = (Employee) context.getBean("empRec1");

    System.out.println("****The empRec2 bean****");
    Employee empRec2 = (Employee) context.getBean("empRec2");
    Department dept2 = empRec2.getDept();
    System.out.println("First Name: " + empRec2.getFirstName());
    System.out.println("Last Name: " + empRec2.getLastName());
    System.out.println("Birthdate: " + empRec2.getBirthdate());
    System.out.println("Salary: " + empRec2.getSalary());
    System.out.println("Dept. Name: " + dept2.getDeptName());

    System.out.println("****The empRec5 bean****");
    Employee empRec5 = context.getBean("empRec5", Employee.class);
    Department dept3 = empRec5.getDept();
    System.out.println("First Name: " + empRec5.getFirstName());
    System.out.println("Last Name: " + empRec5.getLastName());
    System.out.println("Dept. Name: " + dept3.getDeptName());
  }
}

The expected output after running the main() thread will be:

an employee is created.
an employee is created.
a department is created.
an employee is created.
a department is created.
an employee is created.
a department is created.
application context loaded.
****The empRec1 bean****
****The empRec2 bean****
First Name: Juan
Last Name: Luna
Birthdate: Sun Oct 28 00:00:00 CST 1945
Salary: 150000.0
Dept. Name: History Department
****The empRec5 bean****
First Name: Poly
Last Name: Mabini
Dept. Name: Communication Department

How it works…

The principle behind creating <bean> objects in the container is called the Inversion of Control design pattern. In order to use the objects, their dependencies, and their behavior, these must be placed within the framework itself. After registering them in the container, Spring just takes care of their instantiation and their availability to other objects. Developers can just "fetch" them when they want to include them in their software modules, as shown in the following diagram:

The IoC design pattern can be considered synonymous with the Hollywood Principle ("Don't call us, we'll call you!"), which is a popular line in most object-oriented programming languages. The framework does not care whether the developer needs the objects or not, because the lifespan of the objects lies in the framework's rules.
In the case of setting new values or updating the values of an object's private variables, IoC has an implementation that can be used for "injecting" new actual values or object references into the bean; it is popularly known as the Dependency Injection (DI) design pattern. This principle exposes all the bean's properties to the public through its setter methods or constructors. Injecting Spring values and object references through the setters using the <property> tag, without knowing their implementation, is called setter injection (sometimes referred to as method injection). On the other hand, if we create the bean with initialized values injected into its constructor through <constructor-arg>, it is known as constructor injection.

To create the ApplicationContext container, we need to instantiate ClassPathXmlApplicationContext or FileSystemXmlApplicationContext, depending on the location of the XML definition file. Since the file is found in ch02-xml/src/main/java/, the ClassPathXmlApplicationContext implementation is the best option. This proves that the ApplicationContext is an object too, bearing all that XML metadata. It has several overloaded getBean() methods used to fetch all the objects loaded with it.

Summary

In this article we went over how to create an XML-based Spring container, how to create the container using JavaConfig in a web.xml-less approach, and how Spring 5.0 manages the objects of an application and shares a set of methods and functions across the platform.
Microsoft announces the general availability of Live Share and brings it to Visual Studio 2019

Bhagyashree R
03 Apr 2019
2 min read
Microsoft yesterday announced that Live Share is now generally available and comes included in Visual Studio 2019. This release comes with a lot of updates based on the feedback the team has received since the public preview of Live Share started, including a read-only mode, support for C++ and Python, and more.

What is Live Share?

Microsoft first introduced Live Share at Connect 2017 and launched its public preview in May 2018. It is a tool that enables you to collaborate with your team on the same codebase without needing to synchronize code or to configure the same development tools, settings, or environment. You can edit and debug your code with others in real time, regardless of what programming languages you are using or the type of app you are building. It allows you to do a bunch of different things, like instantly sharing your project with a teammate, and sharing debugging sessions, terminal instances, localhost web apps, voice calls, and more.

With Live Share, you do not have to leave the comfort of your favorite tools. You can take advantage of collaboration while retaining your personal editor preferences. It also provides each developer with their own cursor, to enable seamless transitions between following one another.

What's new in Live Share?

This release includes features like read-only mode, support for more languages like C++ and Python, and the ability for guests to start debugging sessions. Now, you can use Live Share while pair programming, conducting code reviews, giving lectures and presenting to students and colleagues, or even mob programming during hackathons.

This release also comes with support for a few third-party extensions to improve the overall experience when working with Live Share. The two extensions are OzCode and CodeStream. OzCode offers a suite of visualizations, like datatips to see how items are passed through a LINQ query, and a heads-up display to show how a set of boolean expressions evaluates. With CodeStream, you can create discussions about your codebase, which serve as an integrated chat feature within a Live Share session.

To read more about the updates to Live Share, check out the official announcement.

Microsoft brings PostgreSQL extension and SQL Notebooks functionality to Azure Data Studio
Microsoft open-sources Project Zipline, its data compression algorithm and hardware for the cloud
Microsoft announces Game stack with Xbox Live integration to Android and iOS
What is domain driven design?

Packt Editorial Staff
03 Apr 2018
18 min read
Domain driven design exists because all software exists for a purpose. It does something. For example, you can't provide a software solution for a financial system such as online stock trading if you don't understand the stock exchanges and their functioning. Having domain knowledge is essential to solving problems with software. Domain driven design is simply designing software with the specific domain - whether that's finance, medicine, law, eCommerce - in mind. This has been taken from Mastering Microservices with Java 9 - Second Edition. Central to Domain Driven Design is the concept of a model. A model is an abstraction, or a blueprint, of the domain.

Domain driven design is a collaborative activity

Designing this model is not rocket science, but it does take a lot of effort, refining, and input from domain experts. It is the collective job of software designers, domain experts, and developers. They organize information, divide it into smaller parts, group them logically, and create modules. Each module can be taken up individually, and can be divided using a similar approach. This process can be followed until we reach the unit level, or until we cannot divide it any further. A complex project may have more such iterations; similarly, a simple project could have just a single iteration. Once a model is defined and well documented, it can move onto the next stage - code design. So, here we have a software design—a domain model and code design, and code implementation of the domain model. The domain model provides a high-level view of the architecture of a solution (software/application), and the code implementation gives the domain model a life, as a working model.

Domain Driven Design makes design and development work together. It provides the ability to develop software continuously, while keeping the design up to date based on feedback received from the development. It solves one of the limitations of Agile and Waterfall methodologies, making software maintainable, including design and code, as well as keeping the application minimally viable. It gives developers the right platform to understand the domain, and provides the opportunity to share early feedback on the domain model implementation. It removes the bottleneck that appears in later stages when stakeholders wait for deliverables.

The fundamental components of Domain Driven Design

To understand domain driven design, you can break it down into 3 fundamental concepts:

Ubiquitous language and unified modeling language (UML)
Multilayer architecture
Artifacts (components)

Ubiquitous language

Ubiquitous language is a common language to communicate within a project. Because designing a model is a collaborative effort of software designers, domain experts, and developers, it requires a common language to communicate with. It removes misunderstandings and misinterpretations. Communication gaps so often lead to bad software - ubiquitous language minimizes these gaps. It does, however, need to be used everywhere on a project. Unified Modeling Language (UML) is widely used and very popular when creating models. It also has a few limitations; for example, when you have thousands of classes drawn on paper, it's difficult to represent class relationships and simultaneously understand their abstraction while taking meaning from it. Also, UML diagrams do not represent the concepts of a model or what objects are supposed to do.
Therefore, UML should always be used with other documents, code, or any other references for effective communication.

Multilayered architecture

Multilayered architecture is a common solution for Domain Driven Design. It contains four layers:

Presentation layer (UI)
Application layer - responsible for application logic. It maintains and coordinates the overall flow of the product/service. It does not contain business logic or UI. It may hold the state of application objects, like tasks in progress.
Domain layer - contains the domain information and business logic. It holds the state of the business objects.
Infrastructure layer - provides support to all the other layers and is responsible for communication between them.

To understand the interaction of the different layers, take the example of table booking at a restaurant. The end user places a request for a table booking using the UI. The UI passes the request to the application layer. The application layer fetches the domain objects, such as the restaurant, the table, a date, and so on, from the domain layer. The domain layer fetches these existing persisted objects from the infrastructure, and invokes relevant methods to make the booking and persist them back to the infrastructure layer. Once the domain objects are persisted, the application layer shows the booking confirmation to the end user.

Artifacts used in Domain Driven Design

There are seven different artifacts used in Domain Driven Design to express, create, and retrieve domain models:

Entities
Value objects
Services
Aggregates
Repository
Factory
Module

Entities

Entities are certain types of objects that are identifiable and remain the same throughout the states of the product/service. These objects are not identified by their attributes, but by their identity and thread of continuity. These types of objects are known as entities. It sounds pretty simple, but it carries complexity. You need to understand how we can define the entities. Let's take an example of a table booking system, where we have a restaurant class with attributes such as restaurant name, address, phone number, establishment date, and so on. We can take two instances of the restaurant class that are not identifiable using the restaurant name, as there could be other restaurants with the same name. Similarly, if we go by any other single attribute, we will not find any attribute that can singularly identify a unique restaurant. If two restaurants have all the same attribute values, they are therefore the same and are interchangeable with each other. Still, they are not the same entities, as both have different references (memory addresses).

Conversely, let's take a class of U.S. citizens. Every U.S. citizen has his or her own social security number. This number is not only unique, but remains unchanged throughout the life of the citizen and assures continuity. This citizen object would exist in the memory, would be serialized, and would be removed from the memory and stored in the database. It even exists after the person is deceased. It will be kept in the system for as long as the system exists. A citizen's social security number remains the same irrespective of its representation.

Therefore, creating entities in a product means creating an identity. So, now give an identity to any restaurant in the previous example: either use a combination of attributes such as restaurant name, establishment date, and street, or add an identifier such as restaurant_id to identify it.
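To make the identity rule concrete, here is a minimal Java sketch, not taken from the book; the Restaurant fields are hypothetical:

import java.util.Objects;

public class Restaurant {

    private final Long restaurantId;   // the identity: assigned once, never changed
    private String name;               // ordinary attributes may change over time
    private String street;

    public Restaurant(Long restaurantId, String name, String street) {
        this.restaurantId = restaurantId;
        this.name = name;
        this.street = street;
    }

    // Two Restaurant objects are the same entity if and only if their
    // identities match, regardless of how the other attributes compare
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Restaurant)) return false;
        return Objects.equals(restaurantId, ((Restaurant) o).restaurantId);
    }

    @Override
    public int hashCode() {
        return Objects.hash(restaurantId);
    }
}

With this contract, two instances carrying the same restaurant_id are the same entity even when their other attribute values have diverged.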
The basic rule is that no two entities can have the same identifier. Therefore, when we introduce an identifier for an entity, we need to be sure of it. There are different ways to create a unique identity for objects, described as follows:

Using the primary key in a table.
Using an automatically generated ID by a domain module. A domain program generates the identifier and assigns it to objects that are being persisted among different layers.
A few real-life objects carry user-defined identifiers themselves. For example, each country has its own country codes for dialing ISD calls.
Composite key. This is a combination of attributes that can also be used for creating an identifier, as explained for the preceding restaurant object.

Value objects

Value objects (VOs) simplify the design. In contrast to entities, value objects have only attributes and no conceptual identity. A best practice is to keep value objects as immutable objects. If possible, you should even keep entity objects immutable too. You might want to keep all objects as entities, but you're likely to run into problems if you do this; there has to be one instance for each object. Let's say you are creating customers as entity objects. Each customer object would represent a restaurant guest; it cannot be used for booking orders for other guests. This may create millions of customer entity objects in memory if millions of customers are using the system. Not only do millions of uniquely identifiable objects exist in the system, but each object is being tracked. Tracking as well as creating an identity is complex. A highly credible system is required to create and track these objects, which is not only very complex, but also resource heavy. It may result in system performance degradation. Therefore, it is important to use value objects instead of entities. The reasons are explained in the next few paragraphs.

Applications don't always need trackable and identifiable customer objects. There are cases when you just need to have some or all attributes of the domain element. These are the cases when value objects can be used by the application. It makes things simple and improves the performance. Value objects can easily be created and destroyed, owing to the absence of identity. This simplifies the design—it makes value objects available for garbage collection if no other object has referenced them.

Value objects should be designed and coded as immutable. Once they are created, they should never be modified during their life-cycle. If you need a different value of the VO, or any of its objects, then simply create a new value object, but don't modify the original value object. Here, immutability carries all the significance from object-oriented programming (OOP). A value object can be shared and used without impacting its integrity if, and only if, it is immutable (see the sketch after this section).

Services

While creating the domain model, you may come across situations where behavior may not be related to any object. These behaviors can be accommodated in service objects. Service objects are part of the domain layer and do not have any internal state. The sole purpose of service objects is to provide behavior to the domain that does not belong to a single entity or value object. Ubiquitous language helps you to identify different objects, identities, or value objects with different attributes and behaviors during the process of domain driven design and domain modelling.
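Picking up the immutability rules from the value objects discussion above, here is a minimal Java sketch; it is not from the book, and the Address fields are hypothetical:

public final class Address {

    private final String street;
    private final String city;

    public Address(String street, String city) {
        this.street = street;
        this.city = city;
    }

    public String getStreet() { return street; }
    public String getCity() { return city; }

    // "Changing" a value object means creating a new one; the original is never mutated
    public Address withStreet(String newStreet) {
        return new Address(newStreet, city);
    }
}

Because the object is immutable, it can be shared freely between objects and becomes eligible for garbage collection as soon as nothing references it.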
During the course of creating the domain model, you may find different behaviors or methods that do not belong to any specific object. Such behaviors are important, and so cannot be neglected. Neither can you add them to entities or value objects; it would spoil the object to add behavior that does not belong to it. Keep in mind that such behavior may impact various objects. Object-oriented programming makes it possible to gather this behavior in a dedicated object; this is known as a service.

Services are common in technical frameworks. They are also used in domain layers in domain driven design. A service object does not have any internal state; its only purpose is to provide a behavior to the domain. Service objects provide behaviors that cannot be related to specific entities or value objects. Service objects may provide one or more related behaviors to one or more entities or value objects. It is a practice to define the services explicitly in the domain model.

While creating the services, you need to tick all of the following points:

Service objects' behavior operates on entities and value objects, but it does not belong to entities or value objects
Service objects' behavior state is not maintained, and hence, they are stateless
Services are part of the domain model

Services may also exist in other layers. It is very important to keep domain-layer services isolated. It removes the complexities and keeps the design decoupled.

Let's take an example where a restaurant owner wants to see the report of his monthly table bookings. In this case, he will log in as an admin and click the Display Report button after providing the required input fields, such as duration. The application layer passes the request, with parameters such as the report ID, to the domain layer that owns the report and template objects. Reports get created using the template, and data is fetched from either the database or other sources. Here, a template needs to be fetched from the database or another source to generate the report based on the ID. This operation does not belong to either the report object or the template object. Therefore, a service object is used that performs this operation to retrieve the required template from the database.

Aggregates

The aggregate domain pattern is related to the object's life cycle. It defines ownership and boundaries, which is crucial in Domain Driven Design.

When you reserve a table at your favorite restaurant online using an application, you don't need to worry about the internal system and process that takes place to book your reservation, including searching for available restaurants, then for available tables on the given date, time, and so on and so forth. Therefore, you can say that a reservation application is an aggregate of several other objects, and works as a root for all the other objects for a table reservation system. This root should be an entity that binds collections of objects together. It is also called the aggregate root. This root object does not pass any reference of inside objects to the external world, and protects the changes performed within internal objects.

We need to understand why aggregates are required. A domain model can contain large numbers of domain objects. The bigger the application's functionality and size, and the more complex its design, the greater the number of objects present. A relationship exists between these objects.
Some may have a many-to-many relationship, a few may have a one-to-many relationship, and others may have a one-to-one relationship. These relationships are enforced by the model implementation in the code, or in the database, which ensures that these relationships among the objects are kept intact. Relationships are not just unidirectional; they can also be bidirectional. They can also increase in complexity.

The designer's job is to simplify these relationships in the model. Some relationships may exist in the real domain, but may not be required in the domain model. Designers need to ensure that such relationships do not exist in the domain model. Similarly, multiplicity can be reduced by constraints. One constraint may do the job where many objects satisfy the relationship. It is also possible that a bidirectional relationship could be converted into a unidirectional relationship.

No matter how much simplification you put in, you may still end up with relationships in the model. These relationships need to be maintained in the code. When one object is removed, the code should remove all the references to this object from other places. For example, a record removal from one table needs to be addressed wherever it has references in the form of foreign keys and such, to keep the data consistent and maintain its integrity. Also, invariants (rules) need to be enforced and maintained whenever data changes.

Relationships, constraints, and invariants bring a complexity that requires efficient handling in code. We find the solution by using an aggregate, represented by a single entity known as the root, which is associated with the group of objects that maintains consistency with regard to data changes.

This root is the only object that is accessible from outside, so this root element works as a boundary gate that separates the internal objects from the external world. Roots can refer to one or more inside objects, and these inside objects can have references to other inside objects that may or may not have relationships with the root. However, outside objects can only refer to the root, and not to any inside objects.

An aggregate ensures data integrity and enforces the invariants. Outside objects cannot make any change to inside objects; they can only change the root. However, they can use the root to make a change inside the object by calling exposed operations. The root should pass the values of inside objects to outside objects if required.

If an aggregate object is stored in the database, then the query should only return the aggregate object. Traversal associations should be used to return the object when it is internally linked to the aggregate root. These internal objects may also have references to other aggregates. An aggregate root entity holds its global identity, while the entities inside it hold only local identities.

A simple example of an aggregate in the table booking system is the customer. Customers can be exposed to external objects, and their root object contains their internal object address and contact information. When requested, the value object of internal objects, such as the address, can be passed to external objects.

Repository

In a domain model, at a given point in time, many domain objects may exist. Each object may have its own life-cycle, from the creation of objects to their removal or persistence. Whenever any domain operation needs a domain object, it should retrieve the reference of the requested object efficiently.
It would be very difficult if you didn't maintain all of the available domain objects in a central object. A central object carries the references of all the objects, and is responsible for returning the requested object reference. This central object is known as the repository.

The repository is the point that interacts with infrastructure such as the database or filesystem. A repository object is the part of the domain model that interacts with storage, such as the database or external sources, to retrieve the persisted objects. When a request is received by the repository for an object's reference, it returns the existing object's reference. If the requested object does not exist in the repository, then it retrieves the object from storage. For example, if you need a customer, you would query the repository object to provide the customer with ID 31. The repository would provide the requested customer object if it is already available in the repository, and if not, it would query the persisted stores such as the database, fetch it, and provide its reference.

The main advantage of using the repository is having a consistent way to retrieve objects, where the requestor does not need to interact directly with the storage such as the database. A repository may query objects from various storage types, such as one or more databases, filesystems, or factory repositories, and so on. In such cases, a repository may have strategies that point to different sources for different object types.

As you can see in the repository object flow diagram, the repository interacts with the infrastructure layer, and its interface is part of the domain layer. The requestor may belong to a domain layer, or an application layer. The repository helps the system to manage the life cycle of domain objects.

Factory

A factory is required when a simple constructor is not enough to create the object. It helps to create complex objects, or an aggregate that involves the creation of other related objects. A factory is also a part of the life cycle of domain objects, as it is responsible for creating them. Factories and repositories are in some way related to each other, as both refer to domain objects. The factory refers to newly created objects, whereas the repository returns already existing objects, either from the memory or from external storage.

Let's see how control flows, using a user creation process in an application. Let's say that a user signs up with a username user1. This user creation first interacts with the factory, which creates the name user1 and then caches it in the domain using the repository, which also stores it in the storage for persistence. When the same user logs in again, the call moves to the repository for a reference. This uses the storage to load the reference and pass it to the requestor. The requestor may then use this user1 object to book a table in a specified restaurant, at a specified time. These values are passed as parameters, and a table booking record is created in storage using the repository.

The factory may use one of the object-oriented programming patterns, such as the factory or abstract factory pattern, for object creation.

Modules

Modules are the best way to separate related business objects. They are best suited to large projects where the number of domain objects is large. For the end user, it makes sense to divide the domain model into modules and set the relationships between these modules.
Once you understand the modules and their relationships, you start to see the bigger picture of the domain model, and thus it's easier to drill down further and understand the model. Modules also help you to write code that is highly cohesive and loosely coupled. Ubiquitous language can be used to name these modules. For the table booking system, we could have different modules, such as user-management, restaurants and tables, analytics and reports, reviews, and so on.

This introduction to domain driven design should give you a strong foundation for using it when you build software. Its principles are useful - in particular, making sure you collaborate and use the same language as different stakeholders is one of domain driven design's most valuable contributions to the way we approach software development.
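To round off the repository discussion with something concrete, here is a minimal Java sketch, not from the original text; Customer is a stand-in entity and every name here is hypothetical:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class Customer {
    private final Integer id;
    Customer(Integer id) { this.id = id; }
    Integer getId() { return id; }
}

public class CustomerRepository {

    // Cache of already-loaded domain objects, keyed by identity
    private final Map<Integer, Customer> cache = new ConcurrentHashMap<>();

    // Return the existing reference if we hold one; otherwise fall back to storage
    public Customer findById(Integer id) {
        return cache.computeIfAbsent(id, this::loadFromStorage);
    }

    // Newly created objects (for example, from a factory) get registered and persisted here
    public void save(Customer customer) {
        cache.put(customer.getId(), customer);
        // ... write-through to the database would go here ...
    }

    private Customer loadFromStorage(Integer id) {
        // ... query the database or another persisted source ...
        throw new UnsupportedOperationException("storage access not sketched");
    }
}

The point of the sketch is the lookup order: the requestor always talks to the repository, which hands back an existing reference when it has one and only touches the infrastructure layer when it does not.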
Plotting Data with Sage

Packt
05 May 2011
14 min read
Sage Beginner's Guide
Unlock the full potential of Sage for simplifying and automating mathematical computing

Confusion alert: Sage plots and matplotlib

The 2D plotting capabilities of Sage are built upon a Python plotting package called matplotlib. The most widely used features of matplotlib are accessible through Sage functions. You can also import the matplotlib package into Sage, and use all of its features directly. This is very powerful, but it's also confusing, because there's more than one way to do the same thing. To further add to the confusion, matplotlib has two interfaces: the command-oriented Pyplot interface and an object-oriented interface. The examples in this chapter will attempt to clarify which interface is being used.

Plotting in two dimensions

Two-dimensional plots are probably the most important tool for visually presenting information in math, science, and engineering. Sage has a wide variety of tools for making many types of 2D plots.

Plotting symbolic expressions with Sage

We will start by exploring the plotting functions that are built in to Sage. They are generally less flexible than using matplotlib directly, but also tend to be easier to use.

Time for action – plotting symbolic expressions

Let's plot some simple functions. Enter the following code:

p1 = plot(sin, (-2*pi, 2*pi), thickness=2.0, rgbcolor=(0.5, 1, 0), legend_label='sin(x)')
p2 = plot(cos, (-2*pi, 2*pi), thickness=3.0, color='purple', alpha=0.5, legend_label='cos(x)')
plt = p1 + p2
plt.axes_labels(['x', 'f(x)'])
show(plt)

If you run the code from the interactive shell, the plot will open in a separate window. If you run it from the notebook interface, the plot will appear below the input cell. In either case, the result should look like this:

What just happened?

This example demonstrated the most basic type of plotting in Sage. The plot function requires the following arguments:

graphics_object = plot(callable symbolic expression, (independent_var, ind_var_min, ind_var_max))

The first argument is a callable symbolic expression, and the second argument is a tuple consisting of the independent variable, the lower limit of the domain, and the upper limit. If there is no ambiguity, you do not need to specify the independent variable. Sage automatically selects the right number of points to make a nice curve in the specified domain. The plot function returns a graphics object. To combine two graphics objects in the same image, use the + operator: plt = p1 + p2. Graphics objects have additional methods for modifying the final image. In this case, we used the axes_labels method to label the x and y axes. Finally, the show function was used to finish the calculation and display the image. The plot function accepts optional arguments that can be used to customize the appearance and format of the plot.
To see a list of all the options and their default values, type:

sage: plot.options
{'fillalpha': 0.5, 'detect_poles': False, 'plot_points': 200, 'thickness': 1, 'alpha': 1, 'adaptive_tolerance': 0.01, 'fillcolor': 'automatic', 'adaptive_recursion': 5, 'exclude': None, 'legend_label': None, 'rgbcolor': (0, 0, 1), 'fill': False}

Here is a summary of the options for customizing the appearance of a plot:

alpha: Transparency of the line (0=transparent, 1=opaque)
fill: True to fill the area below the line
fillalpha: Transparency of the filled-in area (0=transparent, 1=opaque)
fillcolor: Color of the filled-in area
rgbcolor: Color of the line

Sage uses an algorithm to determine the best number of points to use for the plot, and how to distribute them on the x axis. The algorithm uses recursion to add more points to resolve regions where the function changes rapidly. Here are the options that control how the plot is generated:

adaptive_recursion: Max depth of recursion when resolving areas of the plot where the function changes rapidly
adaptive_tolerance: Tolerance for stopping recursion
detect_poles: Detect points where the function value approaches infinity (see the next example)
exclude: A list or tuple of points to exclude from the plot
plot_points: Number of points to use in the plot

Specifying colors in Sage

There are several ways to specify a color in Sage. For basic colors, you can use a string containing the name of the color, such as red or blue. You can also use a tuple of three floating-point values between 0 and 1.0. The first value is the amount of red, the second is the amount of green, and the third is the amount of blue. For example, the tuple (0.5, 0.0, 0.5) represents a medium purple color.

Some functions "blow up" to plus or minus infinity at a certain point. A simplistic plotting algorithm will have trouble plotting these points, but Sage adapts.

Time for action – plotting a function with a pole

Let's try to plot a simple function that takes on infinite values within the domain of the plot:

pole_plot = plot(1 / (x - 1), (0.8, 1.2), detect_poles='show', marker='.')
print("min y = {0} max y = {1}".format(pole_plot.ymin(), pole_plot.ymax()))
pole_plot.ymax(100.0)
pole_plot.ymin(-100.0)
# Use TeX to make nicer labels
pole_plot.axes_labels([r'$x$', r'$1/(x-1)$'])
pole_plot.show()

The output from this code is as follows:

What just happened?

We did a few things differently compared to the previous example. We defined a callable symbolic expression right in the plot function. We also used the option detect_poles='show' to plot a dashed vertical line at the x value where the function returns infinite values. The option marker='.' tells Sage to use a small dot to mark the individual (x,y) values on the graph. In this case, the dots are so close together that they look like a fat line. We also used the methods ymin and ymax to get and set the minimum and maximum values of the vertical axis. When called without arguments, these methods return the current values. When given an argument, they set the minimum and maximum values of the vertical axis. Finally, we labeled the axes with nicely typeset mathematical expressions. As in the previous example, we used the method axes_labels to set the labels on the x and y axes. However, we did two special things with the label strings:

r'$\frac{1}{(x-1)}$'

The letter r is placed in front of the string, which tells Python that this is a raw string.
When processing a raw string, Python does not interpret backslash characters as commands (such as interpreting \n as a newline). Note that the first and last characters of the string are dollar signs, which tells Sage that the string contains mark-up that needs to be processed before being displayed. The mark-up language is a subset of TeX, which is widely used for typesetting complicated mathematical expressions. Sage performs this processing with a built-in interpreter, so you don't need to have TeX installed to take advantage of typeset labels. It's a good idea to use raw strings to hold TeX markup because TeX uses a lot of backslashes. To learn about the typesetting language, see the matplotlib documentation at: http://matplotlib.sourceforge.net/users/mathtext.html

Time for action – plotting a parametric function

Some functions are defined in terms of a parameter. Sage can easily plot parametric functions:

var('t')
pp = parametric_plot((cos(t), sin(t)), (t, 0, 2*pi), fill=True, fillcolor='blue')
pp.show(aspect_ratio=1, figsize=(3, 3), frame=True)

The output from this code is as follows:

What just happened?

We used two parametric functions to plot a circle. This is a convenient place to demonstrate the fill option, which fills in the space between the function and the horizontal axis. The fillcolor option tells Sage which color to use for the fill, and the color can be specified in the usual ways. We also demonstrated some useful options for the show method (these options also work with the show function). The option aspect_ratio=1 forces the x and y axes to use the same scale. In other words, one unit on the x axis takes up the same number of pixels on the screen as one unit on the y axis. Try changing the aspect ratio to 0.5 and 2.0, and see how the circle looks. The option figsize=(x_size,y_size) specifies the aspect ratio and relative size of the figure. The units for the figure size are relative, and don't correspond to an absolute unit like inches or centimetres. The option frame=True places a frame with tick marks around the outside of the plot.

Time for action – making a polar plot

Some functions are more easily described in terms of angle and radius. The angle is the independent variable, and the radius at that angle is the dependent variable. Polar plots are widely used in electrical engineering to describe the radiation pattern of an antenna. Some antennas are designed to transmit (or receive) electromagnetic radiation in a very narrow beam. The beam shape is known as the radiation pattern. One way to achieve a narrow beam is to use an array of simple dipole antennas, and carefully control the phase of the signal fed to each antenna. In the following example, we will consider seven short dipole antennas set in a straight line:

# A linear broadside array of short vertical dipoles
# located along the z axis with 1/2 wavelength spacing
var('r, theta')
N = 7
normalized_element_pattern = sin(theta)
array_factor = 1 / N * sin(N * pi / 2 * cos(theta)) / sin(pi / 2 * cos(theta))
array_plot = polar_plot(abs(array_factor), (theta, 0, pi), color='red', legend_label='Array')
radiation_plot = polar_plot(abs(normalized_element_pattern * array_factor), (theta, 0, pi), color='blue', legend_label='Radiation')
combined_plot = array_plot + radiation_plot
combined_plot.xmin(-0.25)
combined_plot.xmax(0.25)
combined_plot.set_legend_options(loc=(0.5, 0.3))
show(combined_plot, figsize=(2, 5), aspect_ratio=1)

Execute the code. You should get a plot like this:

What just happened?
We plotted a polar function, and used several of the plotting features that we've already discussed. There are two subtle points worth mentioning. The function array_factor is a function of two variables, N and theta. In this example, N is more like a parameter, while theta is the independent variable we want to use for plotting. We use the syntax (theta, 0, pi) in the plot function to indicate that theta is the independent variable. The second new aspect of this example is that we used the methods xmin and xmax to set the limits of the x axis for the graphics object called combined_plot. We also used the set_legend_options method of the graphics object to adjust the position of the legend to avoid covering up important details of the plot.

Time for action – plotting a vector field

Vector fields are used to represent force fields such as electromagnetic fields, and are used to visualize the solutions of differential equations. Sage has a special plotting function to visualize vector fields.

var('x, y')
a = plot_vector_field((x, y), (x, -3, 3), (y, -3, 3), color='blue')
b = plot_vector_field((y, -x), (x, -3, 3), (y, -3, 3), color='red')
show(a + b, aspect_ratio=1, figsize=(4, 4))

You should get the following image:

What just happened?

The plot_vector_field function uses the following syntax:

plot_vector_field((x_function, y_function), (x, x_min, x_max), (y, y_min, y_max))

The keyword argument color specifies the color of the vectors.

Plotting data in Sage

So far, we've been making graphs of functions. We specify the function and the domain, and Sage automatically chooses the points to make a nice-looking curve. Sometimes, we need to plot discrete data points that represent experimental measurements or simulation results. The following functions are used for plotting defined sets of points.

Time for action – making a scatter plot

Scatter plots are used in science and engineering to look for correlation between two variables. A cloud of points that is roughly circular indicates that the two variables are independent, while a more elliptical arrangement indicates that there may be a relationship between them. In the following example, the x and y coordinates are contrived to make a nice plot. In real life, the x and y coordinates would typically be read in from data files. Enter the following code:

def noisy_line(m, b, x):
    return m * x + b + 0.5 * (random() - 0.5)

slope = 1.0
intercept = -0.5
x_coords = [random() for t in range(50)]
y_coords = [noisy_line(slope, intercept, x) for x in x_coords]
sp = scatter_plot(list(zip(x_coords, y_coords)))
sp += line([(0.0, intercept), (1.0, slope + intercept)], color='red')
sp.show()

The result should look similar to this plot. Note that your results won't match exactly, since the point positions are determined randomly.

What just happened?

We created a list of randomized x coordinates using the built-in random function. This function returns a random number in the range 0 <= x < 1. We defined a function called noisy_line that we then used to create a list of randomized y coordinates with a linear relationship to the x coordinates. We now have a list of x coordinates and a list of y coordinates, but the scatter_plot function needs a list of (x,y) tuples. The zip function combines the two lists into pairs, and wrapping it in list() produces the list of tuples that scatter_plot needs. The scatter_plot function returns a graphics object called sp.
To add a line object to the plot, we use the following syntax:

sp += line([(x1, y1), (x2, y2)], color='red')

The += operator is a way to increment a variable; x += 1 is a shortcut for x = x + 1. Because the + operator also combines graphics objects, this syntax can be used to add a graphics object to an existing graphics object.

Time for action – plotting a list

Sometimes, you need to plot a list of discrete data points. The following example might be found in an introductory digital signal processing (DSP) course. We will use lists to represent digital signals. We sample the analogue function cosine(t) at two different sampling rates, and plot the resulting digital signals.

# Use list_plot to visualize digital signals
# Undersampling and oversampling a cosine signal
sample_times_1 = srange(0, 6*pi, 4*pi/5)
sample_times_2 = srange(0, 6*pi, pi/3)
data1 = [cos(t) for t in sample_times_1]
data2 = [cos(t) for t in sample_times_2]
plot1 = list_plot(list(zip(sample_times_1, data1)), color='blue')
plot1.axes_range(0, 18, -1, 1)
plot1 += text("Undersampled", (9, 1.1), color='blue', fontsize=12)
plot2 = list_plot(list(zip(sample_times_2, data2)), color='red')
plot2.axes_range(0, 18, -1, 1)
plot2 += text("Oversampled", (9, 1.1), color='red', fontsize=12)
g = graphics_array([plot1, plot2], 2, 1)  # 2 rows, 1 column
g.show(gridlines=["minor", False])

The result is as follows:

What just happened?

The function list_plot works a lot like scatter_plot from the previous example, so I won't explain it again. We used the method axes_range(x_min, x_max, y_min, y_max) to set the limits of the x and y axes all at once. Once again, we used the += operator to add a graphics object to an existing object. This time, we added a text annotation instead of a line. The basic syntax for adding text at a given (x,y) position is text('a string', (x,y)). To see the options that text accepts, type the following:

sage: text.options
{'vertical_alignment': 'center', 'fontsize': 10, 'rgbcolor': (0, 0, 1), 'horizontal_alignment': 'center', 'axis_coords': False}

To display the two plots, we introduced a new function called graphics_array, which uses the basic syntax:

graphics_array([plot_1, plot_2, ..., plot_n], num_rows, num_columns)

This function returns another graphics object, and we used the show method to display the plots. We used the keyword argument gridlines=["minor", False] to tell Sage to display vertical lines at each of the minor ticks on the x axis. The first item in the list specifies vertical grid lines, and the second specifies horizontal grid lines. The following options can be used for either element:

"major": Grid lines at major ticks
"minor": Grid lines at major and minor ticks
False: No grid lines

Try playing with these options in the previous example.
Scribus: Importing Images

Packt
06 Apr 2011
13 min read
Scribus 1.3.5: Beginner's Guide
Create optimum page layouts for your documents using the productive tools of Scribus.

Importing and exporting: The concepts

To begin with, remember that there are two kinds of graphics you can add to your layout. You can have photos, generally taken from a digital camera, downloaded, or bought on some website. Photos will generally be stored in JPEG files, but you can also find PNG, TIFF, or many other file formats. The second kind of graphics is vector drawings, such as logos and maps. They are computer-made drawings and are stored as EPS or SVG files. You will certainly need to work with both in most of your documents.

The previous image shows the comparison between a photo on the left-hand side, and the same photo traced in vectors on the right-hand side. In the middle, see how the details of the photo are made up of square pixels that are sometimes difficult to handle to get good printing results.

The first difference is that with photos, you will need to place them in a frame. There are some tips to automatically add the frame, but either way, a frame will be needed. Vector drawings are imported as shapes and can be manipulated in Scribus like any other Scribus object.

The second difference is that when working with photos, you will have to import them within Scribus. The term "import" is precise here. Most text processors don't import but insert images. In the case of "insert", the images are definitely stored in your document. So you can send this document to anyone via e-mail or can store it on any external storage device without caring about whether these images will still be present later. Scribus does it differently: it adds to the frame a reference to the imported file's own storage position. Scribus will look for the file each time it needs it, and if the file is not where it should be, problems may arise.

Basically, the steps you go through while performing a DTP import and while performing a text processor insert are the same, but the global workflows are different because the professional needs they address are different. All communication software, layout programs, website-building software, as well as video editing software, perform the same steps. Once you're used to it, it's really handy: it results in lighter and more adaptable files. However, while teaching Scribus, I have many times received mails from students who have trouble with the images in their documents, just because they didn't take enough care over this little difference. Remembering it will really help you to work peacefully.

DTP versus text processors again

Here we are talking about the default behavior of the software. As text processors can now work with frames, they often import the image as a link. Scribus itself should be able to embed pictures in its next release. Will the difference between these two pieces of software one day disappear?

The next difference is about the color space of the pictures. Cameras use RAW or JPEG (which are RGB documents). Offset printing, which everyone usually refers to, is based on the CMYK (Cyan, Magenta, Yellow, and Black) model. Traditionally, most proprietary packages ask the users to convert RGB files into CMYK. With Scribus there is no need to do so. You can import your graphics as RGB and Scribus will export them all as desired at the end of the process (which most of the time means exporting to PDF). So, if you use GIMP, you shouldn't be put off by its default lack of CMYK support.

CMYK in GIMP

If you really want to import CMYK photos into Scribus, you can use the GIMP Separate plugin to convert an RGB document to CMYK. Batch converting can be easily done using the convert utility of ImageMagick. Consider using the new CMYKTool software too, which gives some nice color test tools and options, including ink coverage.

That said, we now have to see how it really works in Scribus.
CMYK in GIMP If you really want to import CMYK photos into Scribus, you can use the GIMP Separate plugin to convert an RGB document to a CMYK. Batch converting can be easily done using the convert utility of ImageMagick. Consider using the new CMYKTool software too, which gives some nice color test tools and options, including ink coverage. That said, we now have to see how it really works in Scribus. Importing photos To import photos, you simply have to: Create an Image Frame with the Insert Image Frame tool (I) or with a shape or polygon converted to an Image Frame. Go to File | Import | Get Image or use Ctrl + D. This is the most common way to do it. On some systems, you can even drag the picture from your folder into the frame. The thing that you should never do is copy the photo (wherever it is) and paste it into your frame. When doing this, Scribus has no knowledge of the initial picture's storage location (because the copied image is placed in a special buffer shared among the applications), and that's what it needs. There are other ways to import an image. You can use: The DirectImageImport script The Picture Browser on some very new and experimental Scribus versions The DirectImageImport script is placed in the Script | Scribus Scripts menu. It will display a window waiting for you to specify which image to import. Once you validate, Scribus automatically creates the frame and places the image within. Some people find it useful because it seems that you can save one step this way. But, in fact, the frame size is a default based on the page width, so you will have to customize it. You might have to draw it directly with a good size and it should be as easy. But anyway, you'll choose the one you prefer. Always remember that each image needs a frame. The image might be smaller or bigger than the frame. In the first case it won't fill it, and in the second case it will appear cropped. You can adapt the frame size as you want; the photo won't change at all. However, one thing you will certainly want to do is move the picture within the frame to select a better area. To do this, double-click on the frame to go into the Content Editing mode; you can now drag the picture inside to place it more precisely. The contextual menu of the Image Frame will give you two menus that will help to make the size of the image the same as that of the frame: Adjust Image to Frame will extend or reduce the photo so that it fills the frame but keeps the photo's height to width ratio Adjust Frame to Image will modify the frame size so that it fits the image In both cases, once applied, you can change the size of the frame as you need to and the photo will automatically be scaled to fit the frame size. Using such options is really interesting because it is fast and easy, but it is not without some risk. When a frame shape is changed after the image is imported, the image itself doesn't change. For example, if you use any of the skew buttons in the Node window (Edit button of the Shape tab of PP) you'll see that the frame will be skewed but not the picture itself. Image changes have to be made in an image manipulation program. Relinking photos Once your photos have been imported and they fit perfectly in your page content, you may wish to stop for a day or show this to some other people. It's always a good idea to make your decisions later, as you'll see more weaknesses a few days later. It is generally at this point that problems appear. 
If you send your Scribus .sla document for proofreading, the photos won't be displayed on the proofreader's screen, as they are not embedded in the file. The same issue arises if you decide to move or rename some folders or photos on your own computer. Since Scribus just saves the location of your files and loads them when necessary, it won't be able to find them anymore. In this case, don't worry at all. Nothing is lost; it is just that Scribus has no idea of what you have done to your file, so you will just need to tell it. At this point, you have two possibilities:

Import the photos that have disappeared again, so that Scribus can save their new location.
Go to Extras | Manage Images. Show where the photos have gone by choosing the missing photo (shown with red diagonals crossing the image display area) and clicking on the Search button just below it.

When relinking, you can link to another image if you wish; the images just need to have the same name for Scribus to guess that it's the right one. So it is very important that you give unique names to your photos. You can define some naming rules, or keep the camera's automatic numbering.

If you need to send your document to someone, you can either send a PDF, which can't be modified, just annotated, or the whole Scribus document along with the photos. Simply compressing the photo folder is not the best approach: it won't keep the links working in Scribus, because they are absolute, and the folder path will differ on your reader's computer. They would need to relink every picture, and that might take long for some documents. The best technique is to use the built-in File | Collect for Output window. It will ask you to create a new directory in which all the files needed for that document will be copied, with relative paths, so that it will work everywhere. Compress this folder and send it.

Time for action – creating a postcard

As an example, let's create a postcard. It's an easy document that won't need many features and, therefore, we can concentrate on our image issues. We'll go through the dos and don'ts to let you experiment with the problems that you might have with a bad workflow. Let's create a two-page, A6 landscape document with a small margin guide of 6mm. After creating the page, use Page | Manage Guides and add one vertical guide with a gap of 5mm and one horizontal guide with a gap of 20mm.

In the first frame, import (File | Import | Get Image) the pict8-1.psd file. Notice that it is in a folder called Photos, with some other files that we will use in this document. When you're in the file selector, Scribus shows some file information: you can see the size in pixels (1552x2592), the resolution (72x72), and the colorspace (RGB). This image will look very big in the frame. Right-click and choose Adjust Image to Frame. It fits the height but not the width. Also, we'd want the bottom of the flower image to be at the bottom of the frame. Open the Properties Palette and in the Image tab, select the Free Scaling option and change the X-scale and Y-scale up to 12 or a value close to it. Now it might be better.

In the top right-hand side frame, import pict8-2.jpg and set it in the best way you can using the same procedure. Double-click on the frame and drag the picture to find the best placement. Then in the last frame of the first page, import the pict8-3.png file. You can add a Text Frame and type something inside it, such as "Utopia through centuries", and set it nicely.
On the second page we'll use one horizontal column without a gap, and one vertical column with a 5mm gap. On the right-hand part of the horizontal guide (use Page | Snap to Guides if you want to be sure), draw a horizontal line from one vertical guide to the other bordering it. Keep this line selected and go to Item | Multiple Duplicate. In the window, choose 4 copies, leave the next option selected, and define a 0.4 inch vertical gap. These lines are where the address will be written.

At the bottom left-hand corner of the same page, draw a new Image Frame in which you can import the pict8-4.tif file. In the Image properties, scale the image to the frame size and deselect Proportional, so that the image fills the frame perfectly, even if it is distorted. Then, in the XYZ tab of the PP, click on the Flip Horizontally button (the blue double arrow icon).

We have now set our pages and the photos are placed correctly. Let's create some errors deliberately. First, let's say we rename the Photos folder to PostcardPhotos. This must be done in your file browser; you cannot do it from Scribus. Go back to Scribus. Things might not have changed for now, but if you right-click on an image and choose Update Image, you will see that it won't be displayed anymore.

If you want everything to go well again, you can rename your folder back, or go to Extras | Manage Images. You see there that each Image Frame of the document is listed, and that they contain nothing, because the photos are not found anymore. For each image selected in this window, you'll have to click on the Search button and specify, in the next window, which directory it has been moved into. Scribus will display the images found (images with a similar name only). Select one of the listed images and click on Select. Everything should go well now.

What just happened?

After having created our page, we placed our photos inside the frames. By renaming the folder we broke the link between Scribus and the image files. The Manage Images window lets us see what happened. The full paths to the pictures are displayed here. In our example, all the pictures are in the same folder, but you could have imported pictures from several directories. In this case, only those included in the renamed folder would have disappeared from the Scribus frames. By clicking on Search, we told Scribus what happened to those pictures and that they still exist, but somewhere else. Notice that if you delete the photos, they will be lost forever. The best advice would be to keep a copy of everything: the original as well as the modified photos. Notice that if your images are stored on an external device, it should be plugged in and mounted.

In fact, renaming the folder is not the only reason why an image might disappear. It happens when:

We rename the image folder
We rename any parent folder of the image folder
We rename the picture
We delete the picture
We delete the whole folder containing the picture

Giving the document and the pictures to someone else by e-mail or on a USB key will be similar to the second case. In the first three cases, using the Manage Images window will help you find the images (it's better to know where they are). In the last two cases, you should be ready to look for new pictures and import them into the frames.

Julia co-creator, Jeff Bezanson, on what’s wrong with Julialang and how to tackle issues like modularity and extension

Vincy Davis
08 Aug 2019
5 min read
The Julia language, which has been touted as the new fastest-growing programming language, held its 6th annual JuliaCon 2019 from July 22nd to 26th in Baltimore, USA. On the fourth day of the conference, the co-creator of the Julia language and co-founder of Julia Computing, Jeff Bezanson, gave a talk explaining "What's bad about Julia". Firstly, Bezanson states a disclaimer that he's mentioning only those bad things in Julia which he is currently aware of. Next, he begins by listing many popular issues with the programming language.

What's wrong with Julia

Compiler latency: Compiler latency has been one of the high-priority issues in Julia. It is a lot slower when compared to other languages like Python (~27x slower) or C (~187x slower).

Static compilation support: Of course, Julia can be compiled. But unlike a language such as C, which is compiled before execution, Julia is compiled at runtime, so it provides poor support for static compilation.

Immutable arrays: Many developers have contributed immutable array packages; however, many of these packages assume mutability by default, resulting in more work for users. Julia users have therefore been requesting better support for immutable arrays.

Mutation issues: This is a common stumbling block for Julia developers, as many complain that it is difficult to identify which package is safe to mutate.

Array optimizations: To get good performance, Julia users have to manually use in-place operations to get high-performance array code.

Better traits: Users have been requesting more traits in Julia, to avoid the big unions that list all the examples of a type instead of adding a declaration. This has been a big issue in array code and linear algebra.

Incomplete notations: Many notations in Julia are incomplete, for example, for N-d arrays.

Many members of the audience agreed with Bezanson's list and appreciated his frank efforts in acknowledging the problems in Julia. In this talk, Bezanson opts to explore two not-so-popular Julia issues: modularity and extension. He says that these issues are weird and worrisome even to him.

How to tackle modularity and extension issues in Julia

A typical Julia module extends functions from another module. This helps users compose many things and get lots of new functionality for free. However, what if a user wants a separately compiled module, which would be completely sealed, predictable, and would need less time to compile, like an isolated module?

Bezanson starts by illustrating how the two issues of modularity and extension can be avoided in Julia code. He begins with two unrelated packages, which can communicate with each other by using extension in another base package. This scenario, he states, is common in a core module, which requires a few primitives like the Any type, the Int type, and others. The two packages in a core module are called Core.Compiler and Base, each having its own definitions. The two packages have some code in common, which requires the user to write the same code twice in both packages, which Bezanson thinks is "fine". The more serious problem, Bezanson says, is the typeof present in the core module. As both these packages need to define constructors for their own types, it is not possible to share these constructors. This means that, except for constructors, everything else is isolated between the two packages.
He adds, "In practice, it doesn't really matter because the types are different, so they can be distinguished just fine, but I find it annoying that we can't sort of isolate those method tables of the constructors. I find it kind of unsatisfying that there's just this one exception."

Bezanson then explains how types can be described using different representations and extensions. Later, Bezanson provides two rules to tackle method specificity issues in Julia. The first rule is that a method is more specific if its signature is a strict subtype (<:, not ==) of another signature. The second rule, according to Bezanson, is that ambiguity cannot always be avoided: if methods overlap in their arguments and have no specificity relationship, then "users have to give an ambiguity error". Bezanson says that users can thus stay on the safe side and assume that things do overlap. Also, if two signatures are similar, "then it does not matter which signature is called", adds Bezanson.

Finally, after explaining all the workarounds for these issues, Bezanson concludes that "Julia is not that bad", and states that the "Julia language could be a lot better and the team is trying their best to tackle all the issues."

Watch the video below to check out all the illustrations demonstrated by Bezanson during his talk.

https://www.youtube.com/watch?v=TPuJsgyu87U

Julia users around the world have loved Bezanson's honest and frank talk at JuliaCon 2019.

https://twitter.com/MoseGiordano/status/1154371462205231109
https://twitter.com/johnmyleswhite/status/1154726738292891648

Read More

Julia announces the preview of multi-threaded task parallelism in alpha release v1.3.0
Mozilla is funding a project for bringing Julia to Firefox and the general browser environment
Creating a basic Julia project for loading and saving data [Tutorial]

Optimization in Python

Packt
19 Aug 2015
14 min read
The path to mastering performance in Python has just started. Profiling only takes us halfway there. Measuring how our program is using the resources at its disposal only tells us where the problem is, not how to fix it. In this article by Fernando Doglio, author of the book Mastering Python High Performance, we will cover the process of optimization, and to do that, we need to start with the basics. We'll keep it inside the language for now: no external tools, just Python and the right way to use it.

We will cover the following topics in this article:

Memoization / lookup tables
Usage of default arguments

Memoization / lookup tables

This is one of the most common techniques used to improve the performance of a piece of code (namely a function). We can save the results of expensive function calls associated with a specific set of input values and return the saved result (instead of redoing the whole computation) when the function is called with the remembered input. It is sometimes confused with caching; memoization is indeed one case of caching, although that term also refers to other types of optimization (such as HTTP caching, buffering, and so on).

This methodology is very powerful, because in practice it'll turn what should have been a potentially very expensive call into an O(1) function call if the implementation is right. Normally, the parameters are used to create a unique key, which is then used on a dictionary to either save the result or obtain it if it's already been saved.

There is, of course, a trade-off to this technique. If we're going to be remembering the returned values of a memoized function, then we'll be exchanging memory space for speed. This is a very acceptable trade-off, unless the saved data becomes more than what the system can handle.

Classic use cases for this optimization are function calls that repeat the input parameters often. This will ensure that most of the time, the memoized results are returned. If there are many function calls but with different parameters, we'll only store results and spend our memory without any real benefit, as seen in the following diagram:

You can clearly see how the blue bar (Fixed params, memoized) is the fastest use case, while the others are all similar due to their nature.
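Before looking at the full benchmark code below, here is a minimal sketch of the dictionary-keyed pattern just described. The decorator name and the toy workload are illustrative only and are not taken from the book:

def memoize(fn):
    cache = {}
    def wrapper(*args):
        # Build a key from the arguments; assumes they have a stable string form.
        key = str(args)
        if key not in cache:
            cache[key] = fn(*args)
        return cache[key]
    return wrapper

@memoize
def slow_square_sum(n):
    # Placeholder for a potentially expensive computation.
    return sum(i * i for i in range(n))

Calling slow_square_sum(10000) twice would run the loop only once; the second call returns the stored result at dictionary-lookup cost.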
Here is the code that generates the values for the preceding chart. To generate some sort of time-consuming function, the code will call either the twoParams function or the twoParamsMemoized function several hundred times under different conditions, and it will log the execution time:

import math
import time
import random

class Memoized:
    def __init__(self, fn):
        self.fn = fn
        self.results = {}

    def __call__(self, *args):
        key = ''.join(map(str, args[0]))
        try:
            return self.results[key]
        except KeyError:
            self.results[key] = self.fn(*args)
            return self.results[key]

@Memoized
def twoParamsMemoized(values, period):
    totalSum = 0
    for x in range(0, 100):
        for v in values:
            totalSum = math.pow((math.sqrt(v) * period), 4) + totalSum
    return totalSum

def twoParams(values, period):
    totalSum = 0
    for x in range(0, 100):
        for v in values:
            totalSum = math.pow((math.sqrt(v) * period), 4) + totalSum
    return totalSum

def performTest():
    valuesList = []
    for i in range(0, 10):
        valuesList.append(random.sample(xrange(1, 101), 10))

    start_time = time.clock()
    for x in range(0, 10):
        for values in valuesList:
            twoParamsMemoized(values, random.random())
    end_time = time.clock() - start_time
    print "Fixed params, memoized: %s" % (end_time)

    start_time = time.clock()
    for x in range(0, 10):
        for values in valuesList:
            twoParams(values, random.random())
    end_time = time.clock() - start_time
    print "Fixed params, without memoizing: %s" % (end_time)

    start_time = time.clock()
    for x in range(0, 10):
        for values in valuesList:
            twoParamsMemoized(random.sample(xrange(1, 2000), 10), random.random())
    end_time = time.clock() - start_time
    print "Random params, memoized: %s" % (end_time)

    start_time = time.clock()
    for x in range(0, 10):
        for values in valuesList:
            twoParams(random.sample(xrange(1, 2000), 10), random.random())
    end_time = time.clock() - start_time
    print "Random params, without memoizing: %s" % (end_time)

performTest()

The main insight to take from the preceding chart is that, just like with every aspect of programming, there is no silver bullet algorithm that will work for all cases. Memoization is clearly a very basic way of optimizing code, but it won't optimize anything under the wrong circumstances.

As for the code, there is not much to it. It is a very simple, non-real-world example of the point I was trying to get across. The performTest function will take care of running a series of 10 tests for every use case and measure the total time each use case takes. Notice that we're not really using profilers at this point. We're just measuring time in a very basic and ad hoc way, which works for us.

The input for both functions is simply a set of numbers on which they will run some math functions, just for the sake of doing something. The other interesting bit about the arguments is that, since the first argument is a list of numbers, we can't just use the args parameter as a key inside the Memoized class' methods. This is why we have the following line:

key = ''.join(map(str, args[0]))

This line will concatenate all the numbers from the first parameter into a single value, which will act as the key. The second parameter is not used here, because it's always random, which would imply that the key would never be the same.

Another variation of the preceding method is to pre-calculate all values from the function (assuming we have a limited number of inputs, of course) during initialization and then refer to the lookup table during execution.
This approach has several preconditions:

The number of input values must be finite; otherwise, it's impossible to precalculate everything.
The lookup table, with all of its values, must fit into memory.
Just like before, the input must be repeated, at least once, so the optimization both makes sense and is worth the extra effort.

There are different approaches when it comes to architecting the lookup table, all offering different types of optimizations. It all depends on the type of application and solution that you're trying to optimize. Here is a set of examples.

Lookup on a list or linked list

This solution works by iterating over an unsorted list and checking the key against each element, with the associated value as the result we're looking for. This is obviously a very slow implementation, with a big O notation of O(n) for both the average and worst case scenarios. Still, given the right circumstances, it could prove to be faster than calling the actual function every time. In this case, using a linked list could improve the performance of the algorithm over using a simple list. However, it would still depend heavily on the type of linked list it is (doubly linked list, simple linked list with direct access to the first and last elements, and so on).

Simple lookup on a dictionary

This method works using a one-dimensional dictionary lookup, indexed by a key consisting of the input parameters (enough of them to create a unique key). In particular cases (like the one we covered earlier), this is probably one of the fastest lookups, even faster than binary search in some cases, with a constant execution time (big O notation of O(1)). Note that this approach is efficient as long as the key-generation algorithm is capable of generating unique keys every time. Otherwise, the performance could degrade over time due to the many collisions in the dictionary.

Binary search

This particular method is only possible if the list is sorted. This could potentially be an option depending on the values to sort; yet, sorting them requires extra work that would offset part of the gains. However, it presents very good results even in long lists (average big O notation of O(log n)). It works by determining in which half of the list the value is and repeating until either the value is found or the algorithm is able to determine that the value is not in the list.

To put all of this into perspective, the Memoized class mentioned earlier implements a simple lookup on a dictionary. However, this would be the place to implement either of the other algorithms.
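To make the last two approaches concrete, here is a small sketch that is not taken from the book: it precalculates a toy function once (math.sqrt stands in for whatever expensive call is being avoided) and then answers lookups either through a dictionary or through a binary search over a sorted key list, using the standard bisect module:

import bisect
import math

# Precalculate the results once, keeping both a dictionary and a sorted key list.
table = dict((x, math.sqrt(x)) for x in range(0, 1000))
sorted_keys = sorted(table)
sorted_values = [table[k] for k in sorted_keys]

def dict_lookup(x):
    # O(1) on average, as long as the keys don't collide.
    return table[x]

def binary_search_lookup(x):
    # O(log n): locate x in the sorted key list, then read the matching value.
    idx = bisect.bisect_left(sorted_keys, x)
    if idx < len(sorted_keys) and sorted_keys[idx] == x:
        return sorted_values[idx]
    raise KeyError(x)

Both functions return the same precalculated values; which one wins depends on the size of the table and on how expensive key generation and hashing are for the real inputs.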
Use cases for lookup tables

There are some classic example use cases for this type of optimization, but the most common one is probably the optimization of trigonometric functions. In terms of computing time, these functions are really slow. When used repeatedly, they can cause some serious damage to your program's performance. This is why it is normally recommended to precalculate the values of these functions. For functions that deal with an infinite domain of possible input values, this task becomes impossible. So, the developer is forced to sacrifice accuracy for performance by precalculating a discrete subdomain of the possible input values (that is, going from floating points down to integer numbers). This approach might not be ideal in some cases, since some systems require both performance and accuracy.

So, the solution is to meet in the middle and use some form of interpolation to calculate the wanted value, based on the ones that have been precalculated. It will provide better accuracy. Even though it won't be as performant as using the lookup table directly, it should prove to be faster than doing the trigonometric calculation every time. Let's look at some examples of this, for instance, for the following trigonometric function:

def complexTrigFunction(x):
    return math.sin(x) * math.cos(x)**2

We'll take a look at how simple precalculation won't be accurate enough and how some form of interpolation will result in a better level of accuracy. The following code will precalculate the values for the function on a range from -1000 to 1000 (only integer values). Then, it'll try to do the same calculation (only for a smaller range) for floating point numbers:

import math
import time
from collections import defaultdict
import itertools

trig_lookup_table = defaultdict(lambda: 0)

def drange(start, stop, step):
    assert(step != 0)
    sample_count = math.fabs((stop - start) / step)
    return itertools.islice(itertools.count(start, step), sample_count)

def complexTrigFunction(x):
    return math.sin(x) * math.cos(x)**2

def lookUpTrig(x):
    return trig_lookup_table[int(x)]

for x in range(-1000, 1000):
    trig_lookup_table[x] = complexTrigFunction(x)

trig_results = []
lookup_results = []

init_time = time.clock()
for x in drange(-100, 100, 0.1):
    trig_results.append(complexTrigFunction(x))
print "Trig results: %s" % (time.clock() - init_time)

init_time = time.clock()
for x in drange(-100, 100, 0.1):
    lookup_results.append(lookUpTrig(x))
print "Lookup results: %s" % (time.clock() - init_time)

for idx in range(0, 200):
    print "%s\t%s" % (trig_results[idx], lookup_results[idx])

The results from the preceding code help demonstrate how the simple lookup table approach is not accurate enough (see the following chart). However, it compensates with speed, since the original function takes 0.001526 seconds to run while the lookup table only takes 0.000717 seconds.

The preceding chart shows how the lack of interpolation hurts the accuracy. You can see that even though both plots are quite similar, the results from the lookup table execution aren't as accurate as the trig function used directly. So, now, let's take another look at the same problem.
However, this time, we'll add some basic interpolation (we'll limit the range of values from -PI to PI):

import math
import time
from collections import defaultdict
import itertools

trig_lookup_table = defaultdict(lambda: 0)

def drange(start, stop, step):
    assert(step != 0)
    sample_count = math.fabs((stop - start) / step)
    return itertools.islice(itertools.count(start, step), sample_count)

def complexTrigFunction(x):
    return math.sin(x) * math.cos(x)**2

reverse_indexes = {}
for x in range(-1000, 1000):
    trig_lookup_table[x] = complexTrigFunction(math.pi * x / 1000)

complex_results = []
lookup_results = []

init_time = time.clock()
for x in drange(-10, 10, 0.1):
    complex_results.append(complexTrigFunction(x))
print "Complex trig function: %s" % (time.clock() - init_time)

init_time = time.clock()
factor = 1000 / math.pi
for x in drange(-10 * factor, 10 * factor, 0.1 * factor):
    lookup_results.append(trig_lookup_table[int(x)])
print "Lookup results: %s" % (time.clock() - init_time)

for idx in range(0, len(lookup_results)):
    print "%s\t%s" % (complex_results[idx], lookup_results[idx])

As you might have noticed in the previous chart, the resulting plot is periodic (especially because we've limited the range from -PI to PI). So, we'll focus on a particular range of values that will generate one single segment of the plot. The output of the preceding script also shows that the interpolation solution is still faster than the original trigonometric function, although not as fast as it was earlier:

Interpolation solution: 0.000118 seconds
Original function: 0.000343 seconds

The following chart is a bit different from the previous one, especially because it shows (in green) the error percentage between the interpolated value and the original one:

The biggest error we have is around 12 percent (which represents the peaks we see on the chart). However, it occurs for the smallest values, such as -0.000852248551417 versus -0.000798905501416. This is a case where the error percentage needs to be contextualized to see if it really matters. In our case, since the values related to that error are so small, we can ignore the error in practice.

There are other use cases for lookup tables, such as in image processing. However, for the sake of this article, the preceding example should be enough to demonstrate their benefits and the trade-off implied in their usage.

Usage of default arguments

Another optimization technique, one that is contrary to memoization, is not particularly generic. Instead, it is directly tied to how the Python interpreter works. Default arguments can be used to determine values once at function creation time instead of at run time. This can only be done for functions or objects that will not be changed during program execution.

Let's look at an example of how this optimization can be applied. The following code shows two versions of the same function, which does some random trigonometric calculation:

import math

# original function
def degree_sin(deg):
    return math.sin(deg * math.pi / 180.0)

# optimized function, the factor variable is calculated during function creation time,
# and so is the lookup of the math.sin method.
def degree_sin(deg, factor=math.pi/180.0, sin=math.sin):
    return sin(deg * factor)

This optimization can be problematic if not correctly documented. Since it uses attributes to precompute terms that should not change during the program's execution, it could lead to the creation of a confusing API.
With a quick and simple test, we can double-check the performance gain from this optimization:

import time
import math

def degree_sin(deg):
    return math.sin(deg * math.pi / 180.0) * math.cos(deg * math.pi / 180.0)

def degree_sin_opt(deg, factor=math.pi/180.0, sin=math.sin, cos=math.cos):
    return sin(deg * factor) * cos(deg * factor)

normal_times = []
optimized_times = []

for y in range(100):
    init = time.clock()
    for x in range(1000):
        degree_sin(x)
    normal_times.append(time.clock() - init)

    init = time.clock()
    for x in range(1000):
        degree_sin_opt(x)
    optimized_times.append(time.clock() - init)

print "Normal function: %s" % (reduce(lambda x, y: x + y, normal_times, 0) / 100)
print "Optimized function: %s" % (reduce(lambda x, y: x + y, optimized_times, 0) / 100)

The preceding code measures the time it takes for each version of the function to run 1000 times, repeats that measurement 100 times, and finally averages the results for each case. The result is displayed in the following chart:

It clearly isn't an amazing optimization. However, it does shave off some microseconds from our execution time, so we'll keep it in mind. Just remember that this optimization could cause problems if you're working as part of an OS developer team.

Summary

In this article, we covered several optimization techniques. Some of them are meant to provide big boosts in speed or save memory. Some of them are just meant to provide minor speed improvements. Most of this article covered Python-specific techniques, but some of them can be translated into other languages as well.

Resources for Article:

Further resources on this subject:

How to do Machine Learning with Python [article]
The Essentials of Working with Python Collections [article]
Symbolizers [article]

Polyglot programming allows developers to choose the right language to solve tough engineering problems

Richard Gall
11 Jun 2019
9 min read
Programming languages can divide opinion. They are, for many engineers, a mark of identity. Yes, they say something about the kind of work you do, but they also say something about who you are and what you value. But this is changing, with polyglot programming becoming a powerful and important trend. We're moving towards a world in which developers are no longer as loyal to their chosen programming languages as they were. Instead, they are more flexible and open minded about the languages they use. This year's Skill Up report highlights that there are a number of different drivers behind the programming languages developers use which, in turn, imply a level of contextual decision making. Put simply, developers today are less likely to stick with a specific programming language, and instead move between them depending on the problems they are trying to solve and the tasks they need to accomplish. Download this year's Skill Up report here.

(Chart: Skill Up 2019 data)

As the data above shows, languages aren't often determined by organizational requirements. They are more likely to be if you're primarily using Java or C#, but that makes sense as these are languages that have long been associated with proprietary software organizations (Oracle and Microsoft respectively); in fact, programming languages are often chosen due to projects and use cases.

The return to programming language standardization

This is something backed up by the most recent ThoughtWorks Radar, published in April. Polyglot programming finally moved its way into the Adopt quadrant. This is after 9 years of living in the Trial quadrant. Part of the reason for this, ThoughtWorks explains, is that the organization is seeing a reaction against this flexibility, writing that "we're seeing a new push to standardize language stacks by both developers and enterprises." The organization argues - quite rightly - that "promoting a few languages that support different ecosystems or language features is important for both enterprises to accelerate processes and go live more quickly and developers to have the right tools to solve the problem at hand." Arguably, we're in the midst of a conflict within software engineering. On the one hand, the drive to standardize tooling in the face of increasingly complex distributed systems makes sense, but it's one that we should resist. This level of standardization will ultimately remove decision-making power from engineers.

What's driving polyglot programming?

It's probably worth digging a little deeper into why developers are starting to be more flexible about the languages they use. One of the most important drivers of this change is the dominance of Agile as a software engineering methodology. As Agile has become embedded in the software industry, software engineers have found themselves working across the stack rather than specializing in a specific part of it.

Full-stack development and polyglot programming

This is something suggested by Stack Overflow survey data. This year 51.9% of developers described themselves as full-stack developers compared to 50.0% describing themselves as backend developers. This is a big change from 2018, where 57.9% described themselves as backend developers compared to 48.2% of respondents calling themselves full-stack developers.
Given that earlier Stack Overflow data from 2016 indicates that full-stack developers are comfortable using more languages and frameworks than other roles, it's understandable that today we're seeing developers take more ownership and control over the languages (and, indeed, other tools) they use. With developers sitting in small Agile teams, working closer to problem domains than they may have been a decade ago, the power is now much more in their hands to select and use the programming languages and tools that are most appropriate.

If infrastructure is code, more people are writing code... which means more people are using programming languages

But it's not just about full-stack development. With infrastructure today being treated as code, it makes sense that those responsible for managing and configuring it - sysadmins, SREs, systems engineers - need to use programming languages. This is a dramatic shift in how we think about system administration and infrastructure management; programming languages are important to a whole new group of people.

Python and polyglot programming

The popularity of Python is symptomatic of this industry-wide change. Not only is it a language primarily selected due to use case (as the data above shows), it's also a language that's popular across the industry. When we asked our survey respondents what language they want to learn next, Python came out on top regardless of their primary programming language.

(Chart: Skill Up 2019 data)

This highlights that Python has appeal across the industry. It doesn't fit neatly into a specific job role, and it isn't designed for a specific task. It's flexible - as developers today need to be. Although it's true that Python's popularity is being driven by machine learning, it would be wrong to see this as the sole driver. It is, in fact, its wide range of use cases, ranging from scripting to building web services and APIs, that is making Python so popular. Indeed, it's worth noting that Python is viewed as a tool as much as it is a programming language. When we specifically asked survey respondents what tools they wanted to learn, Python came up again, suggesting it occupies a category unlike every other programming language.

(Chart: Skill Up 2019 data)

What about other programming languages?

The popularity of Python is a perfect starting point for today's polyglot programmer. It's relatively easy to learn, and it can be used for a range of different tasks. But if we're to convincingly talk about a new age of programming, where developers are comfortable using multiple programming languages, we have to look beyond the popularity of Python at other programming languages. Perhaps a good way to do this is to look at the languages developers primarily using Python want to learn next. If you look at the graphic above, there's no clear winner for Python developers. While every other language is showing significant interest in Python, Python developers are looking at a range of different languages. This alone isn't evidence of the popularity of polyglot programming, but it does indicate some level of fragmentation in the programming language 'marketplace'. Or, to put it another way, we're moving to a place where it becomes much more difficult to say that given languages are definitive in a specific field.
The popularity of Golang

Go has particular appeal for Python programmers, with almost 20% saying they want to learn it next. This isn't that surprising - Go is a flexible language that has many applications, from microservices to machine learning, but most importantly can give you incredible performance. With powerful concurrency, goroutines, and garbage collection, it has features designed to ensure application efficiency. Given it was designed by Google this isn't that surprising - it's almost purpose built for software engineering today. Its popularity with JavaScript developers further confirms that it holds significant developer mindshare, particularly among those in positions where projects and use cases demand flexibility.

Read next: Is Golang truly community driven and does it really matter?

A return to C++

An interesting contrast to the popularity of Go is the relative popularity of C++ in our Skill Up results. C++ is ancient in comparison to Golang, but it nevertheless seems to occupy a similar level of developer mindshare. The reasons are probably similar - it's another language that can give you incredible power and performance. For Python developers, part of the attraction is down to its usefulness for deep learning (TensorFlow is written in C++). But more than that, C++ is also an important foundational language. While it isn't easy to learn, it does help you to understand some of the fundamentals of software. From this perspective, it provides a useful starting point to go on and learn other languages; it's a vital piece that can unlock the puzzle of polyglot programming.

A more mature JavaScript

JavaScript also came up in our Skill Up survey results. Indeed, Python developers are keen on the language, which tells us something about the types of tasks Python developers are doing as well as the way JavaScript has matured. On the one hand, Python developers are starting to see the value of web-based technologies, while on the other, JavaScript is also expanding in scope to become much more than just a front-end programming language.

Read next: Is web development dying?

Kotlin and TypeScript

The appearance of other smaller languages in our survey results emphasises the way in which the language ecosystem is fragmenting. TypeScript, for example, may not ever supplant JavaScript, but it could become an important addition to a developer's skill set if they begin running into problems scaling JavaScript. Kotlin represents something similar for Java developers - indeed, it could even eventually outpace its older relative. But again, its popularity will emerge according to specific use cases. It will begin to take hold in particular where Java's limitations become more exposed, such as in modern app development.

Rust: a Goldilocks programming language perfect for polyglot programming

One final mention deserves to go to Rust. In many ways Rust's popularity is related to the continued relevance of C++, but it offers some improvements - essentially, it's easier to leverage Rust, while using C++ to its full potential requires experience and skill.

Read next: How Deliveroo migrated from Ruby to Rust without breaking production

One commenter on Hacker News described it as a 'Goldilocks' language - "It's not so alien as to make it inaccessible, while being alien enough that you'll learn something from it." This is arguably what a programming language should be like in a world where polyglot programming rules.
It shouldn't be so complex as to consume your time and energy, but it should also be sophisticated enough to allow you to solve difficult engineering problems.

Learning new programming languages makes it easier to solve engineering problems

The value of learning multiple programming languages is indisputable. Python is the language that's changing the game, becoming a vital extra for a range of developers from different backgrounds, but there are plenty of other languages that could prove useful. What's ultimately important is to explore the options that are available and to start using a language that's right for you. Indeed, that's not always immediately obvious - but don't let that put you off. Give yourself some time to explore new languages and find the one that's going to work for you.

OpenJDK Project Valhalla is ready for developers working in building data structures or compiler runtime libraries

Vincy Davis
01 Aug 2019
4 min read
This year, the JVM Language Summit 2019 was held from July 29th to 31st in Santa Clara, California. On the first day of the Summit, Oracle Java language architect Brian Goetz gave a talk on updates to OpenJDK Project Valhalla. He shared details on its progress, the challenges being faced, and what to expect from Project Valhalla in the future. He also talked about the significance of Project Valhalla's LW2 phase, which was released earlier last month. OpenJDK Project Valhalla is now ready for developers to use for early-adopter experimentation in data structures and language runtimes, concluded Goetz.

The main goal of OpenJDK Project Valhalla is to reboot the Java Virtual Machine's (JVM) relationship with data and memory, and in particular to enable denser and flatter layouts of object graphs in memory. The major restriction on the development of object layout has been object identity. Object identity enables mutability, layout polymorphism, and locking, among others. Since not all objects need object identity, and it would be impractical for the JVM to determine on its own whether identity is relevant, Goetz expects programmers to declare this about a class, which makes it easier to make a broader range of assumptions about it.

Who cares about value types?

Goetz believes that value types are important for many applications and for writers who desire better control of memory layout and like to use memory very wisely. He says that library writers would always prefer value types, as they allow them to use all the traditional abstractions without paying the runtime cost of taking an extra indirection every time somebody uses a particular abstraction. Library classes like optionals, cursors, or better numerics would then no longer have to pay the object tax. Similarly, compiler writers of non-Java languages can use value types as an efficient substrate for language features like tuples, multiple return values, built-in numeric types, and wrapped native resources. Today, both library writers and compiler writers, and their users, pay the object tax. Value types, in a nutshell, can help programmers make their code run faster.

Erased and specialized generics

Currently, OpenJDK Project Valhalla uses erased generics and will eventually have specialized generics. With erased generics, Valhalla uses the knowable-type convention, where the erased list of values can be written as Foo<V?>; this will also carry over to specialized generics later on. He also adds that this syntax cannot be used as of now, as the Valhalla team does not want existing utterances of Foo to spontaneously change their meaning. Goetz hopes that the migration of generic classes like ArrayList<T> to specialized generics will be painless.

New top types

Project Valhalla needs new top types, RefObject and ValObject, for references and values, as types are used to indicate a programmer's intent. It helps the object model reflect the new reality: everything is an object, but not every object needs an identity. There are many benefits to building ref-ness and val-ness into the type system, such as:

Dynamically asking x instanceof RefObject
Statically constraining method parameters or return values
Restricting type parameters
A natural place to hang ref- or val-specific behavior

Nullity

Nullity is labelled as one of the most controversial issues in Valhalla. As many values use all their bit patterns, nullity interferes with a number of useful optimizations. On the other hand, if some types are migrated towards values, existing code will assume nullability.
Nullity is expected to be a focus of the L3 investigation.

What to expect next in Project Valhalla

Lastly, Goetz announces that developers building data structures or compiler runtime libraries can start using Project Valhalla. He also adds that the Project Valhalla team is working hard to validate the current programming model by quantifying the costs of equality, covariance, and so on, and is trying to improve the user control experience. Goetz concluded by stating that OpenJDK Project Valhalla is at an inflection point and is trying to figure out nullity, migration, specialized generics, and support for Graal in future builds. You can watch Brian Goetz's full talk for more details.

Getting started with Z Garbage Collector (ZGC) in Java 11 [Tutorial]
Storm 2.0.0 releases with Java enabled architecture, new core and streams API, and more
Brian Goetz on Java futures at FOSDEM 2019