
Tech News - Languages

202 Articles

The Golang team has started working on Go 2 proposals

Prasad Ramesh
30 Nov 2018
4 min read
Yesterday, Google engineer Robert Griesemer published a blog post outlining the next steps for Golang towards the Go 2 release. Google developer Russ Cox started the thought process behind Go 2 in his talk at GopherCon 2017. The talk was about the future of Go and, because of the changes it discussed, it was informally called Go 2. A major difference between the two versions lies in how design changes are driven: the first version involved only a small team, while the second will have much more participation from the community. The proposal process started in 2015, and the Go core team will now work on the proposals for the second version of the programming language.

The current status of Go 2 proposals

As of November 2018, there are about 120 open issues on GitHub labeled "Go 2 proposal". Most of them revolve around significant language or library changes that are often not compatible with Go 1. The ideas from these proposals will probably influence the language and libraries of the second version. There are now millions of Go programmers and a large body of Go code that needs to be brought along without splitting the ecosystem, so the changes must be few and carefully selected. To do this, the Go core team is implementing a proposal evaluation process for significant potential changes.

The proposal evaluation process

The purpose of the evaluation process is to collect feedback on a small number of select proposals in order to make a final decision. The process runs in parallel to a release cycle and has five steps:

1. Proposal selection: The Go core team selects a few Go 2 proposals that look like good candidates for acceptance.
2. Proposal feedback: The Go team announces the selected proposals and collects feedback from the community. This gives the larger community an opportunity to make suggestions or express concerns.
3. Implementation: The proposals are implemented based on the feedback received. The goal is to have significant changes ready to submit on the first day of an upcoming release cycle.
4. Implementation feedback: The Go team and community can experiment with the new features during the development cycle, which provides further feedback.
5. Final launch decision: The Go team makes the final decision on shipping each change at the end of the three-month development cycle. At this point there is an opportunity to consider whether the change delivers the expected benefits or has created unexpected costs. Once shipped, the changes become part of the Go language.

Proposal selection process and the selected proposals

For a proposal to be selected, the minimum criteria are that it should:

- address an important issue for a large number of users
- have a minimal impact on other users
- come with a clear and well-understood solution

For this trial run, a few proposals that are backward compatible, and hence less likely to break existing programs, will be implemented:

- General Unicode identifiers based on Unicode TR31, which will allow identifiers in non-Western alphabets.
- Binary integer literals and support for _ (underscore) as a digit separator in number literals. Not a very big change on its own, but it brings Go up to par with other languages in this respect.
- Permitting signed integers as shift counts. This will clean up code and bring shift expressions in sync with index expressions and built-in functions like cap and len.

The Go team has now started the proposal evaluation process, and the community can provide feedback. Proposals with clear, positive feedback will be taken forward, with the aim of having implementations ready by February 1, 2019. The development cycle runs from February to May 2019, and the chosen features will be implemented as per the outlined process. For more details, you can visit the Go Blog.

Golang just celebrated its ninth anniversary
GoCity: Turn your Golang program into a 3D city
Golang plans to add a core implementation of an internal language server protocol
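The article notes that binary literals and digit separators simply bring Go up to par with other languages. As a concrete point of comparison (a sketch, not from the article), this is roughly how the proposed syntax, including a signed shift count, already looks in Rust:

fn main() {
    let mask = 0b1010_0001u8;  // binary integer literal with _ as a digit separator
    let million = 1_000_000;   // _ groups digits for readability
    let shift: i32 = 3;        // a signed integer used as a shift count
    println!("{} {} {}", mask, million, 1u32 << shift);
}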


Storm 2.0.0 releases with Java enabled architecture, new core and streams API, and more

Vincy Davis
03 Jun 2019
4 min read
Last week, the Apache Storm PMC announced the release of Storm 2.0.0. The major highlight of this release is that Storm has been re-architected in pure Java; previously, a large part of Storm's core functionality was implemented in Clojure. The release also includes significant improvements in performance, a new Streams API, windowing enhancements, and Kafka integration changes.

New architecture implemented in Java

With this release, Storm has been re-architected with its core functionality implemented in pure Java. The new implementation has improved performance significantly and made internal APIs more maintainable and extensible. Clojure often posed a barrier to entry for new contributors, so Storm's codebase will now be more accessible to developers who don't want to learn Clojure in order to contribute.

New high-performance core

Storm 2.0.0 has a new core featuring a leaner threading model, a blazing fast messaging subsystem, and a lightweight back pressure model. It has been designed to push the boundaries on throughput, latency, and energy consumption while maintaining backward compatibility. This also makes Storm 2.0 the first streaming engine to break the 1-microsecond latency barrier.

New Streams API

This version has a new typed API that expresses streaming computations more easily using functional-style operations. It builds on top of Storm's core spout and bolt APIs and automatically fuses multiple operations together, which helps optimize the pipeline.

Windowing enhancements

Storm 2.0.0's windowing API can now save and restore the window state to the configured state backend, which enables larger continuous windows to be supported. The window boundaries can also be accessed via the APIs.

Kafka integration changes

Removal of storm-kafka: Due to Kafka's deprecation of the underlying client library, the storm-kafka module has been removed. Users will have to move to the storm-kafka-client module, which uses Kafka's 'kafka-clients' library for integration.

Move to the KafkaConsumer.assign API: Kafka's own subscription mechanism, which was used in Storm 1.x, has been removed entirely in 2.0.0. The storm-kafka-client subscription interface has also been removed, due to the limited control it offered over subscription behavior. It has been replaced with the 'TopicFilter' and 'ManualPartitioner' interfaces. Users with custom subscriptions can head over to the storm-kafka-client documentation, which describes how to customize assignment.

Other Kafka highlights: The KafkaBolt now allows you to specify a callback that will be called when a batch is written to Kafka. The FirstPollOffsetStrategy behavior has been made consistent between the non-Trident and Trident spouts. storm-kafka-client now has a transactional non-opaque Trident spout.

Users have also been notified that the 1.0.x version line will no longer be maintained, and the team strongly encourages users to upgrade to a more recent release. Java 7 support has been dropped; Storm 2.0.0 requires Java 8.

There has been a mixed reaction from users over the changes in Storm 2.0.0. A few users are not happy with Apache dropping Clojure. As a user on Hacker News comments, "My team has been using Clojure for close to a decade, and we found the opposite to be the case. While the pool of applicants is smaller, so is the noise ratio. Clojure being niche means that you get people who are willing to look outside the mainstream, and are typically genuinely interested in programming. In case of Storm, Apache commons is run by Java devs who have zero interest in learning Clojure. So, it's not surprising they would rewrite Storm in their preferred language."

Some users think that dropping Clojure shows that developers nowadays are unwilling to learn new things. As a user on Hacker News comments, "There is a false cost assigned to learning a language. Developers are too unwilling to even try stepping beyond the boundaries of the first thing they learned. The cost is always lower than they may think, and the benefits far surpassing what they may think. We've got to work at showing developers those benefits early; it's as important to creating software effectively as any other engineer's basic toolkit."

Others are quite happy with Storm moving to Java. A user on Reddit said, "To me, this makes total sense as the project moved to Apache. Obviously, much more people will be able to consider contributing when it's in Java. Apache goal is sustainability and long-term viability, and Java would work better for that."

To download Storm 2.0.0, visit the Storm downloads page.

Walkthrough of Storm UI
Storing Apache Storm data in Elasticsearch
Getting started with Storm Components for Real Time Analytics


LLVM’s Clang 9.0 to ship with experimental support for OpenCL C++17, asm goto initial support, and more

Bhagyashree R
17 Sep 2019
2 min read
The stable release of LLVM 9.0 is expected in the next few weeks, along with subprojects like Clang 9.0. As per the release notes, the upcoming Clang 9.0 release will come with experimental support for C++17 features in OpenCL, initial asm goto support, and much more.

Read also: LLVM 9.0 RC3 is now out with official RISC-V support, updates to SystemZ and more

What's coming in Clang 9.0.0

Experimental support for C++17 features in OpenCL

Clang 9.0.0 will have experimental support for C++17 features in OpenCL. The experimental support includes improved address space behavior in the majority of C++ features. There is support for OpenCL-specific types such as images, samplers, events, and pipes. Also, invoking global constructors from the host side is possible using a specific, compiler-generated kernel.

C language updates in Clang

Clang 9.0.0 includes the __FILE_NAME__ macro as a Clang-specific extension supported in all C-family languages. It is very similar to the __FILE__ macro, except that it always provides the last path component when possible. Another C language update is initial support for asm goto statements, which transfer control flow from inline assembly to labels. This construct will mainly be used by the Linux kernel (CONFIG_JUMP_LABEL=y) and glib.

Building Linux kernels with Clang 9.0

With the addition of asm goto support, the mainline Linux kernel for x86_64 is now buildable and bootable with Clang 9. The team adds, "The Android and ChromeOS Linux distributions have moved to building their Linux kernels with Clang, and Google is currently testing Clang built kernels for their production Linux kernels."

Read also: Linux 4.19 kernel releases with open arms and AIO-based polling interface; Linus back to managing the Linux kernel

Build system changes

Previously, the install-clang-headers target used to install clang's resource directory headers. With Clang 9.0, this installation is done by the install-clang-resource-headers target. "Users of the old install-clang-headers target should switch to the new install-clang-resource-headers target. The install-clang-headers target now installs clang's API headers (corresponding to its libraries), which is consistent with the install-llvm-headers target," the release notes read.

To know what else is coming in Clang 9.0, check out its official release notes.

Other news in programming

Core Python team confirms sunsetting Python 2 on January 1, 2020
Developers from the Swift for TensorFlow project propose adding first-class differentiable programming to Swift
Microsoft introduces Static TypeScript, as an alternative to embedded interpreters, for programming MCU-based devices


IPython 7.0 releases with AsyncIO Integration and new Async libraries

Natasha Mathur
28 Sep 2018
2 min read
The IPython team released version 7.0 of IPython yesterday. IPython is a powerful Python interactive shell with features such as advanced tab completion, syntactic coloration, and more. IPython 7.0 brings new features such as AsyncIO integration, new async libraries, and async support in notebooks.

IPython (Interactive Python) provides a rich toolkit for interactive computing in multiple programming languages. It's the Jupyter kernel for Python used by millions of users. Let's discuss the key features in the IPython 7.0 release.

AsyncIO integration

IPython 7.0 comes with the integration of IPython and AsyncIO. This means that you don't have to import or learn about asyncIO anymore. AsyncIO is a library which lets you write concurrent code using the async/await syntax. It is used as a foundation for multiple Python asynchronous frameworks providing high-performance network and web servers, database connection libraries, distributed task queues, and so on. Just remember that asyncIO won't magically make your code faster, but it will make concurrent code easier to write.

New async libraries (Curio and Trio integration)

Python has the keywords async and await, which help simplify asynchronous programming and the standardization around asyncIO. They also allow experimentation with new paradigms for asynchronous libraries. Now, support for two new async libraries, Curio and Trio, has been added in IPython 7.0. Both of these libraries explore ways to write asynchronous programs, and how to use async, await, and coroutines when starting from a blank slate.

Curio is a library for performing concurrent I/O and common system programming tasks. It makes use of Python coroutines and the explicit async/await syntax. Trio is an async/await-native I/O library for Python. It lets you write programs that do multiple things at the same time with parallelized I/O.

Async support in notebooks

Async code will now work in a notebook when using ipykernel for Jupyter users. With IPython 7.0, async will work with all the frontends that support the Jupyter protocol, including the classic Notebook, JupyterLab, Hydrogen, nteract desktop, and nteract web. The default code will run in the existing asyncIO/tornado loop that runs the kernel.

For more information, check out the official release notes.

Make Your Presentation with IPython
How to connect your Vim editor to IPython
Increase your productivity with IPython


Rust and Web Assembly announce ‘wasm-bindgen 0.2.16’ and the first release of ‘wasm-bindgen-futures’

Savia Lobo
14 Aug 2018
3 min read
Yesterday, the Rust and WebAssembly community made two announcements: it released wasm-bindgen version 0.2.16, and it published the first release of wasm-bindgen-futures.

wasm-bindgen facilitates high-level communication between JavaScript and Rust compiled to WebAssembly. It allows one to speak in terms of Rust structs, JavaScript classes, strings, etc., instead of only the integers and floats supported by WebAssembly's raw calling convention. wasm-bindgen is designed to support the upcoming "Host Bindings" proposal, which will eliminate the need for any kind of JavaScript shim functions between WebAssembly functions and native DOM functions.

What's new in wasm-bindgen 0.2.16

Added features:

- Added the wasm_bindgen::JsCast trait, as described in RFC #2.
- Added support for receiving Option<&T> parameters from JavaScript in exported Rust functions and methods, and for receiving Option<u32> and other option-wrapped scalars.
- Added reference documentation to the guide for every #[wasm_bindgen] attribute and how it affects the generated bindings.

Changes in version 0.2.16:

- Restructured the guide's documentation on passing JS closures to Rust and Rust closures to JS. Also improved the guide's documentation on using serde to serialize complex data to JsValue and deserialize JsValues back into complex data.
- Static methods are now always bound to their JS class, as is required for Promise's static methods.

The newly released wasm-bindgen-futures

wasm-bindgen-futures is a crate that bridges the gap between a Rust Future and a JavaScript Promise. It provides two conversions: from a JavaScript Promise into a Rust Future, and from a Rust Future into a JavaScript Promise. The two main interfaces in this crate are:

JsFuture: constructed from a Promise, it can then be used as a Future<Item = JsValue, Error = JsValue>. This Rust future will resolve or reject with the value coming out of the Promise.

future_to_promise: converts a Rust Future<Item = JsValue, Error = JsValue> into a JavaScript Promise. The future's result will translate to either a rejected or resolved Promise in JavaScript.

These two items provide enough of a bridge to interoperate the two systems and make sure that Rust and JavaScript can work together on asynchronous and I/O work.

To know more about wasm-bindgen 0.2.16 and wasm-bindgen-futures, visit the project's GitHub page.

Warp: Rust's new web framework for implementing WAI (Web Application Interface)
Rust 1.28 is here with global allocators, nonZero types and more
Say hello to Sequoia: a new Rust based OpenPGP library to secure your apps
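To make the changelog above concrete, here is a minimal sketch (not from the announcement; the function and its parameters are hypothetical) of an exported Rust function using wasm-bindgen, including the newly supported option-wrapped scalar parameter:

use wasm_bindgen::prelude::*;

// Exported to JavaScript; `times` may be a number or undefined on the JS side,
// which maps onto Option<u32> thanks to the 0.2.16 changes.
#[wasm_bindgen]
pub fn greet(name: &str, times: Option<u32>) -> String {
    let n = times.unwrap_or(1) as usize;
    std::iter::repeat(format!("Hello, {}!", name))
        .take(n)
        .collect::<Vec<_>>()
        .join(" ")
}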


Microsoft releases TypeScript 3.4 with an update for faster subsequent builds, and more

Bhagyashree R
01 Apr 2019
3 min read
Last week, Daniel Rosenwasser, Program Manager for TypeScript, announced the release of TypeScript 3.4. This release comes with faster subsequent builds via the --incremental flag, higher order type inference from generic functions, type-checking for globalThis, and more. Following are some of the updates in TypeScript 3.4:

Faster subsequent builds

TypeScript 3.4 comes with the --incremental flag, which records the project graph from the last compilation. When TypeScript is invoked with --incremental set to true, it will check for the least costly way to type-check and emit changes to a project by referring to the saved project graph.

Higher order type inference from generic functions

This release comes with various improvements around inference, one of the main ones being functions inferring types from other generic functions. During type argument inference, TypeScript will now propagate the type parameters from generic function arguments onto the resulting function type.

Updates to ReadonlyArray and readonly tuples

Using read-only array-like types is now much easier. This release introduces a new syntax for ReadonlyArray that uses a readonly modifier for array types:

function foo(arr: readonly string[]) {
    arr.slice();        // okay
    arr.push("hello!"); // error!
}

TypeScript 3.4 also adds support for readonly tuples. To make a tuple readonly, you just prefix it with the readonly keyword.

Type-checking for globalThis

This release supports type-checking ECMAScript's new globalThis, a global variable that refers to the global scope. The globalThis variable provides a standard way of accessing the global scope that can be used across different environments.

Breaking changes

As this release introduces a few changes to inference, it comes with some breaking changes:

- TypeScript now uses types that flow into function calls to contextually type function arguments.
- The type of top-level 'this' is now typed as 'typeof globalThis' instead of 'any'. As a result, users might get some errors for accessing unknown values on 'this' under 'noImplicitAny'.
- TypeScript 3.4 correctly measures the variance of types declared with 'interface' in all cases. This introduces an observable breaking change for interfaces that used a type parameter only in keyof.

To see the full list of updates in TypeScript 3.4, check out the official announcement.

An introduction to TypeScript types for ASP.NET core [Tutorial]
Typescript 3.3 is finally released!
Yarn releases a roadmap for Yarn v2 and beyond; moves from Flow to Typescript

Rust 1.30 releases with procedural macros and improvements to the module system

Sugandha Lahoti
26 Oct 2018
3 min read
Yesterday, the Rust team released a new version of the Rust systems programming language, known for its safety, speed, and concurrency. Rust 1.30 comes with procedural macros, module system improvements, and more.

It has been an incredibly successful year for the Rust programming language in terms of popularity. It jumped from being the 46th most popular language on GitHub last year to 18th this year. The 2018 RedMonk Programming Language Rankings marked Rust's entry into their Top 25 list, and it topped the list of most loved programming languages in the 2018 Stack Overflow developer survey for the third year in a row. Still not satisfied? Here are 9 reasons why Rust programmers love Rust.

Key improvements in Rust 1.30

Procedural macros are now available

Procedural macros allow for more powerful code generation. Rust 1.30 introduces two kinds of advanced macros: "attribute-like procedural macros" and "function-like procedural macros."

Attribute-like macros are similar to custom derive macros, but instead of generating code only for the #[derive] attribute, they allow you to create new, custom attributes of your own. They are also more flexible: derive only works for structs and enums, but attributes can go in other places, such as on functions. Function-like macros define macros that look like function calls. Developers can now also bring macros into scope with the use keyword.

Updates to the module system

The module system has received significant improvements to make it more straightforward and easier to use. In addition to bringing macros into scope, the use keyword has two other changes.

First, external crates are now in the prelude. Previously, moving a function into a submodule could break some of its code. Now, when resolving a path, the compiler checks whether the first part of the path is an extern crate, and if it is, it uses it regardless of where you are in the module hierarchy.

Second, use supports bringing items into scope with paths starting with crate. Previously, paths specified after use would always start at the crate root, but paths referring to items directly would start at the local path, meaning the behavior of paths was inconsistent. Now, the crate keyword at the start of a path indicates that the path starts at the crate root. Combined, these changes lead to a more straightforward understanding of how paths resolve.

Other changes

- Developers can now use keywords as identifiers using the raw identifiers syntax (r#), e.g. let r#for = true;
- Using anonymous parameters in traits is now deprecated with a warning and will be a hard error in the 2018 edition.
- Developers can now match visibility keywords (e.g. pub, pub(crate)) in macros using the vis specifier.
- Non-macro attributes now allow all forms of literals, not just strings. Previously, you would write #[attr("true")]; now you can write #[attr(true)].
- Developers can now specify a function to handle a panic in the Rust runtime with the #[panic_handler] attribute.

These are just a select few updates. For more information and code examples, go through the Rust Blog.

3 ways to break your Rust code into modules
Rust as a Game Programming Language: Is it any good?
Rust 2018 RC1 now released with Raw identifiers, better path clarity, and other changes
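As an illustration of the path-clarity and raw-identifier changes described above, here is a minimal sketch (the module names are hypothetical, not from the article):

mod network {
    pub mod server {
        pub fn start() {
            println!("server started");
        }
    }
}

mod app {
    // Paths after `use` can now start with `crate`, making it explicit
    // that resolution begins at the crate root.
    use crate::network::server;

    pub fn run() {
        server::start();
    }
}

fn main() {
    app::run();
    // Raw identifiers let keywords be used as ordinary names.
    let r#for = true;
    println!("{}", r#for);
}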


Rust 2018 RC1 now released with Raw identifiers, better path clarity, and other changes

Prasad Ramesh
21 Sep 2018
3 min read
Rust 2018 RC1 was released yesterday. This release candidate for the new edition of the Rust programming language contains features like raw identifiers, better path clarity, and other additions. Some of the changes in Rust 2018 RC1 include:

Raw identifiers

Like many programming languages, Rust has the concept of "keywords". These identifiers cannot be used as variable names, function names, and so on. With Rust 2018 RC1, raw identifiers let you use keywords in places where they are not normally allowed. The newly confirmed keywords in Rust 2018 RC1 are async, await, and try.

Better path clarity

One of the hardest things for people new to Rust is the module system. While the rules defining it are simple and consistent, their consequences can appear inconsistent and hard to understand. Rust 2018 RC1 introduces a few new module system features to simplify it and give a better picture of what is going on:

- extern crate is no longer needed.
- The crate keyword refers to the current crate.
- Absolute paths begin with a crate name, where again the keyword crate refers to the current crate.
- A foo.rs file and a foo/ subdirectory may coexist; mod.rs is no longer required when placing submodules in a subdirectory.

Anonymous trait parameters are deprecated

Parameters in trait method declarations are no longer allowed to be anonymous. In Rust 2015, the following was allowed:

trait Foo {
    fn foo(&self, u8);
}

In Rust 2018 RC1, all parameters require an argument name (even if it's just _):

trait Foo {
    fn foo(&self, baz: u8);
}

Non-lexical lifetimes

The borrow checker has been enhanced to accept more code via a mechanism called "non-lexical lifetimes". Previously, the code below would have produced an error, but now it compiles just fine:

fn main() {
    let mut x = 5;
    let y = &x;
    let z = &mut x;
}

Under the old rules, lifetimes followed lexical scope: the borrow from y was considered to be held until y went out of scope at the end of main, even though y is never used again. The code is fine, but the older borrow checker was not able to handle it.

Installation

To try Rust 2018 RC1, you need to install the Rust 1.30 beta toolchain. This beta is a little different from the normal beta, states the Rust Blog.

> rustup install beta
> rustc +beta --version
rustc 1.30.0-beta.2 (7a0062e46 2018-09-19)

The feature flags for Rust 2018 RC1 are turned on and can be used to report issues. These were only a select few changes; others in this beta include lifetime elision in impl, T: 'a inference in structs, macro changes, and more. For more information and the complete list of updates, read the Rust edition guide, where the new features are marked as beta.

Rust 1.29 is out with improvements to its package manager, Cargo
Deno, an attempt to fix Node.js flaws, is rewritten in Rust
Creating Macros in Rust [Tutorial]
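As a small illustration of the raw identifiers described above (a sketch, not from the article), the new keywords can still be used as names with the r# prefix:

// `try` and `async` become keywords in the 2018 edition, but the r# prefix
// keeps them usable as ordinary identifiers.
fn r#try(value: u32) -> u32 {
    value + 1
}

fn main() {
    let r#async = r#try(41);
    println!("{}", r#async); // prints 42
}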


JDK 11 First Release Candidate (RC) is out with ZGC, Epsilon and more!

Bhagyashree R
27 Aug 2018
3 min read
On Friday, Oracle released the first JDK 11 Release Candidate. It includes features such as nest-based access control, dynamic class-file constants, improved AArch64 intrinsics, and more. General availability of the final JDK 11 release is scheduled for the 25th of next month.

Every six months, in June and December, the community initiates the release cycle for the next JDK feature release. The work then proceeds in three phases: Rampdown Phase One (RDP 1), Rampdown Phase Two (RDP 2), and the Release-Candidate Phase (RC). The durations of the phases for JDK 11 are four weeks for RDP 1, three weeks for RDP 2, and five weeks for RC.

What is new in JDK 11 RC 1?

Nest-based access control: Nests are introduced to allow classes that are logically part of the same code entity, but are compiled to distinct class files, to access each other's private members.

Dynamic class-file constants: The Java class-file format is extended to support a new constant-pool form named CONSTANT_Dynamic. Loading such a constant delegates creation to a bootstrap method, just as linking an invokedynamic call site delegates linkage to a bootstrap method.

Improvements in AArch64 intrinsics: Intrinsics improve performance by using CPU architecture-specific assembly code for a given method instead of generic Java code. The existing string and array intrinsics are improved, and new intrinsics are implemented for the java.lang.Math package on AArch64 processors: sin (sine), cos (cosine), and log (logarithm).

Epsilon: A new garbage collector named Epsilon is introduced that handles memory allocation but does not implement any actual memory reclamation mechanism. The JVM will shut down once the available Java heap is exhausted.

Java EE and CORBA modules removed: These modules are removed from the Java SE Platform and the JDK. They were deprecated in Java SE 9, indicating their removal in a future release.

HTTP Client (Standard): The HTTP Client API, introduced as an incubating API in JDK 9 and JDK 10, is standardized. The API received a number of rounds of feedback that resulted in significant improvements. The module name and the package name of the standard API will be java.net.http.

Local-variable syntax for lambda parameters: The use of var is allowed when declaring the formal parameters of implicitly typed lambda expressions. The expression (var x, var y) -> x.process(y) is now equivalent to (x, y) -> x.process(y).

Unicode 10: The existing platform APIs will support version 10.0 of the Unicode Standard in the following classes: Character and String in java.lang, NumericShaper in java.awt.font, and Bidi, BreakIterator, and Normalizer in java.text.

Flight Recorder: Flight Recorder, a low-overhead data collection framework, is provided for troubleshooting Java applications and the HotSpot JVM.

ChaCha20 and Poly1305 cryptographic algorithms: An implementation of the ChaCha20 and ChaCha20-Poly1305 ciphers, as specified in RFC 7539, is added. ChaCha20 is a relatively new stream cipher that can replace the older, insecure RC4 stream cipher.

ZGC (Experimental): The Z Garbage Collector, also known as ZGC, is a scalable low-latency garbage collector. ZGC is a concurrent, single-generation, region-based, NUMA-aware, compacting collector.

To know more about these updates and improvements in detail, head over to the official OpenJDK website.

JavaFX 11 to release soon, announces the Gluon team
State of OpenJDK: Past, Present and Future with Oracle
Mark Reinhold on the evolution of Java platform and OpenJDK


Python 3.8.0 alpha 1 is now available for testing

Prasad Ramesh
05 Feb 2019
2 min read
Yesterday, the first alpha of Python 3.8.0 was announced in a Python blog post. The most important change in this version is the addition of assignment expressions. This is the first alpha; three more are yet to be released. Keep in mind that the features are raw and not meant for production use.

Some changes in Python 3.8.0 alpha 1:

Security changes

- When spawning child processes, the command line option -I to run Python in isolated mode is now also copied by the multiprocessing and distutils modules.
- OpenSSL is updated to OpenSSL 1.1.0i for Windows builds.
- The thread safety of error handling is fixed in _ssl.
- A small fix prevents a buffer overrun in os.symlink on Windows.

Changes in core and builtins

- PEP 572: introduces a new way to assign values to variables within an expression using the NAME := expr notation.
- Parentheses are made optional for named expressions in a while statement.
- Python initialization is reorganized to get exceptions and sys.stderr early.
- A small memory leak is fixed in pymain_parse_cmdline_impl.
- The syntax error messages for unbalanced parentheses in f-strings are better.
- End line and end column position information are added to the Python AST nodes.
- During Python initialization, the Python filesystem encoding is read faster.

Library changes

- A shared memory submodule is added to multiprocessing in order to avoid serialization between processes.
- The KeyError exception when using enums and compile is now fixed.
- help() on metaclasses is fixed.
- raise(signum) is now exposed as raise_signal.
- Building enums by value is now faster.

These were a select few changes in Python 3.8.0 alpha 1. For a complete list of changes, you may go through the changelog.

Introducing RustPython, a Python 3 interpreter written in Rust
EuroPython Society announces the 'Guido van Rossum Core Developer Grant' program to honor Python core developers
pandas will drop support for Python 2 this month with pandas 0.24

Swift is now available on Fedora 28

Melisha Dsouza
10 Oct 2018
2 min read
Last week, the Fedora team announced that Swift will be available in Fedora 28. Swift, Apple's programming language, is built with a modern approach to safety, and its addition to Fedora complements Linux's focus on the security of its kernel.

Why did the team opt for Swift?

Swift's applications range from systems programming to desktop applications and all the way up to cloud services. The language has always focused on being fast and safe: memory management is automatic, and arrays and integers are checked for overflow. Swift also has a built-in mechanism for error handling, and it is an efficient server-side programming language that iterates quickly over collections. Additional features include:

- Closures with function pointers
- Tuples and multiple return values
- Generics
- Structs supporting methods, extensions, and protocols
- Functional programming patterns, like map and filter
- do, guard, defer, and repeat keywords that provide advanced control flow

Swift is available in Fedora under the package name swift-lang. The flexible capabilities of Fedora coupled with the advantages offered by Swift make it an excellent choice for developers. To know more, head over to Fedora Magazine.

ABI stability may finally come in Swift 5.0
Swift 4.2 releases with language, library and package manager updates!
Fedora 29 beta brings Modularity, GNOME 3.30 support and other changes


Erlang turns 20: Tracing the journey from Ericsson to Whatsapp

Amrata Joshi
10 Dec 2018
3 min read
Just two days back, Erlang, a functional programming language, turned twenty. Erlang has been one of the most popular open source languages, with compelling features like concurrent processes, memory management, scheduling, distribution, and networking. WhatsApp, the most popular messaging platform, runs a server that is almost completely implemented in Erlang.

Twenty years back, on 8th December 1998, Ericsson released its development environment, Erlang/OTP (Open Telecom Platform), as open source. It was used to make building telecommunications products easier, with functionality like speed, distribution, and concurrency. It also supports a number of processors and operating systems and can be easily integrated with different development languages. Erlang powers Ericsson's GPRS, 3G, and 4G/LTE systems, as well as internet and mobile data networks.

How did Erlang become open source?

When Håkan Millroth, head of the Ericsson Software Architecture Lab, suggested his team try 'open source', Jane Walerud, an entrepreneur, agreed and convinced the Ericsson management team to release the source code for the Erlang VM. Erlang was released without any publicity, marketing buzz, or media coverage: Ericsson just sent an email to Erlang's mailing list, and an announcement was posted on Slashdot.

During the dot-com era, when extreme growth in the usage and adoption of the Internet was observed, Erlang/OTP was used to create an XMPP-based instant messaging server, ejabberd, developed by Alexey Shchepin. He chose Erlang over all the other languages as the most suitable language for implementing a Jabber server. ejabberd 1.0 was released in December 2005, and it formed a base for many platforms, including WhatsApp. ejabberd showed a 280% increase in throughput when it was compiled with the latest version of Erlang.

In May 2005, a version of the BEAM VM (also known as the Erlang VM) was released that proved Erlang's concurrency and programming models are ideal for multi-core architectures. In May 2006, Erlang was also used to program RabbitMQ, an implementation of the Advanced Message Queuing Protocol (AMQP). Since then, Erlang has become the language of choice for many messaging solutions and is now the backbone of thousands of systems.

In 2007, 'Programming Erlang' by Joe Armstrong was published by the Pragmatic Programmers, and in June 2008 the first paper copy of 'Erlang Programming' became publicly available. In 2011, Elixir, a functional and concurrent programming language that runs on the Erlang VM, was released. In August 2015, Phoenix 1.0, a framework for web applications, was released. Phoenix 1.0 uses the Erlang VM's capabilities to create the same effect for Elixir that Rails did for Ruby, making Elixir popular.

Read more about this news on Erlang's blog post.

Elixir 1.7, the programming language for Erlang virtual machine, releases
Phoenix 1.4.0 is out with 'Presence javascript API', HTTP2 support, and more!
Amazon re:Invent Day 3: Lambda Layers, Lambda Runtime API and other exciting announcements!


Rust 1.33.0 released with improvements to Const fn, pinning, and more!

Amrata Joshi
01 Mar 2019
2 min read
Yesterday, the Rust team announced the stable release of Rust 1.33.0, a programming language that helps in building reliable and efficient software. This release comes with significant improvements to const fn and the stabilization of a new concept: "pinning."

What's new in Rust 1.33.0?

Const fn

It is now possible to use irrefutable destructuring patterns in const fn (e.g. const fn foo((x, y): (u8, u8)) { ... }). This release also allows let bindings (e.g. let x = 1;) and mutable let bindings (e.g. let mut x = 1;) inside const fn.

Pinning

This release introduces a new concept for Rust programs called pinning. Pinning ensures that the pointee of any pointer type P has a stable location in memory: it cannot be moved elsewhere, and its memory cannot be deallocated until it gets dropped. The pointee is then said to be "pinned".

Compiler

- It is now possible to set a linker flavor for rustc with the -Clinker-flavor command line argument.
- The minimum required LLVM version is now 6.0.
- This release adds support for the PowerPC64 architecture on FreeBSD and for the x86_64-unknown-uefi target.

Libraries

- The overflowing_{add, sub, mul, shl, shr} methods are now const functions for all numeric types.
- The is_positive and is_negative methods are now const functions for all signed numeric types.
- The get method for all NonZero types is now const.

Language

- It is now possible to use the cfg(target_vendor) attribute, e.g. #[cfg(target_vendor="apple")] fn main() { println!("Hello Apple!"); }
- Irrefutable if let and while let patterns are now allowed.
- Multiple attributes can now be specified in a cfg_attr attribute.

One user commented on Hacker News, "This release also enables Windows binaries to run in Windows nanoserver containers." Another comment reads, "It is nice to see the const fn improvements!"

To know more about this news, check out Rust's official post.

Introducing RustPython, a Python 3 interpreter written in Rust
How Deliveroo migrated from Ruby to Rust without breaking production
Rust 1.32 released with a print debugger and other changes
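A minimal sketch (not from the announcement) of the const fn improvements described above, combining a destructured tuple parameter with a mutable let binding so the result can be computed at compile time:

// Irrefutable destructuring of the (width, height) tuple and a mutable
// let binding are both allowed in const fn as of Rust 1.33.
const fn half_perimeter((width, height): (u32, u32)) -> u32 {
    let mut total = width;
    total += height;
    total
}

const ROOM: u32 = half_perimeter((4, 3));

fn main() {
    println!("{}", ROOM); // prints 7
}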

Facebook releases Skiplang, a general purpose programming language

Prasad Ramesh
01 Oct 2018
2 min read
Facebook released Skip, also known as Skiplang, last week, a language it has been developing since 2015. It is a general-purpose programming language that provides caching with features like reactive invalidation, safe parallelism, and efficient garbage collection.

Skiplang features

Skiplang's primary goal is to explore language and runtime support for correct, efficient memoization-based caching and cache invalidation. It achieves this via a static type system that carefully tracks mutability. The language is statically typed and compiled ahead-of-time using LLVM to produce highly optimized executables.

Caching with reactive invalidation

The main new language feature in Skiplang is its precise tracking of side effects. This includes both the mutability of values and distinguishing between non-deterministic data sources, including data sources that can provide reactive invalidations, which tell Skiplang when data has changed.

Safe parallelism

Skiplang supports two complementary forms of concurrent programming. Both forms avoid the usual thread safety issues thanks to the language's tracking of side effects. The language supports ergonomic asynchronous computation with async/await syntax. Asynchronous computations cannot refer to mutable state and are therefore safe to execute in parallel, allowing independent async continuations to proceed in parallel. Skiplang also has APIs for direct parallel computation; again, its tracking of side effects prevents thread safety issues like shared access to mutable state.

An efficient and predictable garbage collector

Skiplang's approach to memory management combines aspects of typical garbage collectors with more straightforward linear allocation schemes. The garbage collector only has to scan the memory that is reachable from the root of a computation. This allows developers to write code with predictable garbage collector overhead.

A hybrid functional and object-oriented language

Skiplang is a mix of ideas from functional and object-oriented styles, carefully integrated to form a cohesive language. Like other functional languages, it is expression-oriented and supports features like abstract data types, pattern matching, easy lambdas, higher-order functions, and (optionally) enforcing pure, referentially-transparent API boundaries. Like OOP languages, it supports classes with inheritance, mutable objects, loops, and early returns. In addition, Skiplang incorporates ideas from "systems" languages, supporting low-overhead abstractions and compact memory layout of objects.

Learn more about the language from the Skiplang website and its GitHub repository.

JDK 12 is all set for public release in March 2019
Python comes third in TIOBE popularity index for the first time
Michael Barr releases embedded C coding standards


Exclusivity enforcement is now complete in Swift 5

Prasad Ramesh
06 Feb 2019
2 min read
Yesterday, Apple discussed exclusivity enforcement in Swift 5 in a blog post. This is not some exclusive feature or patent; the idea concerns how variables in a Swift program access memory. Swift is the programming language used for developing Apple apps.

What is exclusivity enforcement?

The Swift 5 release enables runtime checks for "Exclusive Access to Memory", further backing Swift's claim to being a 'safe language'. For memory safety, Swift requires exclusive access to a variable in order to modify it: while the variable is being modified, it cannot be accessed through any other name.

A programmer's intention in the case of an exclusivity violation is often ambiguous in Swift. To protect against violations and enable the safety features that depend on them, exclusivity enforcement was introduced in Swift 4, with both compile-time and run-time enforcement, the latter only in debug builds. Swift 5 patches some of the remaining holes by changing the language model, and runtime exclusivity enforcement is now enabled by default in release builds. This can impact Swift projects in two ways:

- A violation of Swift's exclusivity rules causes a runtime trap.
- The overhead of the memory access checks can degrade performance slightly.

Why exclusivity enforcement?

- The enforcement exists mainly to guarantee memory safety in Swift. It eliminates dangerous interactions involving mutable state in Swift programs.
- Enforcement removes unspecified-behavior rules from Swift.
- It is necessary for maintaining ABI stability.
- In addition to protecting memory safety, the enforcement helps optimize performance.
- The exclusivity rules give programmers the control needed for move-only types.

Even though memory problems of this kind are rare, addressing them early on improves Swift. A comment on Hacker News says: "The benefit being that you only have to deal with this issue rarely, rather than all the time with manual memory management."

Apple is patenting Swift features like optional chaining
Swift 5 for Xcode 10.2 beta is here with stable ABI
Swift is now available on Fedora 28
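The "exclusive access during modification" rule described above is similar to what Rust enforces at compile time. As a rough cross-language illustration (a sketch, not from Apple's post), the code below marks where Rust would reject a second access while a mutable borrow is live:

fn main() {
    let mut balance = 100;
    {
        let exclusive = &mut balance;  // exclusive (mutable) access begins
        *exclusive += 50;
        // println!("{}", balance);    // reading through another name here would be rejected
    }                                  // exclusive access ends
    println!("{}", balance);           // fine again: prints 150
}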