
Tech News - Languages

202 Articles

Oracle reveals issues in Object Serialization. Plans to drop it from core Java.

Pavan Ramchandani
31 May 2018
2 min read
The Java team is planning to remove the object serialization feature from the core Java language, owing to security issues with the object serialization API.

What is Java's object serialization feature?

The Serialization API converts a message in a data communication system into a sequence of bytes that can be processed further. The sequence of bytes, produced from an object, is written to a file; that file can later be read and deserialized to recreate the message in memory.

Why is Oracle calling it a mistake?

Approximately one-third of all vulnerabilities in Java systems involve serialization. Mark Reinhold, Chief Architect of the Java Platform Group at Oracle, mentioned that Oracle has been receiving reports revealing security weaknesses in Java serialization. They found that many application servers receive serialization data streams on unprotected ports, and attackers can exploit the easy availability of serialized objects, deserializing them to recreate objects of their choosing. Reinhold went as far as calling serialization a "horrible mistake" made in 1997. To counteract these vulnerabilities, Oracle has added a filtering capability to Java as a defense mechanism for applications that use serialization and receive untrusted data streams. Oracle also mentioned its long-term plan to remove serialization from Java under Project Amber, focused on streamlining the release cycle and enhancing the productivity of the Java language, in the Java 11 release.

Looking ahead

To continue supporting serialization in the Java language, Oracle is planning to add a new serialization feature that enables object serialization in a safe way. A framework will also be developed to support object graphs, with JSON or XML support, to provide serialization of any record.

Why Oracle is losing the Database Race
What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018
What can you expect from Java 11


Ruby 2.6.0 released with a new JIT compiler

Prasad Ramesh
26 Dec 2018
2 min read
Ruby 2.6.0 was released yesterday and brings a new JIT compiler. The new version also adds the RubyVM::AbstractSyntaxTree module.

The new JIT compiler in Ruby 2.6.0

Ruby 2.6.0 comes with an early implementation of a Just-In-Time (JIT) compiler, introduced to improve the performance of Ruby programs. Traditional JIT compilers operate in-process, but Ruby's JIT compiler writes C code to disk and spawns a common C compiler process to generate native code. To enable the JIT compiler, you just need to specify --jit either on the command line or in the $RUBYOPT environment variable. Using --jit-verbose=1 will make the JIT compiler print additional information. The JIT compiler works only when Ruby is built with GCC, Clang, or Microsoft Visual C++, and one of these compilers needs to be available at runtime. On Optcarrot, a CPU-intensive benchmark, Ruby 2.6 shows 1.7x faster performance compared to Ruby 2.5. The JIT compiler, however, is still experimental, and workloads like Rails might not benefit from it for now.

The RubyVM::AbstractSyntaxTree module

Ruby 2.6 brings the RubyVM::AbstractSyntaxTree module, though the team does not guarantee any future compatibility for it. The module has a parse method, which parses the given string as Ruby code and returns the Abstract Syntax Tree (AST) nodes in the code. The parse_file method opens and parses the given file as Ruby code, likewise returning AST nodes. A RubyVM::AbstractSyntaxTree::Node class, another experimental feature, is also introduced in Ruby 2.6.0. Developers can get the source location and children nodes from Node objects. To know more about the other new features and improvements in detail, visit the Ruby 2.6.0 release notes.

8 programming languages to learn in 2019
Clojure 1.10 released with Prepl, improved error reporting and Java compatibility
NumPy drops Python 2 support. Now you need Python 3.5 or later.


Rust 1.37.0 releases with support for profile-guided optimization, built-in cargo vendor, and more

Bhagyashree R
16 Aug 2019
4 min read
After releasing version 1.36.0 last month, the team behind Rust announced the release of Rust 1.37.0 yesterday. The highlights of this version include support for referring to enum variants via type aliases, a built-in cargo vendor command, unnamed const items, profile-guided optimization, and more.

Key updates in Rust 1.37.0

Referring to enum variants through type aliases

Starting with this release, you can refer to enum variants through type aliases in expression and pattern contexts. Since Self behaves like a type alias in implementations, you can also refer to enum variants with Self::Variant.

Built-in Cargo support for vendored dependencies

Until now, the cargo vendor command was available to developers as a separate crate. Starting with Rust 1.37.0, it is integrated directly into Cargo, the Rust package manager. This Cargo subcommand fetches all the crates.io and git dependencies for a project into the vendor/ directory. It also shows the configuration necessary to use the vendored code during builds.

Using unnamed const items for macros

Rust 1.37.0 allows you to create unnamed const items: instead of giving a constant an explicit name, you can name it '_'. This update enables you to easily create ergonomic and reusable declarative and procedural macros for static analysis purposes.

Support for profile-guided optimization

Rust's compiler, rustc, now supports profile-guided optimization (PGO) through the -C profile-generate and -C profile-use flags. PGO allows the compiler to optimize your code based on feedback from real workloads. It optimizes a program in the following two steps:

1. The program is first built with instrumentation inserted by the compiler, by passing the -C profile-generate flag to rustc. The instrumented program is then run on sample data, and the profiling data is written to a file.
2. The program is built again, this time feeding the collected profiling data into rustc with the -C profile-use flag. This build uses the collected data to let the compiler make better decisions about code placement, inlining, and other optimizations.

Choosing a default binary in Cargo projects

The cargo run command runs a binary or example of the local package, letting you quickly test CLI applications. It often happens that multiple binaries are present in the same package; in such cases, developers need to explicitly name the binary they want to run with the --bin flag. This makes cargo run less ergonomic, especially when one binary is called more often than the others. To solve this, Rust 1.37.0 introduces a new key in Cargo.toml called default-run. Declaring this key in the [package] section makes cargo run default to the chosen binary when the --bin flag is not passed.

Developers have already started testing out this new release. A developer who used profile-guided optimization shared his experience on Hacker News: "The effect is very dependent on program structure and actual code running, but for a suitable application it's reasonable to expect anything from 5-15%, and sometimes much more (see e.g. Firefox reporting 18% here)." Others speculated that async/await will come in Rust 1.39: "Seems like async/await is going to slip into Rust 1.39 instead." Another user said, "Congrats! Like many I was looking forward to async/await in this release but I'm happy they've taken some extra time to work through any existing issues before releasing it."

Check out the official announcement by the Rust team to know more in detail.
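Two of the 1.37.0 changes above, referring to enum variants through type aliases (including Self::Variant) and unnamed const items, can be sketched in a few lines. The enum and alias names below are illustrative, not taken from the release notes:

```rust
// Minimal sketch of two Rust 1.37.0 features (illustrative names).

#[derive(Debug, PartialEq)]
enum BuildStatus {
    Passed(u32),
    Failed(u32),
}

// A type alias can now be used to name enum variants.
type Status = BuildStatus;

// An unnamed const item: handy when a macro needs to emit many
// constants without inventing unique identifiers.
const _: &str = "registered for static analysis";

impl BuildStatus {
    fn passed(code: u32) -> Self {
        // Inside an impl, Self behaves like a type alias too.
        Self::Passed(code)
    }
}

fn main() {
    // Construct a variant through the alias rather than the enum name.
    let s = Status::Passed(0);
    assert_eq!(s, BuildStatus::passed(0));

    // Pattern-match through the alias as well.
    match s {
        Status::Passed(code) => println!("passed with code {}", code),
        Status::Failed(code) => println!("failed with code {}", code),
    }
}
```

Before 1.37, the match arms would have had to spell out BuildStatus::Passed even though the value was bound via the Status alias.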
Rust 1.36.0 releases with a stabilized 'Future' trait, NLL for Rust 2015, and more
Introducing Vector, a high-performance data router, written in Rust
"Why was Rust chosen for Libra?", US Congressman questions Facebook on Libra security design choices


A Cargo vulnerability in Rust 1.25 and prior makes it ignore the package key and download a wrong dependency

Bhagyashree R
01 Oct 2019
3 min read
Yesterday, the Rust team disclosed a Cargo vulnerability that causes older versions of Cargo to ignore the new package rename feature and download the wrong dependency. The vulnerability, tracked as CVE-2019-16760, affects Rust 1.0 through Rust 1.25. It was first reported to the Rust team by Elichai Turkel: https://twitter.com/Elichai2/status/1178681807170101248

Details of the Cargo vulnerability

Rust 1.31 introduced the package configuration key for renaming dependencies in the Cargo.toml manifest file. In Rust 1.25 and prior, Cargo ignores the key and may end up downloading the wrong dependency. This affects not only manifests written locally but also those published to crates.io. "If you published a crate, for example, that depends on `serde1` to crates.io then users who depend on you may also be vulnerable if they use Rust 1.25.0 and prior. Rust 1.0.0 through Rust 1.25.0 is affected by this advisory because Cargo will ignore the `package` key in manifests," the team wrote. Rust 1.26 through Rust 1.30 are not affected: those versions throw an error because the package key was still unstable. Rust 1.31 and later are not affected because Cargo understands the package key.

Mitigation steps for this Cargo vulnerability

The team has already audited the existing crates published to crates.io that use the package key and has not detected any exploitation of this vulnerability. They have nevertheless recommended that users of the affected versions update their compiler to 1.26 or later. The team further wrote, "We will not be issuing a patch release for Rust versions prior to 1.26.0. Users of Rust 1.19.0 to Rust 1.25.0 can instead apply the provided patches to mitigate the issue."

The news sparked a discussion on Reddit about how this could have been avoided. One user commented, "What do we learn from this? Always throw an error if you encounter an unknown key inside a known configuration object." Another user suggested, "It would be better to have the config contain a 'minimum allowed cargo version', and if you want to use new features you have to bump this version number to at least the version which added the feature. Old versions of cargo can detect the version number and automatically refuse to compile the crate if the minimum version is newer than the cargo version."

Read the official announcement by the Rust team to know more about this vulnerability in detail.

Rust 1.38 releases with pipelined compilation for better parallelism while building a multi-crate project
Mozilla introduces Neqo, Rust implementation for QUIC, new http protocol
Introducing Weld, a runtime written in Rust and LLVM for cross-library optimizations
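For illustration, the package rename feature at the center of the advisory looks like this in a manifest; the crate names follow the team's serde1 example and are otherwise hypothetical:

```toml
# Illustrative Cargo.toml fragment using the `package` rename key
# (stabilized in Rust 1.31). The crate is used locally under the
# name `serde1`, while the real dependency fetched from crates.io
# is `serde`. A Cargo from Rust 1.25.0 or earlier ignores
# `package = "serde"` and downloads a crate literally named
# `serde1` instead -- the wrong dependency.
[dependencies]
serde1 = { version = "1.0", package = "serde" }
```

Rust 1.31 and later resolve this correctly, while 1.26 through 1.30 reject the manifest outright because the key was unstable in those versions.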


Ian Lance Taylor, Golang team member, adds another perspective to Go being Google's language

Sugandha Lahoti
28 May 2019
6 min read
Earlier this month, the Hacker News community got into a heated debate on whether "Go is Google's language, and not the community's". The thread was started by Chris Siebenmann, who works at the Department of Computer Science, University of Toronto. His blog post reads, "Go has community contributions but it is not a community project. It is Google's project." In response to his statements, last Thursday, Ian Lance Taylor, a Googler and member of the Golang team, added his own views in a Google Groups mailing list; they don't necessarily contradict Chris's blog post, but they add some nuance.

Ian begins with a disclaimer: "I'm speaking exclusively for myself here, not for Google nor for the Go team." He then reminds us that Go is an open source language, considering that all the source code, including all the infrastructure support, is freely available and may be reused and changed by anyone. Go gives all developers the freedom to fork and take an existing project in a new direction. He further explains that there are 59 Googlers and 51 non-Googlers on the committers list, the set of people who can commit changes to the project. He says, "so while Google is the majority, it's not an overwhelming one."

Golang has a small core committee which makes decisions

Responding to Chris's opinion that Golang is run by only a small set of people, which prevents it from becoming the community's language, he says, "All successful languages have a small set of people who make the final decisions. Successful languages pay attention to what people want, but to change the language according to what most people want is, I believe, a recipe for chaos and incoherence. I believe that every successful language must have a coherent vision that is shared by a relatively small group of people." He then adds, "Since Go is a successful language, and hopes to remain successful, it too must be open to community input but must have a small number of people who make final decisions about how the language will change over time."

This makes sense. The core team's full-time job is to take care of the language instead of taking errant decisions based on community backlash. Google will not make or block a change in a way that kills an entire project. But this does not mean they should sit idly, ignoring the community response. Ideally, the more a project genuinely belongs to its community, the more it will reflect what the community wants and needs.

Ian defends Google, the company, as a member of the Golang team, saying it does more work with Go at a higher level, supporting efforts like the Go Cloud Development Kit and support for Go in Google Cloud projects like Kubernetes. He also assures that executives, and upper management in general, have never made any attempt to affect how the Go language, tools, and standard library are developed: "Google, apart from the core Go team, does not make decisions about the language."

What if Golang is killed?

Ian is unsure what will happen if someone on the core Go team decides to leave Google but wants to continue working on Go. He says, "many people who want to work on Go full time wind up being hired by Google, so it would not be particularly surprising if the core Go team continues to be primarily or exclusively Google employees." This reaffirms our original argument about Google's propensity to kill its own products. While Google's history shows that many of its dead products were actually an important step towards something better and more successful, why and how much of that logic is directly relevant to an open source project is something worth thinking about. He further adds, "It's also possible that someday it will become appropriate to create some sort of separate Go Foundation to manage the language." But he did not specify what such a foundation would look like, who its members would be, or what its governance model would be. Will Google leave it to the community to figure out the governance model suddenly, by pulling the original authors off onto some other exciting new project? Or would they let the authors work on Golang only in their spare time, at home or at weekends?

Another common argument concerns what Google has invested to keep Go thriving, and whether the so-called Go Foundation would be able to sustain a healthy development cycle with low monetary investment (although GitHub Sponsors can, maybe, change that). A comment on Hacker News reads, "Like it or not, Google is probably paying around $10 million a year to keep senior full-time developers around that want to work on the language. That could be used as a benchmark to calculate how much of an investment is required to have a healthy development cycle. If a community-maintained fork is created, it would need time and monetary investment similar to what Google is doing just to maintain and develop non-controversial features. Question is: Is this assessment sensible and if so, is the community able or willing to make this kind of investment?"

In general, though, most developers agreed with Ian. Here are a few responses from the same mailing list: "I just want to thank Ian for taking the time to write this. I've already got the idea that it worked that way, but my own deduction process, but it's good to have a confirmation from inside." "Thank you for writing your reply Ian. Since it's a rather long post I don't want to go through it point by point, but suffice it to say that I agree with most of what you've written."

Read Ian's post on Google Forums.

Is Golang truly community driven and does it really matter?
Go User Survey 2018 results: Golang goes from strength to strength, as more engineers than ever are using it at work
GitHub releases Vulcanizer, a new Golang Library for operating Elasticsearch


Julia 1.0 has just been released

Richard Gall
09 Aug 2018
3 min read
The release of Julia 1.0 has been eagerly anticipated, and it's finally here. At JuliaCon 2018 in London, the team got together to mark the project's landmark step. It has taken more than six years for Julia to hit this milestone. The language was first launched in February 2012, and since then it has steadily grown into a popular high-level dynamic programming language.

The project's aims have been hugely ambitious since the start. As the team said in a post back in 2012: "We want a language that's open source, with a liberal license. We want the speed of C with the dynamism of Ruby. We want a language that's homoiconic, with true macros like Lisp, but with obvious, familiar mathematical notation like Matlab. We want something as usable for general programming as Python, as easy for statistics as R, as natural for string processing as Perl, as powerful for linear algebra as Matlab, as good at gluing programs together as the shell. Something that is dirt simple to learn, yet keeps the most serious hackers happy. We want it interactive and we want it compiled."

Despite that level of ambition, the language hasn't quite managed to expand beyond its core strength, numerical computing. That could change with version 1.0.

What new features are in Julia 1.0?

The team behind Julia are keen to stress that Julia 1.0 offers greater stability than the language ever has. They explain in a blog post announcing the new release: "The single most significant new feature in Julia 1.0, of course, is a commitment to language API stability: code you write for Julia 1.0 will continue to work in Julia 1.1, 1.2, etc. The language is 'fully baked.' The core language devs and community alike can focus on packages, tools, and new features built upon this solid foundation."

There are also many other new features, including:
  • A new built-in package manager
  • Simplified scope rules
  • Improved consistency in all of Julia's APIs
  • A new canonical representation for missing values

You can find out more about the new features here.

Hang on... wasn't Julia 0.7 just released?

Yes, Julia 0.7 has been released alongside version 1.0. This was done "to provide an upgrade path for packages and code that predates the 1.0 release." Version 0.7 simply includes deprecation warnings that aren't included in version 1.0.

How to get started with Julia 1.0

If you're ready to get started with Julia 1.0, you can download it here. If you're currently using Julia 0.6 or earlier, it's advised that you start with the 0.7 release: the deprecation warnings in Julia 0.7 act as a guide through the upgrade process.

Swift is improving the UI of its generics model with the “reverse generics” system

Sugandha Lahoti
16 Apr 2019
4 min read
Last week, Joe Groff of the Swift Core Team published a post on the Swift forums discussing refinements to the Swift generics model, which was established by the Generics Manifesto almost three years ago. The post introduced changes to improve the UI of how generics work in the Swift language. The first part of this group of changes is the SE-0244 proposal, which introduces some features around function return values.

The SE-0244 proposal

The SE-0244 proposal addresses the problem of type-level abstraction for returns. Swift 5 has three existing generics features:

Type-level abstraction

The syntax of type-level abstraction is quite similar to generics in other languages, like Java or C#: users type out the function definitions, use angle brackets, and conventionally use T for a generic type, all of it happening at the function (or type) level. Each function using type-level abstraction has a placeholder type T, and each call site gets to pick which concrete type is bound to T, making these functions very flexible and powerful in a variety of situations.

Value-level abstraction

Value-level abstraction deals with individual variables. It is not concerned with making general statements about the types that can be passed into or out of a function; instead, developers need to worry only about the specific type of exactly one variable in one place.

Existential types

Many Swift libraries consist of composable generic components which provide primitive types along with composable transformations to combine and modify primitive shapes into more complex ones. These transformations may be composed by using existential types instead of generic arguments. Existential types are like wrappers or boxes for other types. However, they bring more dynamism and runtime overhead than desired. If a user wants to abstract the return type of a declaration from its signature, existentials or manual type erasure are the two choices. However, these come with their own tradeoffs.

Tradeoffs of the existing generics features

The biggest problem with the original Generics Manifesto is generalized existentials. Present existentials have a variety of use cases that could never be addressed. Although existentials would allow functions to hide their concrete return types behind protocols as implementation details, they are not always the most desirable tool for this job, because they don't allow functions to abstract their concrete return types while still maintaining the underlying type's identity in the client code. Also, Swift follows the tradition of similar languages like C++, Java, and C# in its generics notation, using explicit type variable declarations in angle brackets. However, this notation can be verbose and awkward, so improvements need to be made to the existing notations for generics and existentials.

Reverse generics

Currently, Swift has no way for an implementation to achieve type-level abstraction of its return values independent of the caller's control. If an API wants to abstract its concrete return type from callers, it must accept the tradeoffs of value-level abstraction. If those tradeoffs are unacceptable, the only alternative in Swift today is to fully expose the concrete return type. These tradeoffs led to the introduction of a new type system feature to achieve type-level abstraction of a return type. Coined "reverse generics" by Manolo van Ee, this system behaves similarly to a generic parameter type, but the underlying type is bound by the function's implementation rather than by the caller. This is analogous to the roles of argument and return values in functions: a function takes its arguments as inputs and uses them to compute the return values it gives back to the caller.

The process has already begun, with a formal review in progress on SE-0244: Opaque Result Types. This proposal covers the "reverse generics" idea and the some keyword in return types. "If adopted", says Tim Ekl, a Seattle-area software developer, "it would give us the ability to return a concrete type hidden from the caller, indicating only that the returned value conforms to some protocol(s)". He has also written an interesting blog post summarizing the discussion by Joe Groff on the Swift forums page.

Note: The content of this article is taken from Joe Groff's discussion. For extensive details, you may read the full discussion on the Swift forums page.

Swift 5 for Xcode 10.2 is here!
Implementing Dependency Injection in Swift [Tutorial]
Apple is patenting Swift features like optional chaining


Red Hat team announces updates to the Red Hat Certified Engineer (RHCE) program

Amrata Joshi
12 Apr 2019
3 min read
The Red Hat Certified Engineer (RHCE) certification program has certified skilled IT professionals for around 20 years now, and it has been one of the leading certification programs for Linux skills. As new technologies come up and industries evolve with them, the focus has shifted to hybrid cloud implementations. With this shift, automation has become an important skill for Linux system administrators to learn. So the team behind RHCE decided the program needed to evolve for Red Hat Certified Professionals.

What changes are expected?

In the updated RHCE program, the team is shifting the focus to automating Linux system administration tasks with the help of Red Hat Ansible Automation, and will also be changing the requirements for achieving an RHCE credential. With the upcoming release of Red Hat Enterprise Linux 8, the RHCE team will be offering a new course and a new certification exam.

Red Hat System Administration III: Linux Automation (RH294)

The team has designed this course for Linux system administrators and developers who automate provisioning, configuration, application deployment, and orchestration. Those taking the course will learn how to install and configure Ansible on a management workstation and will get a clear idea of how to prepare managed hosts for automation.

Red Hat Certified Engineer exam (EX294)

The new RHCE exam will focus on the automation of Linux system administration tasks using Red Hat Ansible Automation and shell scripting. Those who pass this new exam will become RHCEs.

What will remain the same?

Ken Goetz, vice president of Training and Certification at Red Hat, writes in a blog post, "One thing that we want to assure you is that this is not a complete redesign of the program." Candidates can still earn an RHCE by first passing the Red Hat Certified System Administrator exam (EX200) and then passing an RHCE exam while still being an RHCSA. The Red Hat Enterprise Linux 7 based RHCE exam (EX300) will remain available for a year after the new exam is released.

How does it impact candidates?

Current RHCEs: The RHCE certification is valid for three years from the date the candidate becomes an RHCE. The period can be extended by earning additional certifications that apply towards becoming a Red Hat Certified Architect in infrastructure. Candidates can renew the RHCE before it becomes non-current by passing the new RHCE exam (EX294).

Aspiring RHCEs: An RHCSA progressing towards becoming an RHCE can continue preparing for the Red Hat Enterprise Linux 7 version of the course and take the current RHCE exam (EX300) until June 2020. Alternatively, they can prepare for the new exam (EX294), based on the upcoming release of Red Hat Enterprise Linux 8.

Red Hat Certified Specialists in Ansible Automation: Those who are currently Red Hat Certified Specialists in Ansible Automation can continue to demonstrate their Ansible automation skills and knowledge by earning the RHCE via the new process.

Ken Goetz also writes, "We are aligning the RHCE program, and the learning services associated with that program, to assist individuals and organizations in keeping up with these changes in the industry." To know more about this news, check out Red Hat's blog post.

Red Hat Satellite to drop MongoDB and will support only PostgreSQL backend
Red Hat announces CodeReady Workspaces, the first Kubernetes-Native IDE for easy collaboration among developers
Red Hat drops MongoDB over concerns related to its Server Side Public License (SSPL)


Researchers highlight impact of programming languages on code quality and reveal flaws in the original FSE study

Amrata Joshi
07 Jun 2019
7 min read
Researchers from Northeastern University, University of Massachusetts, Amherst and Czech Technical University in Prague, published a paper on the Impact of Programming Languages on Code Quality which is a reproduction of work by Ray et al published in 2014 at the Foundations of Software Engineering (FSE) conference. This work claims to reveal an association between 11 programming languages and software defects in projects that are hosted on GitHub. The original paper by Ray et al was well-regarded in the software engineering community because it was nominated as the Communication of the ACM (CACM) research highlight. And the above mentioned researchers conducted a study to validate the claims from the original work. They used the experimental repetition method, which was partially successful and found out that association of 10 programming languages with defects is true. Then they conducted an independent reanalysis which revealed a number of flaws in the original study. And finally the results suggested that only four languages are significantly associated with the defects, and the effect size (correlation between two variables) for them is extremely small. Let us take a look at all the 3 researches in brief: 2014 FSE paper: Does programming language choice affect software quality? The question that arises from the study by Ray et al. published at the 2014 Foundations of Software Engineering (FSE) conference is, What is the effect of programming language on software quality? The results reported in the FSE paper and later repeated in the followup works are based on an observational study of a corpus of 729 GitHub projects that are written in 17 programming languages. For measuring the quality of code, the authors have identified, annotated, and tallied commits that are supposed to indicate bug fixes. 
The authors then fitted a Negative Binomial regression against the labeled data to answer the following research questions:

RQ1: Are some languages more defect prone than others?

The original paper concluded that "Some languages have a greater association with defects than others, although the effect is small." Haskell, Clojure, TypeScript, Scala, and Ruby were found to be less error-prone, whereas C, JavaScript, C++, Objective-C, PHP, and Python were more error-prone.

RQ2: Which language properties relate to defects?

The original study concluded that "There is a small but significant relationship between language class and defects. Functional languages have a smaller relationship to defects than either procedural or scripting languages." In other words, functional and strongly typed languages showed fewer errors, whereas procedural, unmanaged, and weakly typed languages induced more errors.

RQ3: Does language defect proneness depend on domain?

A mix of automatic and manual methods was used to classify projects into six application domains. The paper concluded that "There is no general relationship between domain and language defect proneness." The variation in defect proneness comes from the languages themselves, which makes the domain a less indicative factor.

RQ4: What's the relation between language and bug category?

The study concluded that "Defect types are strongly associated with languages. Some defect type like memory error, concurrency errors also depend on language primitives. Language matters more for specific categories than it does for defects overall." For memory, languages with manual memory management have more errors; Java stands out as the only garbage-collected language associated with more memory errors. For concurrency, languages such as Python and JavaScript have fewer errors than languages with concurrency primitives.
Experimental repetition performed to obtain the results of the original study

The original study used methods of data acquisition, data cleaning, and statistical modeling. The researchers' first objective was to repeat the analyses of the FSE paper and obtain the same results. They used an artifact provided by the original authors, containing 3.45 GB of processed data and 696 lines of R code for loading the data and performing the statistical modeling. In a repetition, a script generates results that are matched against those in the published paper. The researchers wrote new R scripts to mimic all of the steps of the original manuscript, and found it essential to automate the production of all tables, numbers, and graphs in order to iterate multiple times.

The researchers concluded that the repetition was only partly successful:

RQ1 produced small differences but qualitatively similar conclusions. The researchers were mostly able to replicate RQ1 based on the artifact provided by the authors, but found 10 languages with a statistically significant association with errors, instead of the 11 reported.
RQ2 could be repeated, but the researchers noted issues with the language classification.
RQ3 could not be repeated, as the code was missing and their reverse-engineering attempts failed.
RQ4 could not be repeated because of irreconcilable differences in the data, and because the artifact didn't contain the code that implemented the NBR for bug types.

The reanalysis confirms flaws in the FSE study

The researchers' second objective was to carry out a reanalysis of RQ1 of the FSE paper: are some languages more defect prone than others? The reanalysis differs from a repetition in that it proposes alternative data processing and statistical analyses to address methodological weaknesses of the original work.
The researchers again applied data processing, data cleaning, and statistical modeling. After their corrections, the p-values for Objective-C, JavaScript, C, TypeScript, PHP, and Python fall outside of the "significant" range of values, so 6 of the original 11 claims were discarded at this stage. Controlling the false discovery rate (FDR) increased the p-values slightly but did not invalidate additional claims; however, one additional language, Ruby, then lost its significance, and even Scala dropped out of the statistically significant set. (Recall that a small p-value (≤ 0.05) indicates strong evidence against the null hypothesis, which can then be rejected.)

In the table below, grey cells indicate disagreement with the conclusion of the original work: C, Objective-C, JavaScript, TypeScript, PHP, and Python. The reanalysis thus failed to validate most of the claims; the multiple steps of data cleaning and improved statistical modeling invalidated the significance of 7 of the 11 languages.

Image source: Impact of Programming Languages on Code Quality

The researchers conclude that the work by Ray et al. aimed to provide evidence for one of the fundamental assumptions in programming language research, namely that language design matters, but that the study has numerous problems that invalidate its key result. The paper reads, "Our intent is not to blame, performing statistical analysis of programming languages based on large-scale code repositories is hard. We spent over 6 months simply to recreate and validate each step of the original paper." The researchers' contribution provides a thorough analysis and discussion of the pitfalls associated with statistical analysis of large code bases. According to them, statistical analysis combined with large data corpora is a powerful tool that might answer even the hardest research questions, but the potential for error is enormous.
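The multiple-comparison control mentioned above is a standard step whenever many hypotheses (here, one per language) are tested at once. As an illustration (our own sketch, not the authors' R code, and with made-up p-values), the Benjamini-Hochberg FDR procedure can be implemented in a few lines:

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Return indices of hypotheses rejected under Benjamini-Hochberg FDR control."""
    m = len(pvalues)
    # Sort p-values ascending, remembering their original positions.
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest rank k such that p_(k) <= (k/m) * alpha;
    # reject the k hypotheses with the smallest p-values.
    k = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank / m * alpha:
            k = rank
    return sorted(order[:k])

# Hypothetical per-language p-values (illustrative, not from the paper).
pvals = [0.001, 0.009, 0.020, 0.049, 0.150, 0.700]
print(benjamini_hochberg(pvals))  # [0, 1, 2]
```

Note how the raw threshold of 0.05 would have accepted four of these hypotheses, while FDR control accepts only three; this is exactly the kind of adjustment that shrank the original paper's list of "significant" languages.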
The researchers further state that "It is only through careful re-validation of such studies that the broader community may gain trust in these results and get better insight into the problems and solutions associated with such studies."

Check out the paper On the Impact of Programming Languages on Code Quality for more in-depth analysis.

Samsung AI lab researchers present a system that can animate heads with one-shot learning
Researchers from China introduced two novel modules to address challenges in multi-person pose estimation
AI can now help speak your mind: UC researchers introduce a neural decoder that translates brain signals to natural sounding speech


OpenSSH 7.9 released

Prasad Ramesh
22 Oct 2018
3 min read
OpenSSH 7.9 has been released with new features and bug fixes. New features include support for signalling sessions and new client and server configuration options; bug fixes address spurious "invalid format" errors and bugs in closing connections.

New features in OpenSSH 7.9

Most port numbers may now be specified using service names from getservbyname(3), typically /etc/services.
The IdentityAgent configuration directive now accepts environment variable names. This adds support for using multiple agent sockets without having to use fixed paths.
Support is added for signalling sessions via the SSH protocol. Only a limited subset of signals is supported, and only for login or command sessions (not subsystems) that were not subject to a forced command via authorized_keys or sshd_config.
Support for "ssh -Q sig" is added to list supported signature options; "ssh -Q help" shows the full set of supported queries.
A CASignatureAlgorithms option is added for the client and server configs. It allows control over which signature formats are allowed for CAs to sign certificates. For example, this makes it possible to ban CAs that sign certificates using the RSA-SHA1 signature algorithm.
Key revocation lists (KRLs) may now revoke keys specified by SHA256 hash, and KRLs can be created straight from base64-encoded SHA256 fingerprints. This supports revoking keys using only the information contained in sshd(8) authentication log messages.

Bug fixes in OpenSSH 7.9

ssh(1), ssh-keygen(1): Avoid spurious "invalid format" errors when attempting to load PEM private keys with an incorrect passphrase.
sshd(8): On receiving a channel closed message from a client, close the stdout and stderr file descriptors at the same time. Processes no longer hang if they were waiting for stderr to close and were indifferent to the closing of stdin/stdout.
ssh(1): You can now set ForwardX11Timeout=0 to disable the untrusted X11 forwarding timeout and support X11 forwarding indefinitely. In previous versions, ForwardX11Timeout=0 was undefined.
sshd(8): When compiled with GSSAPI support, cache supported method OIDs regardless of whether GSSAPI authentication is enabled in the main section of sshd_config. This avoids sandbox violations when GSSAPI authentication is enabled later in a Match block.
sshd(8): No longer fail to close a connection when configured with a text key revocation list that contains a very short key.
ssh(1): Connections with a ProxyJump specified are treated the same as those with a ProxyCommand set with regards to hostname canonicalisation. This means that unless CanonicalizeHostname is set to 'always', the hostname should not be canonicalised.
ssh(1): Fixed a regression in OpenSSH 7.8 that could prevent public-key authentication using certificates hosted in an ssh-agent(1) or against sshd(8) from OpenSSH 7.8 or newer.

For more details, visit the OpenSSH website.

How the Titan M chip will improve Android security
IBM launches Industry's first 'Cybersecurity Operations Center on Wheels' for on-demand cybersecurity support
low.js, a Node.js port for embedded systems
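As an illustration of the new CASignatureAlgorithms option, a server configuration fragment like the following (a hypothetical sshd_config sketch of our own, with illustrative algorithm choices and file paths, not taken from the release notes) would restrict which algorithms a CA may use to sign accepted certificates, effectively banning RSA-SHA1-signed certificates:

```
# sshd_config (illustrative): only accept certificates whose CA signed
# them with one of these algorithms; ssh-rsa (RSA-SHA1) is excluded.
CASignatureAlgorithms ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,rsa-sha2-512
TrustedUserCAKeys /etc/ssh/user_ca.pub
```

The same directive is accepted in ssh_config on the client side, mirroring the release note's statement that the option exists for both client and server configs.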

Rust 2018 Edition Preview 2 is here!

Natasha Mathur
16 Aug 2018
2 min read
The Mozilla team announced Rust 2018 Edition Preview 2 today, the final release cycle before Rust 2018 goes into beta. The new release explores features such as the cargo fix command and NLL, along with other changes and improvements.

Rust is a systems programming language from Mozilla. It is a "safe, concurrent, practical language" that supports functional and imperative-procedural paradigms, and it provides better memory safety while still maintaining performance.

Let's have a look at the major features in Rust's 2018 Edition Preview 2.

The cargo fix command now comes built into Cargo. This command is used during migration, so building it in further streamlines the migration process. Speaking of migration, extensive effort has gone into improving and polishing the lints that help you migrate smoothly.
The module system changes are now broken into several smaller features with independent tracking issues: mod.rs is no longer needed for parent modules, extern crate is no longer needed for including dependencies, and crate is now supported as a visibility modifier.
NLL, or non-lexical lifetimes, is now enabled by default in migration mode. NLL improves the Rust compiler's ability to reason about lifetimes, removing most of the remaining cases where people commonly experience the borrow checker rejecting valid programs. As per the Rust team: "If your code is accepted by NLL, then we accept it -- if your code is rejected by both NLL and the old borrow checker, then we reject it -- if your code is rejected by NLL but accepted by the old borrow checker, then we emit the new NLL errors as warnings".
In-band lifetimes have been split up in the latest release.
Both rustfmt and the RLS have reached 1.0 "release candidate" status.

For more information, check out the official release notes.
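To see what NLL buys in practice, here is a minimal sketch of our own (not from the announcement) of the kind of code the old lexical borrow checker rejected but NLL accepts, because the shared borrow's lifetime now ends at its last use instead of at the end of the enclosing scope:

```rust
// Under the old borrow checker, `first` borrowed `scores` until the end of
// the function body, so the `push` below was rejected even though the borrow
// is never used again. NLL ends the borrow at its last use, so this compiles.
fn push_after_read(mut scores: Vec<i32>) -> Vec<i32> {
    let first = &scores[0];
    let first_val = *first; // last use of the shared borrow
    scores.push(first_val + 3); // mutable use is now allowed
    scores
}

fn main() {
    println!("{:?}", push_after_read(vec![1, 2, 3]));
}
```

The borrow checker's acceptance here is purely a compile-time change; the runtime behavior of already-valid programs is untouched.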
Multithreading in Rust using Crates [Tutorial]
Rust and Web Assembly announce 'wasm-bindgen 0.2.16' and the first release of 'wasm-bindgen-futures'
Warp: Rust's new web framework for implementing WAI (Web Application Interface)


ISO C++ Committee announces that C++20 design is now feature complete

Bhagyashree R
25 Feb 2019
2 min read
Last week, as per the schedule, the ISO C++ Committee met in Kona, Hawaii to finalize the feature set for the next International Standard (IS), C++20. The committee has announced that C++20 is now feature complete, and it plans to finish the C++20 specification at the upcoming meeting, scheduled for July 2019. Once the specification is complete, the Committee Draft will be sent out for review. Some of the features in this draft include:

Modules
With the introduction of modules, developers will no longer be required to separate their files into header and source parts. The committee has now fixed internal linkage escaping modules.

Coroutines
The committee has gone through the coroutines proposals and has decided to go ahead with the specification. According to the specification of this feature, three keywords will be added: co_await, co_yield, and co_return.

Contracts
Contracts are made up of preconditions, postconditions, and assertions. These act as a basic mitigation measure when a program goes wrong because of some mismatch of expectations between parts of the program. The committee is focused on refining the feature and has renamed expects/ensures to pre/post.

Concepts
The concepts library includes definitions of fundamental library concepts, which are used for compile-time validation of template arguments and for performing function dispatch on properties of types.

Ranges
The ranges library comes with components for dealing with ranges of elements, including a variety of view adapters.

To read the entire announcement, check out this Reddit thread.

Code completion suggestions via IntelliCode comes to C++ in Visual Studio 2019
How to build Template Metaprogramming (TMP) using C++ [Tutorial]
Mio, a header-only C++11 memory mapping library, released!


GitHub Octoverse: The top programming languages of 2018

Prasad Ramesh
19 Nov 2018
4 min read
After the GitHub Octoverse report last month, GitHub has released an analysis of the top programming languages of 2018 on its platform. There are various ways to rank the popularity of a programming language. The report published on the GitHub Blog uses the number of unique contributors to public and private repositories tagged with the appropriate primary language, as well as the number of repositories tagged with that language.

JavaScript is the top programming language by repositories

The largest number of repositories are created in JavaScript, and repository creation has risen steadily since 2012, around the time GitHub was housing nearly 1 million repositories in total. New JavaScript frameworks like Node.js were launched in 2009, making it possible for developers to write the client and server sides with the same code.

Source: GitHub Blog

JavaScript also has the most contributors

JavaScript tops the list of languages by number of contributors in public and private repositories. This holds for organizations of every size in all regions of the world.

New languages have also been on the rise on GitHub. In 2017, TypeScript entered the top 10 programming languages for all kinds of repositories across all regions. Projects like DefinitelyTyped, which help in using common JavaScript libraries with TypeScript, encourage its adoption.

Some languages have seen a decline in popularity. Ruby has sunk in the charts over the last couple of years. Even though the number of contributors in Ruby is on the rise, other languages like JavaScript and Python have grown faster. Newer projects are not likely to be written in Ruby, especially those owned by individual users or small organizations; such projects tend to use popular languages like JavaScript, Java, or Python.
Source: GitHub Blog

Languages by contributors in different regions

Across regions, there is not much variation in the languages used. Ruby is at the bottom for all regions. TypeScript ranks higher in South America and Africa than in North America and Europe; the reason could be that the developer communities there are relatively new, and indeed the repositories in Africa and South America are younger than those in North America and Europe.

Fastest growing languages by contributors

PowerShell is climbing the list. Go also continues to grow across repository types, ranking 7th overall and 9th for open source repositories. Statically typed languages that focus on type safety and interoperability, such as Kotlin, TypeScript, and Rust, are growing quickly.

So what makes a programming language popular on GitHub?

Three factors help top programming languages climb the ranks: type safety, interoperability, and being open source.

Type safety: There is a rise in static typing, Python excepted, because of the security and efficiency static typing offers individual developers and teams. TypeScript's optional static typing adds safety, and Kotlin offers greater interactivity while creating trustworthy, type-safe programs.
Interoperability: One of the reasons TypeScript climbed the rankings is its ability to coexist and integrate with JavaScript. Rust and Kotlin, also on the rise, find built-in audiences in C and Java, respectively. Python developers can call Python APIs directly from Swift, which displays its versatility and interoperability.
Open source: These languages are also open source projects with active commits and changes. Strong communities that contribute, evolve, and create resources for a language can positively impact its life.

For more details and charts, visit the GitHub Blog.

What we learnt from the GitHub Octoverse 2018 Report
Why does the C programming language refuse to die?
Julia for machine learning. Will the new language pick up pace?

Python serious about diversity, dumps offensive ‘master’, ‘slave’ terms in its documentation

Natasha Mathur
13 Sep 2018
3 min read
Python is set to change the "master" and "slave" terminology in its documentation and code, in response to people finding the terminology offensive. Victor Stinner, a Python developer at Red Hat, started a discussion titled "avoid master/slave terminology" on the Python bug tracker last week. The bug report discusses changing "master" and "slave" in the Python documentation to terms such as "parent", "worker", or something similar, based on complaints received "privately".

"For diversity reasons, it would be nice to try to avoid 'master' and 'slave' terminology which can be associated to slavery," mentioned Victor Stinner in the bug report.

Not every Python developer who participated in the discussion agreed with Victor Stinner. One developer, Larry Hastings, wrote: "I'm a little surprised by this. It's not like slavery was acceptable when these computer science terms were coined and it's only comparatively recently that they've gone out of fashion. On the other hand, there are some areas in computer software where 'master' and 'slave' are the exact technical terms (e.g. IDE), and avoiding them would lead to confusion."

Another Python developer, Terry J. Reedy, wrote: "To me, there is nothing wrong with the word 'master', as such. I mastered Python to become a master of Python. Purging Python of 'master' seems ill-conceived. Like Larry, I object to action based on hidden evidence."

Python is not the only project to face this scrutiny. The Redis community, Django, and Drupal all faced the same issue. Drupal changed "master" and "slave" to "primary" and "replica"; Django swapped "master" and "slave" for "leader" and "follower". To put an end to the debate about the use of this politically incorrect language, Guido van Rossum, who resigned as "Benevolent Dictator For Life" (BDFL) in July but is still active as a core developer, was pulled back in.
Guido ended the discussion by saying: "I'm closing this now. Three out of four of Victor's PRs have been merged. The fourth one should not be merged because it reflects the underlying terminology of UNIX ptys. There's a remaining quibble about 'pliant children' -> 'helpers' but that can be dealt with as a follow-up PR without keeping this discussion open."

The final commit on this is as follows:

bpo-34605, pty: Avoid master/slave terms
* pty.spawn(): rename master_read parameter to parent_read
* Rename pty.slave_open() to pty.child_open(), but keep an pty.slave_open alias to pty.child_open for backward compatibility
* os.openpty(), os.forkpty(): rename master_fd/slave_fd to parent_fd/child_fd
* Rename internal variables:
  * Rename master_fd/slave_fd to parent_fd/child_fd
  * Rename slave_name to child_name

For more information on the discussion, be sure to check out the official Python bug report.

Why Guido van Rossum quit as the Python chief (BDFL)
No new PEPS will be approved for Python in 2018, with BDFL election pending
Python comes third in TIOBE popularity index for the first time
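For context, the renamed identifiers all come from Python's pseudo-terminal support. A minimal sketch of our own (not from the bug report) using os.openpty(), whose returned descriptor pair the commit renames from master_fd/slave_fd to parent_fd/child_fd:

```python
import os

# os.openpty() returns the two ends of a pseudo-terminal pair; this is the
# descriptor pair the commit renames from master_fd/slave_fd to
# parent_fd/child_fd (Unix only).
parent_fd, child_fd = os.openpty()

# Bytes written to the child end can be read back from the parent end.
os.write(child_fd, b"hello")
data = os.read(parent_fd, 1024)
print(data)

os.close(parent_fd)
os.close(child_fd)
```

The behavior of the pair is unchanged by the patch; only the names exposed by the pty and os modules are affected.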


Golang 1.11 is here with modules and experimental WebAssembly port among other updates

Natasha Mathur
27 Aug 2018
5 min read
The Golang team released Golang 1.11 rc1 two weeks back, and now the much-awaited Golang 1.11 is here. Golang 1.11, released last Friday, comes with changes and improvements to the toolchain, runtime, and libraries, preliminary support for "modules", and an experimental port to WebAssembly.

Golang is a modern programming language from Google, developed back in 2009 for application development. Its simple syntax, concurrency support, and speed make it one of the fastest growing languages in the software industry. Let's now explore the new features in Golang 1.11.

Ports

WebAssembly
Go 1.11 adds an experimental port to WebAssembly (js/wasm), introducing the new GOOS value "js" and GOARCH value "wasm". Go files named *_js.go or *_wasm.go will now be ignored by Go tools except when those GOOS/GOARCH values are in use. The GOARCH name "wasm" is the official abbreviation of WebAssembly; the GOOS name "js" reflects the host environments, such as web browsers and Node.js, that execute the WebAssembly bytecode. Both of these host environments use JavaScript to embed WebAssembly.

RISC-V GOARCH values reserved
The main Go compiler does not yet support the RISC-V architecture, but Go 1.11 reserves the GOARCH values "riscv" and "riscv64", used by Gccgo, which does support RISC-V. This means that Go files named *_riscv.go will be ignored by Go tools except when those GOOS/GOARCH values are in use.

Other changes
Go 1.11 now requires OpenBSD 6.2 or later, macOS 10.10 Yosemite or later, or Windows 7 or later; support for previous versions of these operating systems has been dropped. It also offers support for the upcoming OpenBSD 6.4 release, although due to changes in the OpenBSD kernel, older versions of Go will not run on OpenBSD 6.4.
With Go 1.11, new environment variable settings have been added for 64-bit MIPS systems: GOMIPS64=hardfloat (the default) and GOMIPS64=softfloat. These let you decide whether to use hardware instructions or software emulation for floating-point computations. Go now uses a more efficient software floating-point interface on soft-float ARM systems (GOARM=5), and a Linux kernel configured with KUSER_HELPERS is no longer needed on ARMv7.

Toolchain

Modules
Go 1.11 adds preliminary support for a new experimental concept called "modules", an alternative to GOPATH with integrated support for versioning and package distribution. With the help of modules, developers are no longer limited to working inside GOPATH.

Package loading
A new package, golang.org/x/tools/go/packages, offers a simple API for locating and loading Go source code packages. It is not yet part of the standard library, but it effectively replaces the go/build package for many tasks.

Build cache requirement
Go 1.11 will be the last release to support disabling the build cache (introduced in Go 1.10) by setting the environment variable GOCACHE=off.

Improved debugging
The compiler in Go 1.11 produces improved debugging information for optimized binaries, including variable location information, line numbers, and breakpoint locations. This makes it possible to debug binaries compiled without -N -l. There is also experimental support for calling Go functions from within a debugger.

Compiler Toolchain
The compiler toolchain now supports column information in line directives. A new package export data format has also been introduced; it is transparent to end users, except for speeding up build times for large Go projects.
Runtime
The runtime in Go 1.11 now uses a sparse heap layout, so there is no longer a limit on the size of the Go heap (the limit was previously 512 GiB). This also fixes rare "address space conflict" failures in mixed Go/C binaries and in binaries compiled with -race.

Library changes
There are various minor updates and changes to the core library in Golang 1.11:

crypto: Operations such as ecdsa.Sign, rsa.EncryptPKCS1v15, and rsa.GenerateKey now randomly read an extra byte of randomness to ensure that tests don't rely on internal behavior.
debug/elf: Constants such as ELFOSABI and EM have been added.
encoding/asn1: Marshal and Unmarshal now support "private" class annotations for fields.
image/gif: Non-looping animated GIFs are now supported; they are denoted by a LoopCount of -1.
math/big: ModInverse now returns nil when g and n are not relatively prime.

Apart from these major updates, there are many other changes in Golang 1.11. For more information, be sure to check the official Golang 1.11 release notes.

Writing test functions in Golang [Tutorial]
How Concurrency and Parallelism works in Golang [Tutorial]
GoMobile: GoLang's Foray into the Mobile World